Getting Started with Hyper-V and the Scale Computing Cluster

Scale Computing
5225 Exploration Drive
Indianapolis, IN 46241
Contents

CHAPTER 1  Introduction to Hyper-V: BEFORE YOU START
    Revision History

CHAPTER 2  Creating and Updating LUNs and Targets
    Creating a New iSCSI Target
        General Tab in Create iSCSI Target Dialog Box
    Updating an Existing Target to an SPC-3 PR Compliance Enabled Target
    Revision History

CHAPTER 3  Common Mistakes
    Revision History

CHAPTER 4  Requirements for Working with Hyper-V
    Resources for Working with Hyper-V
    Hardware Requirements
    Network Requirements
    Storage Requirements
    Enabling Virtual IP Addresses
    Enabling SPC-3 PR Compliance on Existing Targets
    Revision History

CHAPTER 5  Network Adapters for Hyper-V
    External Virtual Networks
    Internal Virtual Networks
    Private Virtual Networks
    Dedicated Virtual Networks
    Revision History

CHAPTER 6  Install Hyper-V Role
    Install Hyper-V Role on Each Server
    Install the Failover Cluster Feature on Each Server
    Configuring Virtual Networks
    Revision History

CHAPTER 7  Using Hyper-V and Failover Clustering
    Before You Begin
    Network Infrastructure and Domain Account Requirements for a Two-Node Failover Cluster
    Preparing Storage on a Scale Computing Cluster
    Connect the Computers to the Networks
    Discover Targets on Hyper-V Servers
    Initialize a LUN
    Validate Cluster Configuration
    Create the Cluster
    Change Witness Disk Selection (Optional)
    Initialize a LUN as a Cluster Shared Volume
    Create a Virtual Machine
    Revision History

CHAPTER 8  Live Migration
    Configure Automatic Start Action for Your Virtual Machine
    Make the Virtual Machine Highly Available
    Bring the Virtual Machine Online
    Configure Cluster Networks for Live Migration
    Initiate Live Migration Using Failover Cluster Manager
    Test a Planned or Unplanned Failover
    Moving, Modifying, or Removing a Virtual Machine from Your Hyper-V Cluster
    Revision History

CHAPTER 9  Glossary
    Revision History

CHAPTER 10  Resources
    Revision History
List of Figures

2 Node Hyper-V Failover Cluster with CSV Enabled
Create an iSCSI Target Dialog Box
Virtual Network Manager Choice
Virtual Manager Page
New Virtual Network Page
Create iSCSI Target Dialog Box
iSCSI Initiator Properties Dialog Box
Disk with Offline Status
Initialize Disk Dialog Box
Storage Box Next to Disk Number
Configure Panel
Hyper-V Manager
Action Pane Menu Choices Under New
Specify Name and Location Page
CHAPTER 1 Introduction to Hyper-V: BEFORE YOU START

Hyper-V is a role in Windows Server 2008 R2 that provides you with tools and services for creating a virtualized environment. Setting up Hyper-V correctly is easiest when you are thoroughly aware of all the prerequisites for configuration. Because of this, Scale Computing strongly recommends that you take the time to review the requirements chapter; it is easy to overlook a requirement and discover you need it midway through configuration.

Assuming you meet the prerequisites stated in Chapter 4, Requirements for Working with Hyper-V, this guide provides a walkthrough for setting up a two-node Hyper-V cluster with failover clustering and Cluster Shared Volumes (CSV) enabled. It is assumed you are not using CHAP. You will confirm a correct setup by performing a live migration. Your final configuration will look something like the one shown in Figure 1-1, 2 Node Hyper-V Failover Cluster with CSV Enabled.
FIGURE 1-1. 2 Node Hyper-V Failover Cluster with CSV Enabled

In order to carry out the instructions provided in this guide, you must meet the following prerequisites:

- You have set up two servers with exactly matching, full installations of Windows Server 2008 R2.
- You have installed and configured your Scale Computing cluster and have access to the Scale Cluster Manager.
- You have appropriate installation software and licenses for your virtual machines (VMs).
- You have basic configuration information for defining your VMs, such as IP addresses, required memory, and disk sizes.
- You have a domain account with sufficient access rights to join a machine to the domain.
Revision History

This section contains information describing how this chapter has been revised.
CHAPTER 2 Creating and Updating LUNs and Targets

This chapter is provided as a refresher for how to work with iSCSI targets and LUNs on the Scale Computing cluster. Information about how to create a new target or update an existing one for use with Hyper-V failover clustering is provided in the following sections:

- Creating a New iSCSI Target
- Updating an Existing Target to an SPC-3 PR Compliance Enabled Target

Creating a New iSCSI Target

To create a new iSCSI target, click + below the Targets panel. This brings up the Create iSCSI Target dialog box as shown in Figure 2-1, Create an iSCSI Target Dialog Box.
FIGURE 2-1. Create an iSCSI Target Dialog Box

There are two tabs in the Create iSCSI Target dialog box: General and CHAP. For this walkthrough, you will not work with CHAP, only the General tab.

General Tab in Create iSCSI Target Dialog Box

Use the General tab to create new iSCSI targets. Take the following steps to add a new iSCSI target:
1 Assign the target a name in the Target Name field. Depending on your particular iSCSI initiator, you may also need to configure checksums (CRC) for Header and/or Data. (Checksum settings must match the Windows configuration.)

NOTE: You cannot begin a target name with a number.

2 Ensure that the Strict SCSI Compliance checkbox is turned on. If this checkbox is not on, your target may not work properly.

3 Turn on the SPC-3 PR Compliance checkbox. Targets must have SPC-3 PR Compliance enabled if you want to do failover clustering. Additionally, iSCSI initiators wishing to connect to targets with this feature turned on must be configured to connect via virtual IPs.

4 Add or remove IP addresses from the Access List as needed. Access List entries must be full IP addresses in dotted notation (for example, 192.168.0.101) or IP/CIDR address ranges. The following Access List entry would allow connections from hosts within the IP address range 192.168.0.1 to 192.168.0.254: 192.168.0.0/24. An entry of ALL allows all connections to the target.

5 Click Create iSCSI Target in the lower righthand corner of the dialog box to create your iSCSI target with the settings you have selected.

Updating an Existing Target to an SPC-3 PR Compliance Enabled Target

Scale Computing recommends updating existing targets so that they are SPC-3 PR Compliance enabled. Targets with SPC-3 PR Compliance enabled are required for failover clustering. To do this, take the following steps:

1 Disconnect initiators from all nodes.
2 Under CIFS/NFS/iSCSI, click iSCSI. The iSCSI Management screen appears.
3 Select the target you wish to update by highlighting it in the Targets table.
4 Click Modify. The Modify iSCSI Target dialog box appears.
5 Turn on the SPC-3 PR Compliance checkbox.
6 Click Modify iSCSI Target. Your changes are committed across the cluster.
7 Reconfigure iSCSI initiators to use virtual IP portals.
NOTE: SPC-3 PR Compliance enabled targets no longer have LAN addresses in their IQN.

8 Reconnect to the nodes.

Revision History

This section contains information describing how this chapter has been revised.
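The Access List format described in this chapter (an entry of ALL, a full dotted IP address, or an IP/CIDR range) and the target naming rule can be sanity-checked host side before you fill in the dialog box. The sketch below is illustrative only; the allowed characters after the leading letter are an assumption, since this guide only states that a target name cannot begin with a number.

```python
import ipaddress
import re

def valid_target_name(name):
    # Target names cannot begin with a number (per the NOTE above).
    # The remaining allowed characters are an assumption for illustration.
    return bool(re.match(r"^[A-Za-z][A-Za-z0-9._-]*$", name))

def valid_access_entry(entry):
    # An Access List entry is ALL, a full dotted IP, or an IP/CIDR range.
    if entry == "ALL":
        return True
    try:
        if "/" in entry:
            ipaddress.ip_network(entry, strict=False)
        else:
            ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False
```

For example, `valid_access_entry("192.168.0.0/24")` accepts the range used in step 4, while a partial address such as `192.168.0` is rejected.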
CHAPTER 3 Common Mistakes

This chapter details some common errors to avoid when setting up your Hyper-V cluster:

- Not thoroughly reviewing all the requirements for successful installation of Hyper-V. It is very easy to overlook something, so be sure to review requirements thoroughly.
- Not having proper credentials for joining a machine to the domain.
- Not using enough NICs. You must have a minimum of three NICs per server used in your Hyper-V cluster.
- Not matching the server installations exactly. If there is even a slight difference between installations of roles on your servers, you cannot set up your Hyper-V cluster.
- Not ensuring all servers to be used in your Hyper-V cluster can see the storage.
- Not planning your NICs and associated IP addresses in advance of installation.

Putting in extra time up front to ensure you meet requirements and understand all the steps will result in a smooth installation process. It is harder to fix issues in the middle of installation and configuration than it is to take care of them at the start.

Revision History

This section contains information describing how this chapter has been revised.
CHAPTER 4 Requirements for Working with Hyper-V

Requirements for setting up a two-node Hyper-V cluster with Cluster Shared Volumes (CSV) enabled are described in the following sections:

- Resources for Working with Hyper-V
- Hardware Requirements
- Network Requirements
- Storage Requirements
- Enabling Virtual IP Addresses
- Enabling SPC-3 PR Compliance on Existing Targets

Resources for Working with Hyper-V

This chapter provides a summary of requirements for creating a two-node Hyper-V cluster with failover clustering. For detailed versions of the information provided here, refer to these documents on the Microsoft website (http://technet.microsoft.com):

- Hyper-V Installation Prerequisites
- Hyper-V: Hyper-V and Failover Clustering
- Requirements for Using Cluster Shared Volumes in a Failover Cluster in Windows Server 2008 R2
Hardware Requirements

You must meet the following hardware requirements:

- An x64-based processor is required.
- Hardware-assisted virtualization (available in processors that include a virtualization option) is required.
- Hardware Data Execution Prevention (DEP) must be available and enabled. For details about whether a processor model supports Hyper-V, check with the manufacturer of the computer.
- If you modify settings for hardware-assisted virtualization or hardware-enforced DEP, change the setting in the BIOS and then turn off the power to the server and turn it back on. Simply restarting the computer may not apply the changes.
- Use matching computers with the same or similar features for your two nodes in the Hyper-V cluster.
- Selected hardware must be Certified for Windows Server 2008 R2 (for more details, refer to http://go.microsoft.com/fwlink/?linkid=139145).
- You must have the same operating system on each node in your Hyper-V cluster. The exception is that you can combine nodes running Hyper-V Server 2008 R2 in the same cluster with nodes running the Server Core installation. For information on these variants of Hyper-V, refer to Chapter 1, Introduction to Hyper-V: BEFORE YOU START.
- For all nodes in the Hyper-V cluster, you must use the same drive letter for the system disk.
- For all nodes in the Hyper-V cluster, you must enable the NT LAN Manager (NTLM) authentication protocol.
- For all nodes in the Hyper-V cluster, you must install the Hyper-V server role.

Network Requirements

Network requirements for failover clustering are as follows:

- Network hardware must be marked Certified for Windows Server 2008 R2.
- When using iSCSI, network adapters must be dedicated to either network communication or iSCSI, but not both. You need three network adapters per server at a minimum; four is ideal. For details regarding assignment for the ideal configuration, refer to Chapter 5, Network Adapters for Hyper-V.
- When using iSCSI, do not use network adapter teaming; iSCSI does not support it. You must use MPIO instead.
- Avoid single points of failure when connecting the two nodes of your Hyper-V cluster together.
- For each node in the Hyper-V cluster, you must provide a static IP address.
- For use with CSV, you must use more than one network adapter for each node of your Hyper-V cluster. Do not use the same network adapter for CSV communication as you use for virtual machine access and management. Let your failover cluster automatically choose the network for CSV communication.
- Make sure that Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks are enabled to support Server Message Block (SMB), which is required for CSV.
- If your Hyper-V cluster nodes connect to networks that should not be used by the Hyper-V cluster, then in the Failover Cluster Manager, in the properties for each of those networks, select Do not allow cluster network communication on this network.
- For each network in a failover cluster where CSV is enabled, all nodes must be on the same logical subnet. If you have a multisite Hyper-V cluster using CSV, it must use a VLAN.
- On all nodes, the drive letter for the system disk must be the same.

Storage Requirements

For failover clustering with CSV, you must ensure the following requirements are met on your Scale Computing cluster:

- Ensure targets on your Scale Computing cluster are SPC-3 PR Compliance enabled. If you want to use targets on the Scale Computing cluster that are not SPC-3 PR Compliance enabled with your Hyper-V cluster, you must enable SPC-3 PR Compliance on each
target prior to using it for this purpose. Follow the procedure provided in the section Enabling SPC-3 PR Compliance on Existing Targets.
- Create and configure your targets before you create your failover cluster. You will not be able to complete the Windows Server Validate a Configuration Wizard if your targets are not prepared ahead of time.
- Ensure that you configure your network adapters so that iSCSI and network communication are on separate network adapters. You cannot place iSCSI on the same network adapter as network communication.
- To use native disk support included in failover clustering, use basic disks, not dynamic disks.
- Format the witness disk using NTFS. While it is not required that you format all partitions with NTFS, it is recommended.
- You must use the Physical Disk resource type for CSV disks. Choose disk types carefully by considering how each disk will be used. You can use CSV disks for VHD files and configuration files. You cannot use a CSV disk if you plan to attach a physical disk to a virtual machine (called a pass-through disk).
- CSV disks are identified by path name. The path is the same when viewed from any node in the cluster; an example might be \ClusterStorage\Volume1. CSV disks do not use mount points or drive letters.
- Format partitions with NTFS. NTFS partitions are created host side on top of iSCSI LUNs residing on SPC-3 PR Compliance enabled targets located on your Scale Computing cluster. All you need to do on the Scale Computing cluster is ensure that you are using iSCSI and that you have created LUNs on targets that are SPC-3 PR Compliance enabled.
- LUNs created on your Scale Computing cluster should not exceed 500GB.

Enabling Virtual IP Addresses

You must add round robin DNS entries on your DNS server for virtual IP addresses. Virtual IP addresses are necessary because once you turn on SPC-3 PR Compliance for a target, the target is hosted on the virtual IPs rather than on the LAN address.
This is a two-part process:
1 On your DNS server, create a round robin entry that includes a hostname for your cluster; any legitimate DNS name will do. Provide at least three IP addresses (one virtual IP address per node in your cluster) that are in the same subnet as the LAN. These are the IP addresses that the Hyper-V servers connect to for their iSCSI connections.

2 On the cluster itself, under the virtual IP configuration there is a Round Robin DNS Entry field. Enter the same hostname you used on the DNS server in this field.

Enabling SPC-3 PR Compliance on Existing Targets

If you want to enable SPC-3 PR Compliance on existing targets on your Scale Computing cluster, take the following steps:

1 Disconnect initiators from all nodes.
2 Under CIFS/NFS/iSCSI, click iSCSI. The iSCSI Management screen appears.
3 Select the target you wish to update by highlighting it in the Targets table.
4 Click Modify. The Modify iSCSI Target dialog box appears.
5 Turn on the SPC-3 PR Compliance checkbox.
6 Click Modify iSCSI Target. Your changes are committed across the cluster.
7 Reconfigure iSCSI initiators to use virtual IP portals.
8 Reconnect to the nodes.

NOTE: SPC-3 PR Compliance enabled targets no longer have LAN addresses in their IQN.

NOTE: You cannot begin a target name with a number.
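The two requirements above (the round robin entry must resolve to one virtual IP per node, and those virtual IPs must be in the same subnet as the LAN) can be checked from any host with Python's standard library. The hostname and addresses below are hypothetical; substitute the entry you actually created on your DNS server.

```python
import ipaddress
import socket

def resolve_round_robin(hostname):
    # Return all IPv4 addresses behind a round robin DNS entry.
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

def portals_on_lan_subnet(portal_ips, lan_subnet):
    # Every virtual IP must sit in the same subnet as the LAN.
    network = ipaddress.ip_network(lan_subnet)
    return all(ipaddress.ip_address(ip) in network for ip in portal_ips)
```

For a hypothetical entry, `portals_on_lan_subnet(resolve_round_robin("hvportal.example.com"), "192.168.0.0/24")` should return True, and the resolved list should contain at least three addresses.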
Revision History

This section contains information describing how this chapter has been revised.
CHAPTER 5 Network Adapters for Hyper-V

A Hyper-V failover cluster with CSV enabled can successfully be set up with a minimum of three Network Interface Cards (NICs). However, Microsoft recommends four NICs. Generally, the way you want to dedicate your NICs is as follows:

- NIC #1: Parent partition (normal network)
- NIC #2: Cluster heartbeat (private network)
- NIC #3: Live Migration (private network)
- NIC #4: Virtual Switch (normal/trunked network)

This list does not account for iSCSI NICs, which should be dedicated to their role. Scale Computing recommends that you diagram your configuration and plan each NIC and its IP configuration in advance. For an idea of how the NICs are placed, refer to the image in Chapter 1, Introduction to Hyper-V: BEFORE YOU START.

The rest of this chapter details the different kinds of networks available for use, and when you want to use them with your Hyper-V failover cluster. There are four kinds of virtual networks you can create:

- External Virtual Networks
- Internal Virtual Networks
- Private Virtual Networks
- Dedicated Virtual Networks
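When diagramming your configuration in advance, it can help to write the per-node NIC plan down as data and check it against the rules in this chapter (one NIC per role, a dedicated iSCSI adapter, at least three non-iSCSI adapters). The plan below is a hypothetical sketch; the names, roles, and addresses are illustrative, not taken from a real configuration.

```python
# Hypothetical per-node NIC plan; names, roles, and addresses are
# illustrative only.
nic_plan = {
    "NIC1": {"role": "parent partition", "ip": "192.168.0.11"},
    "NIC2": {"role": "cluster heartbeat", "ip": "10.0.1.11"},
    "NIC3": {"role": "live migration", "ip": "10.0.2.11"},
    "NIC4": {"role": "virtual switch", "ip": None},  # trunked; no host IP
    "NIC5": {"role": "iscsi", "ip": "192.168.10.11"},
}

def check_plan(plan):
    # One NIC per role, a dedicated iSCSI adapter, and at least three
    # non-iSCSI adapters (four are recommended).
    roles = [nic["role"] for nic in plan.values()]
    problems = []
    if len(roles) != len(set(roles)):
        problems.append("a role is assigned to more than one NIC")
    if "iscsi" not in roles:
        problems.append("no dedicated iSCSI adapter")
    if len([r for r in roles if r != "iscsi"]) < 3:
        problems.append("fewer than three non-iSCSI adapters")
    return problems
```

An empty list from `check_plan` means the plan satisfies these basic rules; anything else names what to fix before installation.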
External Virtual Networks

Use external virtual networks when you want to allow communications between:

- Virtual machine to virtual machine on the same physical server
- Virtual machine to parent partition (and vice versa)
- Virtual machine to externally located servers (and vice versa)
- (Optional) Parent partition to externally located servers (and vice versa)

Internal Virtual Networks

Use internal virtual networks when you want to allow communications between:

- Virtual machine to virtual machine on the same physical server
- Virtual machine to parent partition (and vice versa)

An internal network is an external network without a binding to a physical NIC. You commonly use an internal network to build a test environment where you need network connectivity into the virtual machines from the parent partition itself.

Private Virtual Networks

Use private virtual networks when you want to allow communications between:

- Virtual machine to virtual machine on the same physical server

A private network is an internal network without a virtual NIC in the parent partition. A private network is often used when you need complete isolation of virtual machines from external and parent partition traffic.
Dedicated Virtual Networks

Dedicated virtual networks are networks where you dedicate a physical NIC to one type of traffic or service. They allow communication between:

- Virtual machine to virtual machine on the same physical server
- Virtual machine to externally located servers (and vice versa)

The parent partition cannot use a dedicated virtual network for its own communication.

Revision History

This section contains information describing how this chapter has been revised.
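The four network types in this chapter can be summarized as a small lookup table of which endpoint pairs each type permits, which is handy when deciding what to create for a given workload. This is only a restatement of the descriptions above in code form.

```python
# Allowed (source, destination) traffic for each virtual network type,
# as described in this chapter.
ALLOWED = {
    "external": {("vm", "vm"), ("vm", "parent"), ("parent", "vm"),
                 ("vm", "external"), ("external", "vm"),
                 ("parent", "external"), ("external", "parent")},
    "internal": {("vm", "vm"), ("vm", "parent"), ("parent", "vm")},
    "private": {("vm", "vm")},
    "dedicated": {("vm", "vm"), ("vm", "external"), ("external", "vm")},
}

def allows(network_type, source, destination):
    # True if the given network type permits traffic between the endpoints.
    return (source, destination) in ALLOWED[network_type]
```

For example, `allows("dedicated", "parent", "external")` is False, reflecting that the parent partition cannot use a dedicated virtual network for its own communication.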
CHAPTER 6 Install Hyper-V Role

If you have met all the requirements provided in Chapter 1, Introduction to Hyper-V: BEFORE YOU START and Chapter 4, Requirements for Working with Hyper-V, you are ready to install Hyper-V. Scale Computing strongly recommends that you thoroughly review both requirements chapters before continuing with the installation process.

Take the following steps to install Hyper-V on a full installation of Windows Server 2008 R2:

- Install Hyper-V Role on Each Server
- Install the Failover Cluster Feature on Each Server

Install Hyper-V Role on Each Server

To install the Hyper-V role on each server, take the following steps:

1 Click Start. The Windows Start Menu opens.
2 Point to Administrative Tools and click Server Manager. The Server Manager opens.
3 In the center panel of the screen, scroll down to Roles Summary.
4 In the Roles Summary area of the Server Manager window, click Add Roles. The Add Roles Wizard opens.
5 Navigate through the Add Roles Wizard. On the Select Server Roles page, click Hyper-V.
6 On the Create Virtual Networks page, click the network adapter(s) with virtual connection(s) you want to make available to virtual machines.
7 On the Confirm Installation Selections page, click Install.
8 To complete the installation, you must restart the computer. Click Close to finish the wizard; a dialog box appears asking whether you want to restart.
9 Click Yes to restart the computer.
Repeat this process for the other server.

Install the Failover Cluster Feature on Each Server

To install the failover cluster feature on a full installation of Windows Server 2008 R2, take the following steps:

1 If you recently installed Windows Server 2008 R2, the Initial Configuration Tasks interface is displayed. Under Customize This Server, click Add features. Then move to step 6.
2 If the Initial Configuration Tasks interface is not displayed and Server Manager is not running, click Start.
3 Click Administrative Tools.
4 Click Server Manager. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.)
5 In Server Manager, in the center panel, scroll down to the Features Summary section and click Add Features.
6 In the Add Features Wizard, click Failover Clustering.
7 Click Install.
8 Follow the instructions in the wizard to complete the installation of failover clustering. When the wizard finishes, close it.

Repeat this process for the other server.

Configuring Virtual Networks

There are four common types of virtual networks you can install on the server running Hyper-V:
- Private network - communication between virtual machines only
- Internal network - communication between the virtualization server and virtual machines
- External network - communication between a virtual machine and a physical network by creating an association to a physical network adapter on the virtualization server
- Dedicated network - communication between virtual machines or a virtual machine and an externally located server, where the network is devoted to one type of traffic

For more information about networks, refer to Chapter 5, Network Adapters for Hyper-V.

Take these steps to create a virtual network:

1 Open the Server Manager.
2 From the Actions menu, click Virtual Network Manager as shown in Figure 6-1, Virtual Network Manager Choice.

FIGURE 6-1. Virtual Network Manager Choice

3 The Virtual Network Manager page opens as shown in Figure 6-2, Virtual Manager Page. Under the Create virtual network section of the page, select the type of network you want to create.
FIGURE 6-2. Virtual Manager Page

4 Click Add. The New Virtual Network page appears as shown in Figure 6-3, New Virtual Network Page.
FIGURE 6-3. New Virtual Network Page

5 Type a name for the new network. Make sure the network name is the same across all Hyper-V nodes in the cluster; this is required to allow VMs to move between nodes. Review the other properties and modify them as appropriate.
6 Select the adapter you want to use from the list of choices provided.
7 Click OK to create the virtual network and close the Virtual Network Manager, or click Apply to create the virtual network and continue using Virtual Network Manager.
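The name-matching rule in step 5 is easy to get wrong when configuring nodes by hand. A quick way to audit it is to list each node's virtual network names and report any network that is not present everywhere; a minimal sketch, with hypothetical node and network names:

```python
def mismatched_networks(node_networks):
    # node_networks maps a Hyper-V node name to its virtual network names.
    # Returns the networks missing from at least one node; VM movement
    # between nodes requires identical names everywhere, so this list
    # should be empty.
    all_names = set().union(*node_networks.values())
    common = set.intersection(*(set(names) for names in node_networks.values()))
    return sorted(all_names - common)
```

For example, `mismatched_networks({"hv1": ["VSwitch1"], "hv2": ["VSwitch1", "TestNet"]})` flags `TestNet` as missing from hv1.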
Revision History

This section contains information describing how this chapter has been revised.
CHAPTER 7 Using Hyper-V and Failover Clustering

Before proceeding with this chapter, be sure you have met all the requirements for Hyper-V with failover clustering described in Chapter 4, Requirements for Working with Hyper-V. This chapter covers how to use Hyper-V with failover clustering and Cluster Shared Volumes (CSV) enabled:

- Before You Begin
- Network Infrastructure and Domain Account Requirements for a Two-Node Failover Cluster
- Preparing Storage on a Scale Computing Cluster
- Connect the Computers to the Networks
- Discover Targets on Hyper-V Servers
- Validate Cluster Configuration
- Create the Cluster
- Initialize a LUN as a Cluster Shared Volume
- Create a Virtual Machine

Before You Begin

To use Hyper-V with failover clustering, there are a few things you should be aware of up front:

- You must install and enable MPIO before building your cluster.
- You must have a minimum of two network adapters per node in your Hyper-V cluster.
- All installations of Hyper-V, all roles, must match exactly on all nodes in your Hyper-V cluster. A way to ensure this is to use answer files.
- Hyper-V does not natively support dynamic disks.
- All nodes in your Hyper-V cluster must be able to reach any storage you want to use in the cluster. You must create targets on your Scale Computing cluster with SPC-3 PR Compliance enabled.
- You can see available storage from any node in your Hyper-V cluster, but it may be labelled differently. For example, Disk 1 on one Hyper-V node may be Disk 10 on another Hyper-V node.
- Ensure that each server in your cluster has an identically configured virtual network. Name and connection type must match; otherwise the failover validation tests will fail and failover will not work.

Network Infrastructure and Domain Account Requirements for a Two-Node Failover Cluster

For a two-node failover cluster, you need the following network infrastructure and an administrative account with the following domain permissions:

- Network settings and IP addresses: Ideally, use identical network adapters with identical communication settings. Compare your adapters to the switch they connect to to ensure there is no settings conflict. For storage, Scale Computing clusters have specific requirements when it comes to provisioning IP addresses. For details, refer to the Scale Computing Storage Cluster Installation Guide, Chapter 3, Provisioning IP Addresses for a Cluster. You can also refer to the Concepts and Planning Guide for the Scale Computing Storage Cluster, Chapter 7, How a Scale Computing Cluster Works.
- DNS: Servers in your Hyper-V cluster must use Domain Name System (DNS) for name resolution.
- Domain role: All servers in your Hyper-V cluster must be in the same Active Directory domain. Ideally, all clustered servers should have the same domain role (either member server or domain controller). The recommended role is member server.
- Domain controller: It is recommended that your clustered servers be member servers. If they are, you need an additional server that acts as the domain controller in the domain that contains your failover cluster.
- Clients: As needed, you can connect one or more networked clients to the failover cluster that you create, and observe the effect on a client when you move or fail over the highly available virtual machine (VM) from one cluster node to the other.
- Account for administering the cluster: When you first create a cluster or add servers to it, you must be logged on to the domain with an account that has administrator rights and permissions on all servers in that cluster. The account does not need to be a Domain Admins account; it can be a Domain Users account that is in the Administrators group on each clustered server. In addition, if the account is not a Domain Admins account, the account (or the group that the account is a member of) must be given the Create Computer Objects and Read All Properties permissions in the domain.

Preparing Storage on a Scale Computing Cluster

Targets on the Scale Computing cluster must have SPC-3 Persistent Reservation (PR) Compliance enabled. You can create new targets or use existing targets as long as they are SPC-3 PR compliant. For information about how to create or update targets for SPC-3 PR compliance, refer to Chapter 2, Creating and Updating LUNs and Targets.

For the purposes of this guide, you must create at least one target and two LUNs. They are labelled in examples this way:

- Target: hvcluster1
- LUN for Quorum: quorum1 (1 GB)
- LUN for CSV: csv1 (400 GB)

To prepare storage for the walkthrough, take the following steps:

1 Open the Scale Cluster Manager.
2 From the menu on the left side of the screen, click CIFS/NFS/iSCSI.
3 From the menu choices that appear on the left side of the screen, click iSCSI. The iSCSI Management screen appears.
4 Under the Targets section, click the +. The Create iSCSI Target dialog box appears.
5 In the Target Name field, enter the target name hvcluster1.
6 In the Access List (IP of the initiator), enter the IP addresses of both Hyper-V servers that you will include in your cluster. The access list is populated with an example address, as shown in Figure 7-1, Create iSCSI Target Dialog Box. You can add or remove addresses by clicking the + or - button.

FIGURE 7-1. Create iSCSI Target Dialog Box
7 When you are done, click Create iSCSI Target.
8 Create one LUN that is 1 GB and name it quorum1.
9 Create another LUN that is 400 GB and name it csv1. For details about creating LUNs, refer to Chapter 2, Creating and Updating LUNs and Targets.

Connect the Computers to the Networks

To connect your servers to the network, take the following steps:

1 Use the instructions provided in the Scale Computing Storage Cluster Installation Guide to physically connect your servers to your storage.
2 For CSV and Live Migration traffic, you can use a crossover cable in a two-host cluster. If you have three or more hosts, a switch is required.
3 For the LAN (as discussed in the Scale Computing documentation), you need a switch.
4 Configure the NICs on the host side.

Discover Targets on Hyper-V Servers

Make sure you have completed the steps in Preparing Storage on a Scale Computing Cluster.

1 Go to one of the computers in your Hyper-V cluster.
2 Click Start to open the Start Menu.
3 Click iSCSI Initiator. The iSCSI Initiator Properties dialog box opens, as shown in Figure 7-2, iSCSI Initiator Properties Dialog Box.
FIGURE 7-2. iSCSI Initiator Properties Dialog Box

4 Click Discover Portal. The Discover Target Portal dialog box appears.
5 Enter a virtual IP address for your Scale Computing cluster.
6 Click OK.
7 Navigate to the Targets tab.
8 A list of targets available to your Hyper-V cluster appears in the Discovered targets table.
9 Use the options on this tab to connect to the target hvcluster1.
10 Select the MPIO checkbox.

The LUNs you created for target hvcluster1 (quorum1 and csv1) should now appear as disks in the Disk Management section of your Server Manager. Each LUN is presented as a numbered disk. For more information about these new disks, refer to Initialize a LUN.

Initialize a LUN

This section describes how to take the LUN quorum1, which appears as a disk in the Disk Management section of the Server Manager, and initialize it using the following steps:

1 Click Start on one of the servers (nodes) in your Hyper-V cluster.
2 Point to Administrative Tools and click Server Manager.
3 From the menu options on the left side of the Server Manager, click Storage.
4 Click Disk Management. All LUNs from discovered targets appear as disks in this section. Newly created LUNs appear as numbered disks with Offline status. Depending on how you have set up storage, a different disk number may be assigned to a LUN on each server in your Hyper-V cluster. Ensure that all servers can see the storage, or you will not be able to add it to your cluster. Figure 7-3, Disk with Offline Status, shows LUN quorum1 as Disk 1 and LUN csv1 as Disk 2.
FIGURE 7-3. Disk with Offline Status

5 Click Disk 1 to select it.
6 Open the Action menu at the top of the Server Manager, point to All Tasks, and click Online. Disk 1's status changes to Not Initialized.
7 Open the Action menu at the top of the Server Manager, point to All Tasks, and click Initialize Disk. The Initialize Disk dialog box opens, as shown in Figure 7-4, Initialize Disk Dialog Box.
FIGURE 7-4. Initialize Disk Dialog Box

8 Choose the Master Boot Record (MBR) option.
9 Click OK. The disk's status changes to Basic.
10 Click the storage box next to the disk number, highlighted with a gray box as shown in Figure 7-5, Storage Box Next to Disk Number.

FIGURE 7-5. Storage Box Next to Disk Number

11 At the top of the Server Manager, click the Action menu.
12 In the Action menu, point to All Tasks and click New Simple Volume. The New Simple Volume Wizard opens.
13 Click Next.
14 Follow the instructions in the Wizard. When you reach the Format Partition screen, select the radio button Format this volume with the following settings.
15 From the drop-down next to File system, choose NTFS.
16 In the Volume label field, type quorum1.
17 Click Next. Complete the rest of the Wizard.
18 The final screen of the Wizard summarizes the choices you made. If you want to alter anything, use the Back button to return to the appropriate screen and adjust the settings.

It is recommended that you wait to initialize the rest of your storage until after creating the cluster. If you initialize multiple LUNs, your Hyper-V failover cluster chooses a LUN to act as the witness disk, and it may not be the LUN you intend for that purpose. If you do choose to initialize all your storage at this time, be sure to review:

Initialize a LUN as a Cluster Shared Volume - A key difference is that you must format with the GPT rather than the MBR option, and you can only complete creation of a CSV after a Hyper-V failover cluster exists.

Change Witness Disk Selection (Optional) - You may need this section if you initialize all your storage now and do not like the LUN your Hyper-V failover cluster chooses as the witness disk.

Validate Cluster Configuration

Before creating your Hyper-V cluster, run a full validation test of your configuration. This helps you confirm that everything meets the requirements for failover clusters. Take the following steps to validate the failover cluster configuration:

1 Click Start to open the Start Menu.
2 Click Administrative Tools.
3 Click Failover Cluster Manager. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, then click Continue.)
4 In the Failover Cluster Manager screen, confirm that Failover Cluster Manager is selected.
5 In the center pane under Management, click Validate a Configuration.
6 Follow the instructions in the wizard to specify the servers. You must know the names of both servers, since they are not yet clustered. Run all tests to fully validate the configuration before creating a cluster. The Summary page appears after the tests run.
7 On the Summary page, click View Report and read the test results. To view Help topics that aid you in interpreting the results, click More about cluster validation tests.
8 Make changes as necessary until your configuration passes the validation tests.

Create the Cluster

To create a cluster from your computers, run the Create Cluster wizard by taking the following steps:

1 Click Start to open the Start Menu.
2 Click Failover Cluster Manager. The Failover Cluster Manager opens.
3 In the center pane of the Failover Cluster Manager, click Create a cluster.
4 A wizard prompts you to enter the servers to include in the new cluster, the name of the cluster (provide a DNS name and add it to DNS), and any IP address information that is not automatically supplied by your Dynamic Host Configuration Protocol (DHCP) settings. You may need to provide a network address, which is a static IP address for your cluster to use.
5 After the wizard runs, the Summary page appears. Click View Report to view a report of the tasks the wizard performed.
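Validation and cluster creation can also be scripted with the FailoverClusters PowerShell module, which is available on Windows Server 2008 R2 and later. The following is a minimal sketch, not a definitive procedure; the node names, cluster name, and IP address are examples only, so substitute values valid for your environment:

```powershell
# Load the failover clustering cmdlets (auto-loaded on newer versions).
Import-Module FailoverClusters

# Run the full validation test suite against both prospective nodes
# (example node names).
Test-Cluster -Node hvnode1, hvnode2

# Create the cluster with a DNS name and a static management IP address
# (example name and address).
New-Cluster -Name hvcluster -Node hvnode1, hvnode2 -StaticAddress 192.168.1.50
```

Running Test-Cluster first mirrors the wizard workflow above: the validation report is written to the current user's temp directory, and you should resolve any failures it lists before running New-Cluster.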
Change Witness Disk Selection (Optional)

When you first set up a failover cluster, the cluster wizard chooses a witness disk (quorum disk) for you. However, assume for a moment that it chose a disk you did not want to use. In this case, assume that a disk other than quorum1 was selected as the quorum disk and you would like to change the setup so that quorum1 is used as the quorum disk.

1 Click the name of your cluster. It is displayed under the Failover Cluster Manager item, one level in, in the menu on the left side of the Server Manager.
2 Click Action at the top of the Server Manager.
3 Point to More Actions and click Configure Cluster Quorum Settings.
4 By default, the Wizard chooses the best settings, so click Next until you arrive at the Configure Storage Witness screen.
5 On the Configure Storage Witness screen, select the box next to the disk you want to use as your witness disk (in this case, quorum1).
6 Click Next.
7 Navigate through to the end of the Wizard and click Finish.

It is recommended that you not change the settings the Wizard selects for you beyond choosing the witness disk. Alternatively, you can let your cluster choose the witness disk for you, in which case you do not need this section.

Initialize a LUN as a Cluster Shared Volume

A CSV is a standard cluster disk containing an NTFS volume that is made accessible for read and write operations by all nodes within the cluster. CSVs are only compatible with Hyper-V, and in most cases they are the best choice for storing your VMs. CSVs give VMs complete mobility throughout the Hyper-V cluster, as any Hyper-V node can be an owner, and
changing owners is easy. CSV is not required for live migration, but it is recommended.

To turn LUN csv1 (Disk 2) into a CSV disk, take the following steps:

1 Click Start on a computer from your Hyper-V cluster.
2 Point to Administrative Tools and click Server Manager.
3 From the menu options on the left side of the Server Manager, click Storage.
4 Click Disk Management. All LUNs from discovered targets appear as disks in this section. Newly created LUNs appear as numbered disks with Offline status. In this case, Disk 2 represents csv1. Depending on how you have set up storage, a different disk number may be assigned to a LUN on each server in your Hyper-V cluster. Ensure that all servers can see the storage, or you will not be able to add it to your cluster.
5 Click Disk 2 to select it.
6 Open the Action menu at the top of the Server Manager, point to All Tasks, and click Online. Disk 2's status changes to Not Initialized.
7 Open the Action menu at the top of the Server Manager, point to All Tasks, and click Initialize Disk. The Initialize Disk dialog box opens.
8 Choose the GUID Partition Table (GPT) option.
9 Click OK. The disk's status changes to Basic.
10 Click the storage box next to the disk number, highlighted with a gray box.
11 At the top of the Server Manager, click the Action menu.
12 In the Action menu, point to All Tasks and click New Simple Volume. The New Simple Volume Wizard opens.
13 Click Next.
14 Follow the instructions in the Wizard. When you reach the Format Partition screen, select the radio button Format this volume with the following settings.
15 From the drop-down next to File system, choose NTFS.
16 In the Volume label field, type csv1.
17 Click Next. Complete the rest of the Wizard.
18 The final screen of the Wizard summarizes the choices you made. If you want to alter anything, use the Back button to return to the appropriate screen and adjust the settings.
19 If everything looks correct, click Finish.
20 Under Failover Cluster Manager, click the name of your cluster. The Configure panel opens in the center pane, as shown in Figure 7-6, Configure Panel.

FIGURE 7-6. Configure Panel

21 From the list of choices in the menu, click Enable Cluster Shared Volumes.
22 Under the Failover Cluster Manager, a new option called Cluster Shared Volumes becomes available. Click Cluster Shared Volumes. The empty Cluster Shared Volumes panel appears in the center pane.
23 From the menu to the right of the Cluster Shared Volumes panel, under Actions, click Add storage. The Add Storage dialog box appears. Any storage you created (in this case, the disks representing quorum1 and csv1) will appear.
24 Select the box next to Cluster Disk 2. If you want to double-check that a disk is the one you want to use, click the + next to each disk to see how much storage is allocated, which can help you identify the correct disk/LUN.
25 Click OK.

You can make as many CSVs as you like using these steps. In the center panel, your new disk shows up labelled Cluster Disk 2. If you click the + beside it, a pathname appears. The
pathname will say something like C:\ClusterStorage\Volume1. Be aware that the CSV disk is not actually stored on your C: drive; this is just a mount point. The disk actually resides on your Scale Computing cluster and is accessible by all VMs.

Create a Virtual Machine

To create a VM, take the following steps:

1 Open the Hyper-V Manager: click Start.
2 Point to Administrative Tools, and then click Hyper-V Manager. The Hyper-V Manager opens, as shown in Figure 7-7, Hyper-V Manager.
FIGURE 7-7. Hyper-V Manager

3 From the Action pane, click New.
4 Click Virtual Machine, as shown in Figure 7-8, Action Pane Menu Choices Under New. The New Virtual Machine Wizard opens.
FIGURE 7-8. Action Pane Menu Choices Under New

5 Click Next.
6 On the Specify Name and Location page, name the VM you are creating and specify where you want to store it, as shown in Figure 7-9, Specify Name and Location Page.
7 Click Store the virtual machine in a different location. You can now browse to the storage you set up earlier and place your VM on one of the disks offered there. If you want to use live migration with your VMs, place the VMs on CSV disks.
8 Click Next.
FIGURE 7-9. Specify Name and Location Page

9 On the Memory page, specify enough memory to run the guest operating system you want to use on the VM (for example, 1024 MB to run Windows Server 2008 R2).
10 Click Next.
11 On the Networking page, connect the network adapter to an existing virtual network. For more information about the types of virtual networks available, refer to Chapter 5, Network Adapters for Hyper-V.
12 On the Connect Virtual Hard Disk page, click Create a virtual hard disk. You can change the name of the disk here if you wish.
13 Click Next.
14 On the Installation Options page, click Install an operating system from a boot CD/DVD-ROM.
15 Under Media, specify the location of the media.
16 Click Finish.

If you want to take further steps that make your VM highly available, do not turn on your VM. You must keep it turned off while performing the steps that make it highly available. For information about how to make your VM highly available, refer to Chapter 8, Live Migration.

Revision History

This section contains information describing how this chapter has been revised.
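On newer Windows Server versions that ship the Hyper-V PowerShell module (Windows Server 2012 and later; it is not included in the 2008 R2 release this walkthrough uses), VM creation can also be scripted. This is a hedged sketch, and the VM name, sizes, ISO path, and switch name are examples only:

```powershell
# Create a VM whose configuration and new VHD both live on a Cluster
# Shared Volume so it can be made highly available and live migrated
# (example name, memory size, and disk size).
New-VM -Name "demo-vm" `
       -MemoryStartupBytes 1GB `
       -Path "C:\ClusterStorage\Volume1" `
       -NewVHDPath "C:\ClusterStorage\Volume1\demo-vm\demo-vm.vhdx" `
       -NewVHDSizeBytes 40GB

# Attach installation media and connect the network adapter to an
# existing virtual switch (example ISO path and switch name).
Set-VMDvdDrive -VMName "demo-vm" -Path "C:\iso\install.iso"
Connect-VMNetworkAdapter -VMName "demo-vm" -SwitchName "External"
```

As with the wizard, leave the VM turned off if you intend to make it highly available in the next chapter.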
CHAPTER 8 Live Migration

Live migration enables you to move running virtual machines (VMs) from one node of your failover cluster to another without a dropped network connection or perceived downtime. To use it, you must meet the following requirements:

You must use Windows Server 2008 R2 or later.
You must successfully complete all steps provided in this guide prior to this chapter.
You must have a Hyper-V cluster with failover enabled.
You must have Cluster Shared Volumes (CSV) enabled.
You must have at least one VM set up.

This chapter walks you through setting up a live migration in the following sections:

Configure Automatic Start Action for Your Virtual Machine
Make the Virtual Machine Highly Available
Bring the Virtual Machine Online
Configure Cluster Networks for Live Migration
Initiate Live Migration Using Failover Cluster Manager
Test a Planned or Unplanned Failover
Moving, Modifying, or Removing a Virtual Machine from Your Hyper-V Cluster

Configure Automatic Start Action for Your Virtual Machine

A VM must have its automatic start action set to Nothing to work correctly with live migration. You can configure the automatic start action on a VM by taking the following steps:

1 Turn off your VM. It must be off during configuration.
2 Open the Hyper-V Manager.
3 Navigate to the VM you want to configure for live migration and click it.
4 Find the menu on the right side of the Hyper-V Manager that is titled with the name of the VM you clicked. From the choices provided under this menu, click Settings.
5 In the left menu, click Automatic Start Action.
6 Under What do you want this VM to do when the physical computer starts?, select Nothing.

Make the Virtual Machine Highly Available

To make a VM highly available, open the Failover Cluster Manager snap-in and take the following steps:

1 In the Failover Cluster Manager, if the cluster you want to configure is not displayed, right-click Failover Cluster Manager in the console tree.
2 In the center panel, under the Management section, click Manage a Cluster. A dialog box appears.
3 From the dialog box, select the cluster you want to manage from the drop-down menu and click OK. Alternatively, you can expand the menu under the Failover Cluster Manager and click the cluster you want to use.
4 Once you have selected your cluster, if the console tree under your choice is collapsed, expand the tree under the cluster that you want to configure.
5 From the expanded menu under the cluster, click Services and Applications.
6 On the right side of the screen, under the Actions menu, click Configure a Service or Application.
7 If the Before You Begin screen appears, click Next. (You may also want to review the material provided on this screen.)
8 On the Select Service or Application screen, click Virtual Machine.
9 Click Next.
10 On the Select Virtual Machine screen, select the check box next to the VM you want to make highly available.
11 Click Next.
12 Confirm your selection and click Next.
13 The wizard configures the VM for high availability and provides a summary. To view details of the configuration, click View Report.
14 To close the wizard, click Finish.

To verify that the VM is highly available, you can check two places in the console tree. You can expand Services and Applications, and the VM should be listed under Services and Applications. Otherwise, you can expand Nodes and select the node on which you created the VM; the VM should be listed under Services and Applications in the Results panel in the center.

Bring the Virtual Machine Online

To bring the VM online, take the following step:

1 Under Services and Applications, right-click the VM and then click Bring this service or application online.

Once your VM is online, you are ready to configure your cluster networks for live migration.

Configure Cluster Networks for Live Migration

Cluster networks are automatically configured for live migration. You can use the Failover Cluster Manager to complete the following procedure:

1 In the Server Manager, from the list of choices in the menu on the left side of the screen, click Failover Cluster Manager. The Failover Cluster Manager panel appears.
2 Click Manage a Cluster.
3 Specify the cluster you want to use.
4 Expand Services and applications.
5 In the menu on the left, select the clustered VM for which you want to configure the network for live migration.
6 Right-click the VM resource displayed in the center panel of the screen and click Properties.
7 Click the Network for live migration tab and select one or more cluster networks to use for live migration. Use the buttons on the right to move the cluster networks up or down so that a private cluster network is the most preferred. The default preference order is as follows: networks that have no default gateway should be located first; networks that are used by CSVs and cluster traffic should be located last.

Live migration is attempted in the order of the networks specified in the list of cluster networks. If the connection to the destination node using the first network is not successful, the next network in the list is used, until either the complete list is exhausted or there is a successful connection to the destination node using one of the networks.

Initiate Live Migration Using Failover Cluster Manager

To initiate live migration using the Failover Cluster Manager, take the following steps:

1 Click Failover Cluster Manager.
2 In the center panel displayed, click Manage a Cluster and choose the cluster you want.
3 Expand Nodes.
4 From the menu on the left side of the screen, select the Hyper-V node you want to move a clustered VM to using live migration.
5 Right-click the VM resource displayed in the center panel (be careful not to click a choice on the left side of the screen).
6 In the center of the screen, click Live migrate virtual machine to another node.
7 Select the node you want to move the VM to. When migration is complete, the VM is running on the new node.
8 To verify that the VM successfully migrated, confirm that it is listed under the new node (in Current Owner).

Test a Planned or Unplanned Failover

For detailed information about how to test a planned or unplanned failover, refer to the Microsoft document Hyper-V: Using Hyper-V and Failover Clustering - http://technet.microsoft.com/en-us/library/cc732181%28ws.10,printer%29.aspx.

Moving, Modifying, or Removing a Virtual Machine from Your Hyper-V Cluster

For detailed information about how to move, modify, or remove a VM, refer to the Microsoft document Hyper-V: Using Hyper-V and Failover Clustering - http://technet.microsoft.com/en-us/library/cc732181%28ws.10,printer%29.aspx.

Revision History

This section contains information describing how this chapter has been revised.
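The high-availability and live migration steps in this chapter can also be scripted with the FailoverClusters PowerShell module (Windows Server 2008 R2 and later). This is a sketch under the assumption that the clustered role name matches the VM name, which is the default when the role is created this way; the VM and node names are examples only:

```powershell
# Load the failover clustering cmdlets (auto-loaded on newer versions).
Import-Module FailoverClusters

# Make an existing, powered-off VM highly available by creating a
# clustered virtual machine role for it (example VM name).
Add-ClusterVirtualMachineRole -VMName "demo-vm"

# Live migrate the clustered VM role to another node (example node name).
Move-ClusterVirtualMachineRole -Name "demo-vm" -Node hvnode2 -MigrationType Live
```

As in the Failover Cluster Manager procedure, the migration succeeds only if the destination node can reach the VM's storage and the cluster networks configured for live migration.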
CHAPTER 9 Glossary

For users not familiar with Hyper-V, some of the terms in this guide may be confusing. This chapter provides brief definitions of terms you may encounter while reviewing this guide.

Cluster Shared Volume (CSV) - An option in Hyper-V that, when enabled, allows multiple clustered virtual machines (VMs) distributed across multiple nodes to access their Virtual Hard Disk (VHD) files at the same time, even if the VHD files are on a single disk (LUN) in the storage. This allows multiple VMs to fail over independently of one another and makes managing your environment easier than other configuration options.

Hyper-V - A low-cost bundle of tools and services you can use to create a virtualized environment.

Node and Disk Majority - A quorum configuration in which the nodes and the witness disk each contain copies of the cluster configuration, and the cluster has quorum as long as a majority of these copies are available.

Pass-through Disk - A physical disk attached to a VM. This type of disk has slightly better performance when working with a Hyper-V VM. However, you trade off portability, snapshotting, and thin provisioning when using one. Pass-through disks are best used in situations where you need to create a large disk (2 TB or larger).

Witness Disk - A disk in the cluster storage that is designated to hold a copy of the cluster configuration database. For a two-node Hyper-V cluster with failover clustering and CSV enabled, the quorum configuration is Node and Disk Majority, the default for a cluster with an even number of nodes.

Revision History

This section contains information describing how this chapter has been revised.
CHAPTER 10 Resources

If you want to provide feedback regarding this or other guides, send an email to documentation@scalecomputing.com.

If you need more information about how to work with Hyper-V, you may find the following documents useful:

Hyper-V Getting Started Guide - A guide from Microsoft that teaches you how to install the Hyper-V role and set up a virtual machine.

Hyper-V: Using Hyper-V and Failover Clustering - A guide from Microsoft that provides an overview of hardware and software requirements for working with Hyper-V.

Using Live Migration with Cluster Shared Volumes - A Microsoft guide that explains how to use Hyper-V and failover clustering to perform live migration of virtual machines.

Cluster Shared Volumes Support for Hyper-V - A Microsoft guide that discusses how Hyper-V supports Cluster Shared Volumes so you can place all virtual machines on the same LUN while still being able to fail over independently.

Revision History

This section contains information describing how this chapter has been revised.