Using EMC Celerra IP Storage with Microsoft Hyper-V over iSCSI
Best Practices Planning

Abstract

This white paper provides a detailed overview of the EMC Celerra network-attached storage product with Microsoft Hyper-V. Hyper-V is the next-generation hypervisor following Microsoft Virtual Server 2005. It is a completely redesigned bare-metal hypervisor that is built into 64-bit Windows Server 2008. This white paper covers the use of Celerra protocols and features in support of the Hyper-V virtualization environment.

January 2009
Copyright 2008, 2009 EMC Corporation. All rights reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners. Part Number H
Table of Contents
Executive summary
Introduction
  Audience
  Terminology
Celerra overview
  Data Movers
  Control Station
Basics of multi-protocol support on Celerra
  Native iSCSI
  CIFS and NFS
Microsoft Hyper-V
  Hyper-V storage features
    Virtual hard disk (VHD)
    Pass-through disk
  Hyper-V network concepts
    Management Console
    IP storage
    Virtual machine networking
Celerra storage provisioning for Hyper-V Server
  Naming of storage objects
  Business continuity (SnapSure)
Celerra iSCSI configuration for Hyper-V Server
  LUN configuration options
    NTFS formatted file system
    Pass-through disk
    Microsoft iSCSI Software Initiator within a virtual machine
Virtual provisioning
  File system virtual provisioning
  iSCSI virtual provisioning
Partition alignment
iSCSI LUN extension
Replication
Conclusion
References
Executive summary
With the release of Microsoft Hyper-V, virtual hardware support was extended to include the use of IP storage devices. This support enables Hyper-V environments to take full advantage of IP block storage using the iSCSI protocol. This provides a method to tie virtualized computing to virtualized storage, offering a dynamic set of capabilities within the data center and resulting in improved performance and system reliability. This white paper describes how best to utilize EMC Celerra in a Hyper-V environment.

Introduction
Virtualization has been a familiar term in technical conversations for many years now. We have grown accustomed to almost all things virtual, from virtual memory, to virtual networks, to virtual storage. In 2008, Microsoft released Hyper-V, its first bare-metal hypervisor-based technology, built into Windows Server 2008. The goal of Hyper-V is to provide a framework for companies to better partition and utilize their hardware resources, and to compete in the virtual server space with products like VMware ESX Server and Citrix XenServer. With the Hyper-V architecture, hardware-assisted, x64-based systems can run independent virtual environments, with different operating systems and resource requirements, within the same physical server. Several Microsoft products, such as System Center Virtual Machine Manager (SCVMM), Data Protection Manager (DPM), and Performance and Resource Optimization (PRO), can manage virtualized hardware resources. However, this white paper focuses on the Hyper-V Server product due to its support of Celerra IP storage. With Microsoft Hyper-V, Celerra Network Server offers iSCSI logical unit numbers (LUNs) as a storage option for creating virtual machines and virtual disks. Celerra also provides advanced scalability and reliability by combining some of the benefits of Fibre Channel with the flexibility and ease of use of IP storage.
Audience
System administrators, system architects, and anyone who is interested in how Hyper-V can be integrated with iSCSI storage will benefit from this white paper. Readers who are familiar with server virtualization products (such as Microsoft Virtual Server) and EMC Celerra will be best able to understand the concepts that are presented.

Terminology
Celerra Replication Service: A service that produces a read-only, point-in-time copy of a source file system. The service periodically updates the copy, making it consistent with the source file system.

Child Partition: A partition that has been created by the hypervisor in response to a request from the parent partition. There are a couple of key differences between a child partition and a parent or root partition. Child partitions are unable to create new partitions. Child partitions do not have direct access to devices (any attempt to interact with hardware directly is routed to the parent partition) or memory (when a child partition tries to access memory, the hypervisor/virtualization stack re-maps the request to different memory locations).

CIFS: Common Internet File System. The file-sharing protocol used in Windows that evolved out of the SMB (Server Message Block) protocol. Microsoft introduced a new version of the SMB protocol (SMB 2.0 or SMB2) with Windows Vista.

Data Execution Prevention (DEP): A security feature that is intended to prevent an application or service from executing code from a non-executable memory region.

Differencing Disk: A type of VHD file that is associated in a parent-child relationship with another disk that you want to leave intact. You can make changes to the data or operating system without affecting the parent disk, so that changes can be easily reverted.
Dynamically Expanding Disk: A type of VHD file that grows as data is stored to the disk, up to the size specified. The file does not shrink automatically when data is deleted.

Fail-Safe Network (FSN): A Celerra high-availability feature that extends link failover out into the network by providing switch-level redundancy. An FSN appears as a single link with a single MAC address and potentially multiple IP addresses.

Fixed Disk: A type of VHD file that uses the amount of space you specify for the disk size, regardless of how much data is saved to the virtual hard disk.

Guest operating system: An operating system that runs within a virtual machine or child partition.

Hypervisor: A virtualization platform that allows multiple virtual machines to run in isolation on a physical host at the same time. Also known as a virtual machine monitor.

Hyper-V: The next-generation hypervisor-based server virtualization technology in Windows Server 2008. Its current guest OS support includes Windows and Linux.

Hyper-V Manager: A Microsoft Management Console (MMC) based management interface that allows administrators to perform basic operations, such as provisioning or managing virtual machines. It is installed when the Hyper-V Server role is enabled on a Windows Server 2008 full installation. The Windows Server Core version does not have a GUI interface.

iSCSI target: An iSCSI endpoint, identified by a unique iSCSI name, that executes commands issued by the iSCSI initiator.

ISO image: A CD/DVD image that can be downloaded and burned to a CD/DVD-ROM, or mounted as a loopback device.

Link aggregation: A high-availability feature based on IEEE 802.3ad. Link aggregation uses multiple Ethernet network cables and ports in parallel to increase the aggregated throughput beyond the limits of any one single cable or port, and to increase the redundancy for higher availability.
LUN: For iSCSI on a Celerra Network Server, a logical unit is an iSCSI software feature that processes SCSI commands, such as reading from and writing to storage media. From an iSCSI host perspective, a logical unit appears as a disk device.

NFS: A distributed file system providing transparent access to remote file systems. NFS allows all network systems to share a single copy of a directory.

Parent Partition: A partition that is capable of calling the hypervisor and requesting that new partitions be created. In the first release of Hyper-V, the parent and root partitions are one and the same, and there can only be one parent partition.

Partition: A basic entity that is managed by the hypervisor. It is an abstract container that consists of isolated processor and memory resources with policies on device access. A partition is a lighter-weight concept than a virtual machine and could be used outside the context of virtual machines to provide a highly isolated execution environment.

Pass-through disk: Physical disks/LUNs that are directly attached to a virtual machine. A pass-through disk bypasses the file system layout at the parent partition. It provides disk performance on par with a physical disk, but it does not support some of the functionality of virtual disks, such as virtual machine snapshots.

RAID 1: A redundant array of inexpensive disks (RAID) type that provides data integrity by mirroring (copying) data onto another disk in the LUN. This RAID type provides the greatest data integrity at the greatest cost in disk space.

RAID 5: Data is striped across disks in large stripes. Parity information is stored so data can be reconstructed if needed. One disk can fail without data loss. Performance is good for reads, but slow for writes.

Replication failover: The process that changes the destination file system from read-only to read/write and stops the transmission of replicated data. The source file system, if available, becomes read-only.
Root Partition: The first partition on the computer. Specifically, this is the partition that is responsible for initially starting the hypervisor. It is also the only partition that has direct access to memory and devices.
SnapSure: On a Celerra system, a feature providing read-only point-in-time copies of a file system. These copies are also referred to as checkpoints.

System Center Virtual Machine Manager (SCVMM): A member of the System Center suite of server management products that provides unified management of physical and virtual machines, consolidation of underutilized physical servers, and rapid provisioning of new virtual machines by leveraging the expertise and investments in Microsoft Windows Server technology. SCVMM 2008 supports Hyper-V and provides compatibility with VMware VI3 through VirtualCenter integration.

Templates: A means to import virtual machines and store them as templates that can be deployed at a later time to create new virtual machines.

Virtual Hard Disk (VHD): The common virtualization file format that captures the entire virtual machine operating system and the application stack in a single file stored on a file system in the parent partition.

Virtual local area network (VLAN): A group of devices physically residing on different network segments but communicating as if they resided on the same network segment. VLANs are configured by management software and are based on logical versus physical connections for increased administrative flexibility.

Virtual machine (VM): A virtualized x86 or x64 PC on which a guest operating system and an associated application run.

Virtual machine configuration file: An XML file containing a virtual machine configuration that is created by the New Virtual Machine Wizard. It contains a virtual machine ID that uniquely identifies each VM.

Virtual network device: A combination of multiple physical devices defined by a single MAC address.

Virtual provisioned LUN: An iSCSI LUN without reserved space on the file system. File system space must be available for allocation whenever data is added to the LUN.
Celerra overview
The Celerra NAS product family covers a broad range of configurations and capabilities that scale from the midrange to the high end of networked storage. Although differences exist along the product line, there are some common building blocks. These basic building blocks are combined to fill out a broad, scalable product line with consistent support and configuration options. A Celerra frame provides n+1 power and cooling redundancy and supports a scalable number of physical disks, depending on the model you select and the needs of your solution. For the purposes of this white paper, there are two additional building blocks that are important to discuss:
- Data Movers
- Control Stations
Data Movers move data back and forth between the LAN and back-end storage (disks). The Control Station is the management station for the system. Celerra is configured and controlled via the Control Station. Figure 1 shows how Celerra works.
Figure 1. Celerra block diagram (Control Station on the management LAN; two Data Movers on the data LAN, connected to back-end storage)

Data Movers
A Celerra has one or more Data Movers installed in its frame. A Data Mover can be thought of as an independent server running EMC's optimized NAS operating system, DART (data access in real time). Each Data Mover has its own network ports and network identity, and also has its own connections to back-end storage. In many ways, a Data Mover operates as an independent server, bridging the LAN and the back-end storage disk array. Multiple Data Movers are grouped together as a single system for high availability and ease of use. For high availability, Celerra supports a configuration in which one Data Mover can act as a standby for one or more active Data Movers. When an active Data Mover fails, the standby can be booted quickly to take over the identity and storage of the failed device. For ease of use, all Data Movers in a cabinet are logically grouped together for administration as a single system through the Control Station.

Control Station
The Control Station is the single point of management and control of a Celerra frame. Regardless of the number of Data Movers or disk drives in the system, all administration of the system is done through the Control Station. Control Stations not only provide the interface to configure Data Movers and back-end storage, but also provide heartbeat monitoring of the Data Movers. It is important to note that if a Control Station is inoperable for any reason, the Data Movers continue to operate normally. However, the Celerra architecture provides an option for a redundant Control Station to support continuous management for an increased level of availability. The Control Station runs a version of the Linux OS that EMC has optimized for Celerra and NAS administration. The diagram in Figure 1 shows a Celerra system with two Data Movers.
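The active/standby relationship described above can be illustrated with a small conceptual sketch. This is not Celerra code; the class and function names are ours, and the model only captures the idea that a standby mover assumes the failed mover's identity:

```python
class DataMover:
    """Toy model of a Celerra Data Mover (illustrative only)."""
    def __init__(self, name, network_identity):
        self.name = name
        self.network_identity = network_identity
        self.healthy = True

def fail_over(active, standby):
    """If the active mover has failed, the standby boots and takes over
    the failed mover's network identity, as described above."""
    if not active.healthy:
        standby.network_identity = active.network_identity
        return standby
    return active

primary = DataMover("server_2", "10.0.0.10")
standby = DataMover("server_3", None)
primary.healthy = False                 # simulate a Data Mover failure
serving = fail_over(primary, standby)
print(serving.name, serving.network_identity)   # the standby now serves 10.0.0.10
```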
The Celerra NAS family supports up to 14 Data Movers, depending on the product model.
Basics of multi-protocol support on Celerra
Celerra provides access to block and file data using iSCSI, CIFS, and NFS. These storage protocols provide standard TCP/IP network services as depicted in the topology diagram of Figure 2.

Figure 2. Celerra iSCSI topology (network servers acting as iSCSI initiators or hosts, and network clients using CIFS, NFS, and other protocols, connected to EMC Celerra)

Native iSCSI
Celerra delivers a native iSCSI solution. This means that when deployed, all the connected devices have TCP/IP interfaces capable of supporting the iSCSI protocol. A simplified view of this topology is shown in Figure 3. No special hardware, software, or additional configuration is required. The administrator only needs to configure the two endpoints: the iSCSI target on the Celerra and the iSCSI host initiator on the server. This option is easier to configure and manage than a typical iSCSI bridging deployment. A bridging solution requires additional hardware that acts as the bridge between the Fibre Channel network connected to the storage array and the TCP/IP network connected to the host initiator. The addition of the bridge adds hardware cost and configuration complexity that is not necessary when deploying iSCSI on Celerra. With a bridging deployment, you must configure the iSCSI target on the storage array, the iSCSI host initiator on your server, and both corresponding connections on the bridge.
Figure 3. Native iSCSI (Celerra and an iSCSI host connected over a TCP/IP LAN)

CIFS and NFS
Common Internet File System (CIFS) and Network File System (NFS) are remote file-sharing protocols offered by the Celerra network storage system. Celerra provides support for CIFS and NFS servers through one or more Data Movers, allowing CIFS and NFS clients to safely access shared storage across the network. Celerra provides added functionality to the CIFS and NFS services by offering support for snapshot copies as well as integrated replication and backup technologies. The core component providing the CIFS and NFS services is the Celerra Data Mover, or blade server.

Microsoft Hyper-V
Microsoft Hyper-V is virtualization software that provides server consolidation by allowing several instances of similar and dissimilar operating systems to run as virtual machines on one physical machine. This cost-effective, highly scalable virtual machine platform offers advanced resource management capabilities. Hyper-V minimizes the total cost of ownership (TCO) of computing infrastructure by:
- Increasing resource utilization.
- Decreasing the number of servers and all associated costs.
- Maximizing server manageability.
There are two requirements on the hardware platform. First, the server must support and enable hardware-assisted virtualization technology such as Intel VT or AMD-V. Second, the hardware must support and enable Data Execution Prevention (DEP). This means enabling the XD (execute disable) bit on the Intel platform, or the NX (no execute) bit on AMD. The Hyper-V feature is enabled as a server role in an x64 version of the Windows Server 2008 Standard, Enterprise, or Datacenter editions. When Hyper-V is enabled, the Windows 2008 environment becomes the parent partition, which is where the guest OS is created, device drivers are installed and updated, and the management interface is accessed.
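The two hardware prerequisites above can be checked programmatically before enabling the Hyper-V role. The sketch below scans a CPU-flags string (of the kind exposed by tools such as /proc/cpuinfo on Linux) for the relevant bits; the sample flag strings are illustrative only:

```python
def hyperv_capable(cpu_flags):
    """Return True if the flags show hardware-assisted virtualization
    (Intel VT appears as 'vmx', AMD-V as 'svm') plus DEP support
    (the XD/NX bit appears as 'nx')."""
    flags = set(cpu_flags.split())
    has_virt = "vmx" in flags or "svm" in flags
    has_dep = "nx" in flags
    return has_virt and has_dep

print(hyperv_capable("fpu vme nx vmx sse2"))   # Intel VT plus XD bit -> True
print(hyperv_capable("fpu vme sse2"))          # no virtualization support -> False
```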
A guest OS is created in the context of a child partition, which interacts with the parent partition using the hypercall API via the VMBus, the communication channel between the virtual service provider (VSP) in the parent and the virtual service client (VSC) in the child. Prior to loading the parent and child partitions, the Hyper-V hypervisor is first loaded directly on top of the hardware, allowing virtual machines to run above the virtualization layer provided by the hypervisor. Figure 4 shows the relationship between each component in an architectural diagram.
Figure 4. Architecture of Microsoft Hyper-V

The parent partition runs one of the two variations of Windows 2008: the full Server installation or Server Core, which is a stripped-down Windows version that includes only a command-line interface (CLI). Hyper-V servers are managed using the GUI Hyper-V Manager that comes with Windows Server 2008. In the case of Server Core, there is no local GUI support, so Hyper-V Manager has to be started remotely on another management server. In addition to the Hyper-V Manager, which performs basic management tasks, advanced features such as Quick Migration and VM cloning can be performed using an optional Microsoft management product, System Center Virtual Machine Manager (SCVMM). This allows an administrator to centrally manage a number of Hyper-V servers, as well as a library of virtualization objects. Note that SCVMM is a product that needs to be purchased and licensed separately.

Hyper-V storage features
There are two ways to provision storage for virtual machines: virtual hard disk or pass-through disk.

Virtual hard disk (VHD)
VHD is a disk image format implemented by Microsoft. Its specification is made available to third parties so other hypervisors, such as XenServer, can leverage this technology. VHDs are created as .vhd files that reside on the native host file system of the Hyper-V Server and support a wide range of storage types including iSCSI, or theoretically any other file system recognized and accessible by Windows 2008 Server.
There are three types of VHDs that can be created with Hyper-V:
- Dynamically expanding: This is thin provisioning, where the .vhd file grows as data is stored to the disk, up to the size you specify. The .vhd file does not shrink automatically when data is deleted.
- Fixed size: The .vhd file is pre-allocated to the amount of space you specify for the disk size, regardless of how much data is saved to the virtual hard disk.
- Differencing: This type of disk is associated in a parent-child relationship with another disk that you want to leave intact. You can make changes to the data or operating system without affecting the parent disk, so that changes can be easily reverted.
Depending on the disk type, Microsoft provides a utility, the Edit Virtual Hard Disk Wizard, that can perform a variety of maintenance operations. A dynamically expanding disk can be compacted, converted, or expanded. Compact shrinks a .vhd file by removing blank space that remains when data is deleted from the disk. Convert causes a dynamic disk to become a fixed-size disk by copying the contents. Expand simply increases the capacity of the virtual disk. A fixed-size disk can only be converted to a dynamic disk or expanded. A differencing disk can be compacted, or merged, where the changes stored in the differencing disk are merged into the parent disk or another disk. A virtual disk can be inspected (choose the Inspect Disk option in the Actions pane in Hyper-V Manager) to determine its type, location, and actual or maximum sizes. Because a differencing disk depends on its parent disk, if the parent disk's file is renamed, the differencing disk will not be able to locate the parent, resulting in a broken chain. An option is given to reconnect the chain by manually identifying the correct parent. Unless care is taken in naming the VHD files, it can be very difficult to locate the right parent disk out of a multitude of disks.
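The space accounting of a dynamically expanding VHD — growing on writes up to its maximum size, never shrinking on deletes until compacted — can be modeled with a short sketch. This is a toy model, not the VHD format itself; sizes are in GB and the class name is invented for illustration:

```python
class DynamicVhd:
    """Toy model of a dynamically expanding VHD's space accounting."""
    def __init__(self, max_gb):
        self.max_gb = max_gb
        self.allocated_gb = 0    # space the .vhd file occupies on the host
        self.used_gb = 0         # live data inside the virtual disk

    def write(self, gb):
        if self.used_gb + gb > self.max_gb:
            raise ValueError("disk full")
        self.used_gb += gb
        # the .vhd file grows as blocks are allocated on demand
        self.allocated_gb = max(self.allocated_gb, self.used_gb)

    def delete(self, gb):
        self.used_gb -= gb       # the file size does NOT shrink automatically

    def compact(self):
        # the Compact operation removes the blank space left by deletions
        self.allocated_gb = self.used_gb

vhd = DynamicVhd(max_gb=100)
vhd.write(40)
vhd.delete(30)
print(vhd.allocated_gb)   # still 40: deletion does not shrink the file
vhd.compact()
print(vhd.allocated_gb)   # 10: compacting reclaims the blank space
```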
When a virtual disk is created via the New Virtual Hard Disk Wizard, users are given a choice to create a blank disk, or to copy the contents of a physical disk (such as \\.\PHYSICALDRIVE0). There is no option in the wizard to copy only a partition within a physical disk, nor is there a utility to perform a physical disk copy after the wizard is finished. It should be noted that VHD files are limited to 2040 GB in size, 8 GB short of 2 TB. This should be taken into consideration when provisioning a single large disk for a VM; consider using a pass-through disk, which does not have this limitation.

Pass-through disk
A pass-through disk is an alternative to VHD whereby VMs have direct access to the physical disk, similar to Raw Device Mapping (RDM) with VMware. Pass-through is only applicable to block devices such as iSCSI or FC. There is more information on this disk type in the section "LUN configuration options."

Best practice: In an I/O-intensive environment, such as an OLTP/DSS database workload, it is strongly recommended to use fixed-size VHDs or pass-through disks to achieve the best performance. Do not use dynamically expanding disks for a production workload, as a performance penalty is imposed by data blocks being allocated on demand.
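The 2040 GB VHD ceiling and the fixed/pass-through best practice above amount to a simple decision rule, sketched here. The helper name is ours, not a Microsoft API:

```python
VHD_MAX_GB = 2040   # VHD format limit, 8 GB short of 2 TB

def choose_disk_type(size_gb, io_intensive=False):
    """Pick a virtual disk type following the guidance above."""
    if size_gb > VHD_MAX_GB:
        return "pass-through"                # a VHD cannot represent this size
    if io_intensive:
        return "fixed VHD or pass-through"   # avoid on-demand allocation cost
    return "fixed VHD"

print(choose_disk_type(2048))                     # larger than 2040 GB
print(choose_disk_type(500, io_intensive=True))   # OLTP/DSS workload
```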
Hyper-V network concepts
The Hyper-V architecture requires multiple network interfaces configured in the parent partition in order to support the connectivity requirements of the management console, storage back end, and virtual machines. The primary network interface usages are:
- Management Console: This interface provides remote management services to the parent partition of the Hyper-V server.
- IP Storage: This interface is used for all IP storage traffic to establish iSCSI sessions.
- Virtual machine: Connections will be made available to the guest operating systems via an external virtual network/switch.

Best practice: It is best to create a separate gigabit interface on a different subnet for each network usage to isolate traffic (for security and performance reasons).

Management Console
Remote Desktop is disabled by default in the parent partition. Make sure the feature is enabled and the RDP connection is unblocked by the Windows firewall to allow remote administration of the parent partition.

IP storage
Depending on the choice of storage protocol, EMC recommends slightly different network topologies and HA configurations. When iSCSI is used to store virtual hard disks (VHD files) or create pass-through disks, it is best to take advantage of the Multiple Connections per Session (MCS) feature of the Microsoft iSCSI Initiator. For step-by-step instructions on configuring MCS with Celerra, refer to the section "Celerra iSCSI configuration for Hyper-V Server." MCS provides load-balancing capabilities and eliminates the need to aggregate network ports using port channeling. It requires two network ports on the Hyper-V server, which are assigned two separate subnets that match those on the Data Mover side to avoid unnecessary routing. Having two network switches connected through an uplink eliminates a single point of failure in this topology, as illustrated in Figure 5.
Figure 5. Celerra iSCSI networking with Hyper-V (the Hyper-V server and the Celerra Data Mover each connect to subnets A and B through two switches joined by an uplink)
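The load-balancing effect of MCS across the two subnets can be illustrated with a round-robin sketch: I/O is spread over the session's active connections, and when one path fails the surviving connection carries all traffic. This is a conceptual model only, not the initiator's actual scheduling algorithm:

```python
from itertools import cycle

def distribute(io_requests, connections):
    """Round-robin I/O over the session's active connections,
    mimicking MCS load balancing across two subnets."""
    active = [c for c in connections if c["up"]]
    rr = cycle(active)
    counts = {c["name"]: 0 for c in active}
    for _ in io_requests:
        counts[next(rr)["name"]] += 1
    return counts

conns = [{"name": "subnet-A", "up": True}, {"name": "subnet-B", "up": True}]
print(distribute(range(100), conns))   # split 50/50 across both paths

conns[1]["up"] = False                 # simulate losing one switch/path
print(distribute(range(100), conns))   # subnet-A now carries all 100
```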
Best practice: Use one or more gigabit network interfaces to access IP storage targets.

Virtual machine networking
Three types of virtual machine networks can be created with Hyper-V: external, internal, and private networks. An external network is the only virtual network type that requires binding to a physical network adapter so that virtual machines can access a public network. Virtual networks are configured using Virtual Network Manager within Hyper-V Manager (see Figure 6).

Figure 6. Virtual Network Manager wizard

To configure an external network, select an adapter from a list of available physical interfaces. Once chosen, the physical interface becomes a pseudo adapter that runs the Microsoft Virtual Network Switch Protocol, as shown in Figure 7. The interface is listed in the Network Connections window, but it is hidden from the output of the ipconfig command since TCP/IP is disabled by default for this type of interface.
Figure 7. Virtual Network Switch adapter properties

In Figure 7, note that all checkboxes for TCP/IP protocols are cleared. Although an IP address may be assigned to this interface, it is generally not a good idea: static IP information will be overwritten and completely removed if the virtual network is re-created.

Best practice: Do not share physical network interfaces that are used by external VM networks with the management console or IP storage, to avoid conflicts between them.

In addition to the pseudo adapter created along with a virtual network, you will also notice that Windows creates a new interface (see Figure 8) through which the VM network traffic travels. Note that the only selected items are link-layer topology discovery-related protocols. Although an IP address may be assigned to this interface, it should not be, for the same reason that applies to the pseudo adapter.

Best practice: It may be difficult to distinguish one interface from another if there are multiple interfaces, especially when new interfaces are added during virtual network creation. Be sure to rename and label each interface in Network Connections to make it easier to identify their designated purposes.
Figure 8. Network adapter properties

VLANs
It is a recommended practice that the network traffic from Celerra be segmented through a private LAN using either a virtual LAN or a dedicated IP SAN switch. Based upon the throughput requirements of the virtual machines, you may need to configure more interfaces for additional network paths to the Celerra Data Mover. VLANs allow for secure separation and QoS of the Ethernet packets carrying Hyper-V Server I/O. Their use is a recommended configuration practice for Hyper-V Server. Hyper-V supports various VLAN options with 802.1Q. The VLAN tag can be applied to the virtual switch network interface in order to limit the scope of network traffic on that virtual machine network interface. The Celerra network stack also provides support for 802.1Q VLAN tagging. A virtual network interface (VLAN) on Celerra is a logical association that can be assigned to either a physical or a logical network device, such as an aggregated link or a Fail-Safe Network interface. VLAN tagging on Celerra occurs above the Layer 2 hardware address so that the Data Mover can support multiple physical interfaces for high availability and load balancing. Since both Celerra and Hyper-V support 802.1Q, an environment in which the storage network has been segmented with VLANs is completely supported.

Best practice: Segment network traffic for IP storage through a private LAN using either a VLAN or an independent IP SAN switch.
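Since both ends speak 802.1Q, a frame on the storage VLAN simply carries a 4-byte tag after the source MAC address. The sketch below builds that tag from the standard 802.1Q layout (TPID 0x8100 followed by the 16-bit TCI); it is generic Ethernet code, not Celerra- or Hyper-V-specific:

```python
def tag_frame(frame, vlan_id, priority=0):
    """Insert an 802.1Q tag (TPID 0x8100 + TCI) after the two MAC
    addresses (first 12 bytes) of an Ethernet frame."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID out of range")
    tci = (priority << 13) | vlan_id          # PCP (3 bits), DEI=0, VID (12 bits)
    tag = b"\x81\x00" + tci.to_bytes(2, "big")
    return frame[:12] + tag + frame[12:]

frame = bytes(12) + b"\x08\x00" + b"payload"   # dst+src MACs, EtherType, data
tagged = tag_frame(frame, vlan_id=100, priority=5)
print(tagged[12:16].hex())   # 8100a064 -> TPID 0x8100, PCP 5, VID 100
```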
Jumbo frames
Celerra supports jumbo Ethernet frame sizes up to 9,000 bytes. Depending on the NIC capability and its Windows device driver support, jumbo frames should be enabled whenever possible to reduce host CPU utilization and improve network efficiency on the network interface(s) used for IP storage. Refer to the NIC vendor documentation for details on jumbo frame support and how to configure it in Windows.

Celerra storage provisioning for Hyper-V Server
With the support for IP storage in Hyper-V Server, EMC Celerra has become a preferred platform for provisioning storage used for virtual machines. Celerra iSCSI support provides an option for configuring network storage for the Hyper-V servers. Celerra with CLARiiON or Symmetrix back-end arrays provides Hyper-V Server with highly available, RAID-protected storage. The use of Celerra as a storage system does not preclude the use of Fibre Channel devices. Hyper-V Server supports the use of Fibre Channel devices along with iSCSI. When Hyper-V Server has been configured using different classes of shared storage, VM migration can be employed to rebalance virtual disks based on the needs of the application. This offers a method to relocate a virtual machine among the different tiers of storage and balance the needs of the application across the storage systems. Storage is configured for the Hyper-V Servers through the Celerra Management interface. Celerra provides mirrored (RAID 1) and striped (RAID 5) options for performance and protection of the devices used to create Hyper-V volumes. Celerra Automatic Volume Manager (AVM) runs an internal algorithm that identifies the optimal location of the disks that make up the file system. Storage administrators are only required to select the storage pool type and desired capacity in order to establish a file system that can be presented to Hyper-V for use as an iSCSI storage location.
The choice of which storage and RAID algorithm to use is largely based upon the throughput requirements of your applications or virtual machines. RAID 5 provides the most efficient use of disk space with good performance to satisfy the requirements of most applications. RAID 1 provides the best performance at the cost of the additional disk capacity needed to mirror all of the data in the file system. In testing performed within EMC labs, RAID 5 was chosen for both virtual machine boot disk images and the virtual disk storage used for application data. Understanding the application and storage requirements within the computing environment will help to identify the appropriate RAID configuration for your servers. EMC NAS specialists are trained to translate those requirements into the correct storage configuration. The virtual disks are assigned to a virtual machine and are managed by the guest operating system just like a standard SCSI device. For example, a virtual disk would be assigned to a virtual machine running on a Hyper-V server. In order to make the device useful, a guest operating system would be installed on one of the disks. The format of the virtual disk is determined by the guest OS or install program. One potential configuration would be to present an iSCSI LUN from Celerra to a Hyper-V server, which would use the NTFS file system laid out on the iSCSI disk to store the VHD files for a newly defined virtual machine. In the case of a Windows guest, the VHD would be formatted as an NTFS file system. Additional virtual disks used for applications could be provisioned from one or more Celerra iSCSI LUNs and formatted as NTFS by the Windows guest OS.

Naming of storage objects
When establishing a Hyper-V environment with Celerra, careful consideration should be given to the naming of the storage objects.
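The capacity side of the RAID 1 versus RAID 5 trade-off described above is easy to quantify. A sketch, with illustrative disk counts and sizes:

```python
def usable_capacity(raid, disks, disk_gb):
    """Usable space for the two RAID options Celerra offers here."""
    if raid == "RAID1":
        return disks // 2 * disk_gb      # every byte is mirrored on a second disk
    if raid == "RAID5":
        return (disks - 1) * disk_gb     # one disk's worth of space holds parity
    raise ValueError(raid)

print(usable_capacity("RAID1", 8, 500))   # 2000 GB usable from 4000 GB raw
print(usable_capacity("RAID5", 8, 500))   # 3500 GB usable from 4000 GB raw
```

The numbers show why the document calls RAID 5 the most space-efficient choice and RAID 1 the costlier, mirror-everything option.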
Providing descriptive names for the file systems, iSCSI targets, and volume labels in Windows can establish valuable information for ongoing administration and troubleshooting of the environment. Prudent use of labels that include the storage system from which they are configured is helpful in management and troubleshooting during maintenance.

Best practice: Incorporating identifying elements (for example, iSCSI target names) into the volume label, as well as annotating with the name of the Celerra being used, will ease future management and configuration tasks.
Business continuity (SnapSure)

The use of Celerra checkpoints and Celerra Replicator assists with the ongoing management of the file system. Depending on the state of the guest OS and applications, snapshots provide a crash-consistent image of the file systems that contain operating system and application virtual disks. Celerra snapshots can be used to create point-in-time images of the file systems containing VHD files for OS and application data. The snapshot image can be used for near-line restores of virtual disk objects, including individual virtual machines and their snapshots. The snapshot file system provides a secondary image of all Hyper-V file objects and can be integrated into NDMP backup processes or copied to a second Celerra for disaster recovery purposes.

Celerra iSCSI configuration for Hyper-V Server

Unlike Windows 2003 and prior versions, which require installing the iSCSI software initiator as an add-on component, Windows 2008 has a native iSCSI software initiator (accessible from the Control Panel) that can be configured in the Hyper-V parent partition to provision storage for virtual machines. The configuration of iSCSI first requires that the Hyper-V Server have a network connection configured for IP storage and that the iSCSI service be enabled. This Hyper-V IP storage interface configuration used for iSCSI was described more fully in the Hyper-V network concepts section of this white paper. To establish an iSCSI session, the following steps must be completed:

1. Enable the iSCSI client service on the Hyper-V Server and ensure the Windows firewall allows outbound iSCSI traffic (TCP port 3260) to pass through.
2. Configure the iSCSI software initiator on the Hyper-V Server.
3. Configure iSCSI LUNs and mask them to the IQN of the software initiator defined in the parent partition. If using a hardware initiator, you will need to identify the IQN of that device for masking.
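Steps 1 and 2 above can also be performed from the command line. A hedged sketch using the built-in sc, netsh, and iscsicli utilities follows; the target portal address (10.1.1.50) and the firewall rule name are placeholders.

```shell
rem Step 1: set the Microsoft iSCSI Initiator service to start automatically, then start it.
sc config msiscsi start= auto
net start msiscsi

rem Allow outbound iSCSI traffic (TCP port 3260) through the Windows firewall.
netsh advfirewall firewall add rule name="iSCSI outbound" dir=out action=allow protocol=TCP remoteport=3260

rem Step 2: add the Celerra iSCSI target portal and list the targets it presents.
iscsicli QAddTargetPortal 10.1.1.50
iscsicli ListTargets
```

The IQN needed for step 3 (LUN masking on the Celerra) is shown on the General tab of the iSCSI Initiator Properties dialog or by running iscsicli with no arguments.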
Prior to configuring the Hyper-V Server for iSCSI, you must ensure that the Windows firewall has been modified to enable the iSCSI client in the parent partition. This allows the client to establish sessions with the iSCSI target on Celerra.

Best practice: To ensure the iSCSI connections are persistent across server reboots, select the checkbox Automatically restore this connection when the computer starts when configuring how the iSCSI target is logged on.
Once an iSCSI connection is established, the connection count defaults to one, as shown in Figure 9. To enable multiple connections per session (MCS), click Connections and add a second connection, as shown in Figure 10 on page 19. Notice that the two connections are established via two separate subnets and the load-balance policy is set to Round Robin. You will see that the connection count is incremented to two once you click OK to return to the Target Properties page.

Figure 9 iSCSI Target Properties
Figure 10 Configuring multiple connections per iSCSI session

Best practice: For best performance, configure two iSCSI connections per session via two separate subnets, and choose the Round Robin load-balance policy.

Once the client has been configured, you need to ensure that the Celerra has been configured with at least one iSCSI target and LUN. The Celerra Management iSCSI Wizard can be used to configure an iSCSI LUN, as shown in Figure 11 on page 20. Knowing the IQN of the Hyper-V parent partition or hardware initiator will allow you to mask the LUN to the host for further configuration. LUNs are provisioned through Celerra Manager from the Celerra file system and masked to the iSCSI Qualified Name (IQN) of the iSCSI software initiator assigned to the Hyper-V parent partition. The IP Storage network interface is used to establish the iSCSI session with the Celerra iSCSI target.
Figure 11 Celerra Manager New iSCSI LUN Wizard LUN mask assignment

After completing the configuration steps, open Disk Management in the parent partition and identify the LUNs that have been configured for the Hyper-V server. If the LUNs are not there, you may need to rescan disks to refresh the window.

LUN configuration options

There are three methods of leveraging iSCSI LUNs in a Hyper-V environment: an NTFS formatted file system, a pass-through disk, and a software initiator (generally the Microsoft software initiator) within the virtual guest OS. Each of these methods is best suited to specific use cases discussed later in this white paper.

NTFS formatted file system

Once the Celerra iSCSI LUNs are presented to the Hyper-V Server in Disk Management, you may bring the raw disk online, create a simple volume, format it as NTFS, and assign it a drive letter. Be sure to use a meaningful volume label to help identify the LUNs in the future. The new disk drives can now be used to store the VM configuration files, snapshots, and VHD files. Ideal customer use cases for an NTFS formatted file system are as follows:

- Storage for general virtual machines without I/O-bound workloads, such as application servers and Active Directory domain controllers
- Virtual machines that don't require application-level I/O consistency when using native Hyper-V snapshots
- Virtual machines that are ideally backed up using VSS-based backup software

NTFS is an excellent choice where various virtual machine LUNs do not require specific I/O performance envelopes, or where ease of provisioning and use are paramount. When Celerra iSCSI LUNs are used with a Hyper-V Server, advanced Celerra features such as virtually provisioned LUNs and dynamic LUN extension are fully supported.
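The online/format/assign sequence described above can be scripted with diskpart rather than performed in the Disk Management GUI. The following is a hedged sketch in which the disk number, volume label, and drive letter are examples; verify the disk number in your environment before running it.

```shell
rem Save as prepare_lun.txt and run: diskpart /s prepare_lun.txt
rem Select the newly presented Celerra iSCSI LUN (disk number is an example).
select disk 2
rem Bring the disk online (on Windows Server 2008 RTM the command is simply "online").
online disk
attributes disk clear readonly
rem Create a single partition, format it NTFS, and assign a descriptive label and letter.
create partition primary
format fs=ntfs label="CEL01-HyperV-VMs" quick
assign letter=V
```

Embedding the Celerra name in the volume label, as recommended in the naming section earlier, makes the LUN easy to identify later.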
Pass-through disk

Unlike virtual disks that are encapsulated in VHD files on a file system, pass-through disks bypass the file system layer and are presented directly to virtual machines as raw physical devices. Because the VM needs exclusive access to the disk, the disk must be taken offline in the parent partition to avoid unexpected behavior resulting from interference between the parent and child partitions. Figure 12 shows how a pass-through disk is configured by selecting Physical hard disk in the VM settings. Note that the option is dimmed if the disk in the parent partition is not offline.

Figure 12 Pass-through disk creation in VM settings

A pass-through disk allows SCSI commands to be passed from the guest OS to the SCSI target. The benefit is that management utilities that interface with storage platform-based replication tools can be used to create snaps and replicate iSCSI devices at the array level. Also, because a pass-through disk does not reside on a file system like a VHD, it is not bound by the 2040 GB size limit of the VHD format. On the other hand, pass-through disks lose some of the benefits of VHDs, such as VHD snapshots and dynamically expanding or differencing VHDs. Another limitation is that, unlike a VHD-hosting file system, which can be used to create multiple virtual disks, a pass-through disk can be configured only as a single virtual disk.

Pass-through disks are ideal when a LUN has specific performance needs. Since a pass-through disk has a one-to-one virtual disk to iSCSI LUN mapping, it isn't shared by multiple virtual machines as occurs with VHD. Pass-through disks are also appropriate when application-integrated replication (for example, Microsoft Exchange 2003 and 2007 VSS) is required. Ideal customer use cases for pass-through disk are as follows:

- Exchange and SQL Server log and database LUNs (boot and binaries can be stored in VHD)
- Environments where application-integrated replication is required

Pass-through disks allow more control of configuration and layout. However, this control and performance tuning imply marginally higher complexity than with VHDs. Additionally, virtual machines using pass-through disks cannot be backed up using the VSS-based backup method, nor can they be included in a VM-based snapshot. The use of pass-through disks provides the guest OS with the ability to pass SCSI commands directly to the storage device presented by the parent partition. In the case of Celerra, this option is available only if the device is an iSCSI LUN. One of the motivations for using this device type is when Celerra-level iSCSI replication is required. The primary difference between a pass-through disk and a file system that hosts VHDs is the number of devices that are supported by each. While a file system provides a pool of storage that can be used to create many virtual disks and virtual machines, the pass-through device is presented directly to the VM as a single disk device, so it cannot be subdivided.

Microsoft iSCSI Software Initiator within a virtual machine

Using the Microsoft Software Initiator within a guest OS is the same as using it on a physical server. The Microsoft Software Initiator is layered upon the network stack (in this case the guest OS virtual network), handling all iSCSI tasks in software, as shown in Figure 13.
Figure 13 Microsoft iSCSI Software Initiator access to a Celerra iSCSI target LUN

This use case is similar to the pass-through case in that the iSCSI LUN has a one-to-one mapping with the LUN configuration and layout on the physical storage, and the performance envelope can be controlled specifically. However, the primary rationale for this configuration is that it is the only storage model of the three that can support automated handling of LUN addition and removal within the guest OS. The other methods (NTFS over iSCSI, pass-through) require manual configuration in Hyper-V Manager. This capability is critical for mount hosts (for application-consistent replicas) where LUNs will be automatically and dynamically added and removed.
The following are ideal customer use cases for using the Microsoft iSCSI Software Initiator within the virtual machine:

- Mount host for LUN replicas taken using the VSS application interface (for streamed backup)
- Dynamic presentation of SQL Server databases for test and development use

Virtual provisioning

With virtual provisioning (also called thin provisioning) provided through Celerra file systems and iSCSI LUNs, storage is consumed in a more pragmatic manner. Virtual provisioning allows for the creation of storage devices that do not preallocate backing storage capacity for virtual disk space until the virtual machine or application writes data to the virtual disk. The virtual provisioning model avoids the need to overprovision disks based upon expected growth. Storage devices still report their full configured size to the host that is accessing them, but in most cases the actual disk usage falls well below the apparent allocated size. The benefit is that, like virtual resources in the Hyper-V Server architecture, storage is presented as a set of virtual devices that share a pool of disk resources. Disk consumption increases based upon the needs of the virtual machines in the Hyper-V environment. As a way to address future growth, Celerra monitors the available space and can be configured to automatically extend the backing file system size as the amount of free space decreases.

File system virtual provisioning

Celerra Manager provides the interface from which to define the virtually provisioned file system used to host iSCSI LUNs. Selecting Virtual Provisioning Enabled when defining a new file system will create a file system that consumes only the initial requested capacity (10 GB in Figure 14). An iSCSI LUN created within the file system will still present its maximum size of 20 GB, and Celerra will transparently allocate space as capacity usage grows.
Figure 14 Celerra virtual provisioned file system creation interface

Hyper-V Server also provides an option to create thin provisioned virtual disks. Virtual disks created using the New Virtual Machine Wizard are by default thinly provisioned, or "dynamically expanding" as Microsoft names it. If you choose not to create dynamically expanding disks for performance reasons, you should answer Attach a virtual hard disk later when prompted in the wizard, and then add a new virtual hard disk in the VM settings. Having dynamically expanding disks means that VHD file space is consumed only when an application or user operating within the guest OS writes to a virtual disk. The disk request is translated into an allocation request within the Celerra file system. The benefit of this option is that it conserves the storage space used by the virtual disk. It does not, however, address the issue of overallocation of the file system. The Celerra virtually provisioned file system improves on this concept by allocating only the disk space needed to satisfy all of the virtual disks in the file system.

If we view a file system as a one-dimensional expression, as illustrated in the top portion of Figure 15, then we see that the non-virtual, or fully allocated, file system has a fixed length of 40 GB. In this case we have several iSCSI LUNs varying in size from 5 GB to 15 GB. If they are thin provisioned, they may not be allocating the full complement of their defined space, so we have unused space. This space may eventually be used for another LUN or for snaps, but for the moment it is not used. In the bottom portion of Figure 15, a Celerra virtual provisioned file system is being used. In this case not only is Hyper-V Server conserving disk resources, but the file system is as well. It has a fixed capacity but uses only the amount of space that is being requested by the virtual machines in Hyper-V Server. The remaining disk storage is free for other file systems. So the storage that had been allocated in the example above is free to be used or shared among thin provisioned file systems on Celerra. Celerra virtual provisioning further extends the thin provisioning benefits by allocating only a portion of the disk space and automatically expanding the backing space as the usage requirements increase. This means that Celerra file systems used for Hyper-V Server will consume disk space in a much more efficient manner than with dense provisioning.

Figure 15 Storage consumption benefits of Celerra virtual (thin) provisioned file systems
iSCSI virtual provisioning

With a Celerra thin provisioned iSCSI LUN, disk consumption occurs only when the guest OS or application issues a SCSI write request to the LUN. When creating a dynamically expanding virtual disk with Hyper-V in a virtually provisioned iSCSI LUN, very little space is consumed even after the guest has formatted the virtual disk. Thin provisioned LUNs initially reserve no space within the Celerra file system. This Celerra feature combined with the thin provisioning of Hyper-V provides a very effective way to conserve disk resources. As additional virtual machines are added and disk space fills, Celerra will use the auto-extension feature to increase the size of the file system. When using iSCSI, virtual provisioning provides other options that are tied to the thin provisioning option of Hyper-V. The iSCSI LUN can be configured as virtually provisioned by specifying the vp option. When using a virtually provisioned iSCSI LUN, monitoring file system space is crucial to ensure that enough space is available. Insufficient space on a file system that contains a virtually provisioned LUN results in data unavailability and might cause data loss.

Best practice: In addition to monitoring file system usage, set an event notification for a file system full condition and enable automatic file system extension with a conservative threshold. If enough disk space is not available for file system extension, data loss might still occur.

Using the New Virtual Hard Disk Wizard, you can create a dynamically expanding disk within the thin or virtually provisioned iSCSI LUN, which can be used to create a thin provisioned NTFS file system, as shown in Figure 16 on page 26. The NTFS file system will consume only the blocks required for new space requested by the virtual machine, although Hyper-V interprets the storage as fully allocated. The number of blocks the virtual disk consumes on the storage system is therefore limited to what is actually used.
Figure 16 Celerra iSCSI thin provisioned LUN used as NTFS
When completed and formatted, the amount of consumed space is very minimal, as illustrated in the Celerra Manager File System Properties screen in Figure 17. This interface shows the amount of space consumed within a file system. In this case, the 10 GB LUN has allocated only 146 MB of space after being formatted as an NTFS file system on a Windows guest OS.

Figure 17 File System Properties screen

Partition alignment

Starting in Windows 2008, disk partitions are aligned automatically with a 1 MB offset along the proper boundaries when using Disk Management or the diskpart utility. This provides a performance improvement in heavy I/O environments. It is also a good practice to align the disk partitions for the virtual disks within the guest OS. Use fdisk (for Linux guests) or partition alignment tools such as diskpart (for Windows guests) to ensure that the starting disk sector is aligned on 64 KB boundaries.

iSCSI LUN extension

Storage space needed by the Hyper-V Server can be increased either by adding additional iSCSI LUNs or by extending the size of an existing LUN. The LUN extension is accomplished through Celerra Manager. In Figure 18, an existing 100 GB iSCSI LUN is being increased in size to provide additional space to the server. The maximum extension size in this scenario is 1.9 TB, and the iSCSI LUN will be expanded to 110 GB with a 10 GB extension.

Figure 18 Celerra iSCSI LUN extension window
After a refresh in Disk Management on the Hyper-V Server, you should see unallocated space as a result of the iSCSI LUN extension. If the LUN does not appear in this list, you may need to rescan the iSCSI bus. Because Windows 2008 creates simple volumes by default, you can simply right-click the existing NTFS file system and choose Extend Volume to adjust the partition boundary as well as expand the file system, as shown in Figure 19.

Figure 19 Extend iSCSI disk volume in Disk Management
The Extend Volume Wizard in Figure 20 shows the existing device properties as well as the additional capacity of the iSCSI LUN that can be extended. Once the wizard finishes, you should see a 110 GB file system, as shown in Figure 21.

Figure 20 Extend Volume Wizard

Figure 21 Disk Management output after volume extension

Note: Use caution when extending an iSCSI LUN that hosts a running guest OS. The LUN extension process may affect running systems as the underlying device is modified. It may require that you shut down the guest OS prior to extending the device in the parent partition.
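As an alternative to the Extend Volume wizard described above, the rescan and volume extension can be scripted with diskpart. This is a hedged sketch assuming the volume was assigned drive letter V in an earlier step; adjust the letter for your environment.

```shell
rem Save as extend_lun.txt and run: diskpart /s extend_lun.txt
rem Rediscover the grown iSCSI LUN so the new unallocated space becomes visible.
rescan
rem Extend the NTFS volume into all of the newly available space.
select volume V
extend
```

Scripting the extension is useful when many LUNs are grown at once, but the same caution applies: the underlying device is modified while the extension runs.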
Replication

Celerra provides a framework that can be used with Hyper-V for both local and remote replication, as shown in Figure 22. The solution is IP-based and establishes an asynchronous copy of the Hyper-V storage device that can be used to restart the Hyper-V environment locally, or at a remote location when control of the Hyper-V storage device is failed over to the remote Celerra.

Figure 22 Celerra replication framework

Simple local and remote replication of Hyper-V components is supported on Celerra for iSCSI. iSCSI uses block-level replication based on the Celerra IP Replicator feature, which replicates only changed file system or block data for efficient WAN use. Periodic updates between the primary and secondary Celerra provide a crash-consistent iSCSI LUN. To use the iSCSI LUN on the secondary Celerra, the destination LUN must be set for write access. This is to satisfy the SCSI reservation commands issued to the virtual disks when the guest OS is started. The options for setting the file system or LUN to read-write are as follows:

- Fail over the LUN from primary to secondary
- Disassociate the replication relationship between the primary and secondary file systems
Conclusion

Leveraging the full capabilities of Microsoft Hyper-V requires shared storage. As virtualization is used for production applications, storage performance and total system availability become critical. The EMC Celerra platform is a high-performance, high-availability IP storage solution that is ideal for supporting Hyper-V Server deployments. This white paper addressed many of the Hyper-V and Celerra considerations, and discussed the use of Celerra iSCSI LUNs to support the creation of local file systems or pass-through disks. Celerra iSCSI LUNs can be:

- Used to create NTFS file systems in the parent partition
- Exclusively and directly accessed by a VM as raw devices via pass-through disks
- Accessed through the iSCSI software initiator in the guest OS

Aided by the flexibility and advanced functionality of the Celerra products, Hyper-V Server and Celerra network storage platforms provide a very useful set of tools to both establish and support your virtual computing environment.

References

There is a considerable amount of additional information available on the Windows Server 2008 website and discussion boards, as well as numerous other websites focused on hardware virtualization. See the Windows Server 2008 website for additional resources and user guides.
VMware vsphere 5.1 Advanced Administration
Course ID VMW200 VMware vsphere 5.1 Advanced Administration Course Description This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter.
Maximizing Your Investment in Citrix XenDesktop With Sanbolic Melio By Andrew Melmed, Director of Enterprise Solutions, Sanbolic Inc. White Paper September 2011 www.sanbolic.com Introduction This white
Optimized Storage Solution for Enterprise Scale Hyper-V Deployments
Optimized Storage Solution for Enterprise Scale Hyper-V Deployments End-to-End Storage Solution Enabled by Sanbolic Melio FS and LaScala Software and EMC SAN Solutions Proof of Concept Published: March
The Benefits of Virtualizing
T E C H N I C A L B R I E F The Benefits of Virtualizing Aciduisismodo Microsoft SQL Dolore Server Eolore in Dionseq Hitachi Storage Uatummy Environments Odolorem Vel Leveraging Microsoft Hyper-V By Heidi
VMWARE VSPHERE 5.0 WITH ESXI AND VCENTER
VMWARE VSPHERE 5.0 WITH ESXI AND VCENTER CORPORATE COLLEGE SEMINAR SERIES Date: April 15-19 Presented by: Lone Star Corporate College Format: Location: Classroom instruction 8 a.m.-5 p.m. (five-day session)
Hyper-V backup implementation guide
Hyper-V backup implementation guide A best practice guide for Hyper-V backup administrators. www.backup-assist.ca Contents 1. Planning a Hyper-V backup... 2 Hyper-V backup considerations... 2 2. Hyper-V
HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010
White Paper HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 Abstract This white paper demonstrates key functionality demonstrated in a lab environment
Setup for Failover Clustering and Microsoft Cluster Service
Setup for Failover Clustering and Microsoft Cluster Service ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the
Citrix XenServer Design: Designing XenServer Network Configurations
Citrix XenServer Design: Designing XenServer Network Configurations www.citrix.com Contents About... 5 Audience... 5 Purpose of the Guide... 6 Finding Configuration Instructions... 6 Visual Legend... 7
Microsoft Hyper-V Server 2008 R2 Getting Started Guide
Microsoft Hyper-V Server 2008 R2 Getting Started Guide Microsoft Corporation Published: July 2009 Abstract This guide helps you get started with Microsoft Hyper-V Server 2008 R2 by providing information
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Reference Architecture
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information
VMware vsphere 5.0 Boot Camp
VMware vsphere 5.0 Boot Camp This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter. Assuming no prior virtualization experience, this
vsphere Networking vsphere 5.5 ESXi 5.5 vcenter Server 5.5 EN-001074-02
vsphere 5.5 ESXi 5.5 vcenter Server 5.5 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more
Best Practices for SAP on Hyper-V
Microsoft Collaboration Brief April 2010 Best Practices for SAP on Hyper-V Authors Josef Stelzel, Sr. Developer Evangelist, Microsoft Corporation [email protected] Bernhard Maendle, Architect, connmove
VIRTUALIZATION 101. Brainstorm Conference 2013 PRESENTER INTRODUCTIONS
VIRTUALIZATION 101 Brainstorm Conference 2013 PRESENTER INTRODUCTIONS Timothy Leerhoff Senior Consultant TIES 21+ years experience IT consulting 12+ years consulting in Education experience 1 THE QUESTION
Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems
Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can
EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution. Enabled by EMC Celerra and Linux using NFS and DNFS. Reference Architecture
EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution Enabled by EMC Celerra and Linux using NFS and DNFS Reference Architecture Copyright 2009 EMC Corporation. All rights reserved. Published
Dell Solutions Overview Guide for Microsoft Hyper-V
Dell Solutions Overview Guide for Microsoft Hyper-V www.dell.com support.dell.com Notes and Cautions NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION:
VMware Data Recovery. Administrator's Guide EN-000193-00
Administrator's Guide EN-000193-00 You can find the most up-to-date technical documentation on the VMware Web site at: http://www.vmware.com/support/ The VMware Web site also provides the latest product
Replicating VNXe3100/VNXe3150/VNXe3300 CIFS/NFS Shared Folders to VNX Technical Notes P/N h8270.1 REV A01 Date June, 2011
Replicating VNXe3100/VNXe3150/VNXe3300 CIFS/NFS Shared Folders to VNX Technical Notes P/N h8270.1 REV A01 Date June, 2011 Contents Introduction... 2 Roadmap... 3 What is in this document... 3 Test Environment...
BEST PRACTICES GUIDE: VMware on Nimble Storage
BEST PRACTICES GUIDE: VMware on Nimble Storage Summary Nimble Storage iscsi arrays provide a complete application-aware data storage solution that includes primary storage, intelligent caching, instant
High Availability with Windows Server 2012 Release Candidate
High Availability with Windows Server 2012 Release Candidate Windows Server 2012 Release Candidate (RC) delivers innovative new capabilities that enable you to build dynamic storage and availability solutions
IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE
White Paper IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE Abstract This white paper focuses on recovery of an IBM Tivoli Storage Manager (TSM) server and explores
Quick Start - Virtual Server idataagent (Microsoft/Hyper-V)
Page 1 of 31 Quick Start - Virtual Server idataagent (Microsoft/Hyper-V) TABLE OF CONTENTS OVERVIEW Introduction Key Features Complete Virtual Machine Protection Granular Recovery of Virtual Machine Data
VMware vsphere Backup and Replication on EMC Celerra
VMware vsphere Backup and Replication on EMC Celerra Applied Technology Abstract NAS is a common storage platform for VMware vsphere environments. EMC Celerra can present storage to VMware ESX servers
CXS-203-1 Citrix XenServer 6.0 Administration
Page1 CXS-203-1 Citrix XenServer 6.0 Administration In the Citrix XenServer 6.0 classroom training course, students are provided with the foundation necessary to effectively install, configure, administer,
VMware Site Recovery Manager with EMC RecoverPoint
VMware Site Recovery Manager with EMC RecoverPoint Implementation Guide EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com Copyright
Drobo How-To Guide. Deploy Drobo iscsi Storage with VMware vsphere Virtualization
The Drobo family of iscsi storage arrays allows organizations to effectively leverage the capabilities of a VMware infrastructure, including vmotion, Storage vmotion, Distributed Resource Scheduling (DRS),
Storage Protocol Comparison White Paper TECHNICAL MARKETING DOCUMENTATION
Storage Protocol Comparison White Paper TECHNICAL MARKETING DOCUMENTATION v 1.0/Updated APRIl 2012 Table of Contents Introduction.... 3 Storage Protocol Comparison Table....4 Conclusion...10 About the
Configuring Celerra for Security Information Management with Network Intelligence s envision
Configuring Celerra for Security Information Management with Best Practices Planning Abstract appliance is used to monitor log information from any device on the network to determine how that device is
SAN Conceptual and Design Basics
TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer
EMC Business Continuity for VMware View Enabled by EMC SRDF/S and VMware vcenter Site Recovery Manager
EMC Business Continuity for VMware View Enabled by EMC SRDF/S and VMware vcenter Site Recovery Manager A Detailed Review Abstract This white paper demonstrates that business continuity can be enhanced
Server and Storage Virtualization with IP Storage. David Dale, NetApp
Server and Storage Virtualization with IP Storage David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this
Fibre Channel over Ethernet in the Data Center: An Introduction
Fibre Channel over Ethernet in the Data Center: An Introduction Introduction Fibre Channel over Ethernet (FCoE) is a newly proposed standard that is being developed by INCITS T11. The FCoE protocol specification
EMC BACKUP-AS-A-SERVICE
Reference Architecture EMC BACKUP-AS-A-SERVICE EMC AVAMAR, EMC DATA PROTECTION ADVISOR, AND EMC HOMEBASE Deliver backup services for cloud and traditional hosted environments Reduce storage space and increase
EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution
EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution Release 3.0 User Guide P/N 300-999-671 REV 02 Copyright 2007-2013 EMC Corporation. All rights reserved. Published in the USA.
Virtualizing your Datacenter
Virtualizing your Datacenter with Windows Server 2012 R2 & System Center 2012 R2 Part 2 Hands-On Lab Step-by-Step Guide For the VMs the following credentials: Username: Contoso\Administrator Password:
Direct Storage Access Using NetApp SnapDrive. Installation & Administration Guide
Direct Storage Access Using NetApp SnapDrive Installation & Administration Guide SnapDrive overview... 3 What SnapDrive does... 3 What SnapDrive does not do... 3 Recommendations for using SnapDrive...
Introduction to Hyper-V High- Availability with Failover Clustering
Introduction to Hyper-V High- Availability with Failover Clustering Lab Guide This lab is for anyone who wants to learn about Windows Server 2012 R2 Failover Clustering, focusing on configuration for Hyper-V
EMC MIGRATION OF AN ORACLE DATA WAREHOUSE
EMC MIGRATION OF AN ORACLE DATA WAREHOUSE EMC Symmetrix VMAX, Virtual Improve storage space utilization Simplify storage management with Virtual Provisioning Designed for enterprise customers EMC Solutions
Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V
Installation Guide for Microsoft Hyper-V Egnyte Inc. 1890 N. Shoreline Blvd. Mountain View, CA 94043, USA Phone: 877-7EGNYTE (877-734-6983) www.egnyte.com 2013 by Egnyte Inc. All rights reserved. Revised
EMC VNXe HIGH AVAILABILITY
White Paper EMC VNXe HIGH AVAILABILITY Overview Abstract This white paper discusses the high availability (HA) features in the EMC VNXe system and how you can configure a VNXe system to achieve your goals
vsphere Networking ESXi 5.0 vcenter Server 5.0 EN-000599-01
ESXi 5.0 vcenter Server 5.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
LANDesk White Paper. LANDesk Management Suite for Lenovo Secure Managed Client
LANDesk White Paper LANDesk Management Suite for Lenovo Secure Managed Client Introduction The Lenovo Secure Managed Client (SMC) leverages the speed of modern networks and the reliability of RAID-enabled
IP Storage in the Enterprise Now? Why? Daniel G. Webster Unified Storage Specialist Commercial Accounts
1 IP Storage in the Enterprise Now? Why? Daniel G. Webster Unified Storage Specialist Commercial Accounts 2 10Gigabit Ethernet in the News Every where you look 10Gb Ethernet products Changing the face
Using High Availability Technologies Lesson 12
Using High Availability Technologies Lesson 12 Skills Matrix Technology Skill Objective Domain Objective # Using Virtualization Configure Windows Server Hyper-V and virtual machines 1.3 What Is High Availability?
Setup for Failover Clustering and Microsoft Cluster Service
Setup for Failover Clustering and Microsoft Cluster Service Update 1 ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until
Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V
Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V Implementation Guide By Eduardo Freitas and Ryan Sokolowski February 2010 Summary Deploying
EMC PERFORMANCE OPTIMIZATION FOR MICROSOFT FAST SEARCH SERVER 2010 FOR SHAREPOINT
Reference Architecture EMC PERFORMANCE OPTIMIZATION FOR MICROSOFT FAST SEARCH SERVER 2010 FOR SHAREPOINT Optimize scalability and performance of FAST Search Server 2010 for SharePoint Validate virtualization
Contents. Introduction...3. Designing the Virtual Infrastructure...5. Virtual Infrastructure Implementation... 12. Summary... 14
Contents Introduction...3 Leveraging Virtual Infrastructure...4 Designing the Virtual Infrastructure...5 Design Overview...5 Design Considerations...5 Design Decisions...9 Virtual Infrastructure Implementation...
VERITAS Storage Foundation 4.3 for Windows
DATASHEET VERITAS Storage Foundation 4.3 for Windows Advanced Volume Management Technology for Windows In distributed client/server environments, users demand that databases, mission-critical applications
Best Practices Guide: Network Convergence with Emulex LP21000 CNA & VMware ESX Server
Best Practices Guide: Network Convergence with Emulex LP21000 CNA & VMware ESX Server How to deploy Converged Networking with VMware ESX Server 3.5 Using Emulex FCoE Technology Table of Contents Introduction...
Virtualization with Windows
Virtualization with Windows at CERN Juraj Sucik, Emmanuel Ormancey Internet Services Group Agenda Current status of IT-IS group virtualization service Server Self Service New virtualization features in
Virtualized Exchange 2007 Archiving with EMC EmailXtender/DiskXtender to EMC Centera
EMC Solutions for Microsoft Exchange 2007 Virtualized Exchange 2007 Archiving with EMC EmailXtender/DiskXtender to EMC Centera EMC Commercial Solutions Group Corporate Headquarters Hopkinton, MA 01748-9103
vsphere Networking vsphere 6.0 ESXi 6.0 vcenter Server 6.0 EN-001391-01
vsphere 6.0 ESXi 6.0 vcenter Server 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more
