
Hitachi Universal Storage Platform Family Best Practices with Hyper-V

Best Practices Guide

By Rick Andersen and Lisa Pampuch

April 2009

Summary

Increasingly, businesses are turning to virtualization to achieve several important objectives, including increasing return on investment, decreasing total cost of operation, improving operational efficiencies, improving responsiveness and becoming more environmentally friendly. While virtualization offers many benefits, it also brings risks that must be mitigated. The move to virtualization requires that IT administrators adopt a new way of thinking about storage infrastructure and application deployment. Improper deployment of storage and applications can have catastrophic consequences due to the highly consolidated nature of virtualized environments.

The Hitachi Universal Storage Platform family brings enterprise-class availability, performance and ease of management to organizations of all sizes that are dealing with an increasing number of virtualized business-critical applications.

This paper is intended for use by IT administrators who are planning storage for a Hyper-V deployment. It provides guidance on how to configure both the Hyper-V environment and a Hitachi Universal Storage Platform V or Hitachi Universal Storage Platform VM storage system to achieve the best performance, scalability and availability.

Contributors

The information included in this document represents the expertise, feedback and suggestions of a number of skilled practitioners. The authors recognize and sincerely thank the contributors and reviewers of this document.

Table of Contents

Hitachi Product Family
Hitachi Universal Storage Platform Features
Hitachi Storage Navigator Software
Hitachi Performance Monitor Software
Hitachi Virtual Partition Manager Software
Hitachi Universal Volume Manager Software
Hitachi Dynamic Provisioning Software
Hyper-V Architecture
Windows Hypervisor
Parent and Child Partitions
Integration Services
Emulated and Synthetic Devices
Hyper-V Storage Options
Disk Type
Disk Interface
I/O Paths
Basic Hyper-V Host Setup
Basic Storage System Setup
Fibre Channel Storage Deployment
Storage Provisioning
Storage Virtualization and Hyper-V
Hitachi Dynamic Provisioning
Storage Partitioning
Hyper-V Protection Strategies
Backups
Storage Replication
Hyper-V Quick Migration
Hitachi Storage Cluster Solution
Hyper-V Performance Monitoring
Windows Performance Monitor
Hitachi Performance Monitor Feature
Hitachi Tuning Manager Software

Hitachi Universal Storage Platform Family Best Practices with Hyper-V

Best Practices Guide

By Rick Andersen and Lisa Pampuch

Increasingly, businesses are turning to virtualization to achieve several important objectives:

- Increase return on investment by eliminating underutilization of hardware and reducing administration overhead
- Decrease total cost of operation by reducing data center space and energy usage
- Improve operational efficiencies by increasing availability and performance of critical applications and simplifying deployment and migration of those applications

In addition, virtualization is a key tool companies use to improve responsiveness to the constantly changing business climate and to become more environmentally friendly.

While virtualization offers many benefits, it also brings risks that must be mitigated. The move to virtualization requires that IT administrators adopt a new way of thinking about storage infrastructure and application deployment. Improper deployment of storage and applications can have catastrophic consequences due to the highly consolidated nature of virtualized environments.

The Hitachi Universal Storage Platform family brings enterprise-class availability, performance and ease of management to organizations of all sizes that are dealing with an increasing number of virtualized business-critical applications. The Hitachi Universal Storage Platform with Hitachi Dynamic Provisioning software supports both internal and external virtualized storage, simplifies storage administration and improves performance to help reduce overall power and cooling costs.

The storage virtualization technology offered by the Universal Storage Platform readily complements the power and streamlined operations of Hyper-V environments for rapid deployment of virtual machines. With the Universal Storage Platform, the Hyper-V infrastructure can be tied to a virtualized pool of storage. This functionality allows virtual machines under Hyper-V to be configured with a virtual amount of storage, leading to more efficient utilization of storage resources and reduced storage costs.

The Universal Storage Platform virtualization architecture offers significant storage consolidation benefits that complement the server consolidation benefits provided by the Hyper-V environment. The Universal Storage Platform can present the storage resources of both Hitachi storage and many heterogeneous third-party storage systems as one unified storage pool. This allows storage administrators to allocate storage resources into multiple storage pools for the needs of each virtual machine under Hyper-V.

This paper is intended for use by IT administrators who are planning storage for a Hyper-V deployment. It provides guidance on how to configure both the Hyper-V environment and a Hitachi Universal Storage Platform V or Hitachi Universal Storage Platform VM storage system to achieve the best performance, scalability and availability.

Hitachi Product Family

Hitachi Data Systems is the most trusted vendor in delivering complete storage solutions that provide dynamic tiered storage, common management, data protection and archiving, enabling organizations to align their storage infrastructures with their unique business requirements.

Hitachi Universal Storage Platform Features

The Hitachi Universal Storage Platform V is the most powerful and intelligent enterprise storage system in the industry. The Universal Storage Platform V and the smaller footprint Universal Storage Platform VM are based on the Universal Star Network architecture. These storage systems deliver proven and innovative controller-based virtualization, logical partitioning and universal replication for open systems and mainframe environments.

With this architecture as its engine, the Hitachi Universal Storage Platform V redefined the storage industry. It represents the world's first implementation of a large-scale, enterprise-class virtualization layer combined with thin provisioning software. It delivers unprecedented performance, supporting over 4.0 million I/Os per second (IOPS), up to 247PB of internal and external virtualized storage capacity and 512GB of directly addressable cache.

The Hitachi Universal Storage Platform VM blends enterprise-class functionality with a smaller footprint to meet the business needs of entry-level enterprises and fast-growing midsized organizations, while supporting distributed or departmental applications in large enterprises. With the Hitachi Universal Storage Platform VM, smaller organizations can enjoy the same benefits as large enterprises in deploying and managing their storage infrastructure in a way never possible before. It supports over 1.2 million I/Os per second (IOPS), up to 96PB of internal and external virtualized storage capacity and 128GB of directly addressable cache.

An integral component of the Hitachi Services Oriented Storage Solutions architecture, the Hitachi Universal Storage Platform V and Universal Storage Platform VM provide the foundation for matching application requirements to different classes of storage. These storage systems deliver critical services such as these:

- Virtualization of storage from Hitachi and other vendors into one pool
- Thin provisioning through Hitachi Dynamic Provisioning for nondisruptive volume expansion
- Security services, business continuity services and content management services
- Load balancing to improve application performance
- Nondisruptive dynamic data migration from Hitachi and other storage systems
- Control unit virtualization to support massive consolidation of storage services on a single platform

Hitachi Storage Navigator Software

Hitachi Storage Navigator software is the integrated interface for the Universal Storage Platform family firmware and software features. Use it to take advantage of all of the Universal Storage Platform's features. Storage Navigator software provides both a Web-accessible graphical management interface and a command-line interface to allow ease of storage management.

Storage Navigator software is used to map security levels for SAN ports and virtual ports and for inter-system path mapping. It is used for RAID-level configurations, for LU creation and expansion, and for online volume migrations. It also configures and manages Hitachi replication products. It enables online microcode updates and other system maintenance functions and contains tools for SNMP integration with enterprise management systems.

Hitachi Performance Monitor Software

Hitachi Performance Monitor software provides detailed, in-depth storage performance monitoring and reporting of Hitachi storage systems, including drives, logical volumes, processors, cache, ports and other resources. It helps organizations ensure that they achieve and maintain their service level objectives for performance and availability, while maximizing the utilization of their storage assets. Performance Monitor software's in-depth troubleshooting and analysis reduce the time required to resolve storage performance problems. It is an essential tool for planning and analysis of storage resource requirements.

Hitachi Virtual Partition Manager Software

Hitachi Virtual Partition Manager software logically partitions Universal Storage Platform V and Universal Storage Platform VM cache, ports and disk capacity, including capacity on externally attached storage systems. The software enables administrators to create Hitachi Virtual Storage Machines, each an isolated group of storage resources with its own storage partition administrator. Logical partitioning guarantees data privacy and quality of service (QoS) for virtualized and non-virtualized hosts sharing the same storage platform.

Hitachi Universal Volume Manager Software

Hitachi Universal Volume Manager software provides for the virtualization of a multi-tiered storage area network comprised of heterogeneous storage systems. It enables the operation of multiple storage systems connected to a Hitachi Universal Storage Platform system as if they are all one storage system and provides common management tools and software. The shared storage pool comprised of external storage volumes can be used with storage system-based software for data migration and replication, as well as with any host-based application. Combined with Hitachi Volume Migration software, Universal Volume Manager provides an automated data lifecycle management solution across multiple tiers of storage.

Hitachi Dynamic Provisioning Software

Hitachi Dynamic Provisioning software provides the Universal Storage Platform V and Universal Storage Platform VM with thin provisioning services. Thin provisioning gives applications access to virtual storage capacity. Applications accessing virtual, thin provisioned volumes are automatically allocated physical disk space, by the storage system, as they write data. This means volumes use enough physical capacity to hold application data, and no more. All thin provisioned volumes share a common pool of physical disk capacity. Unused capacity in the pool is available to any application using thin provisioned volumes. This eliminates the waste of overallocated and underutilized storage.

Hitachi Dynamic Provisioning software also simplifies storage provisioning and automates data placement on disk for optimal performance. Administrators do not need to micro-manage application storage allocations or perform complex, manual performance tuning. In addition, physical storage resources can be added to the thin provisioning pool at any time, without application downtime. In Hyper-V environments, Hitachi Dynamic Provisioning software provides another benefit: wide striping, which greatly improves performance and eliminates the need for administrators to tune virtual machine volume placement across spindles.

Hyper-V Architecture

Microsoft Hyper-V is a hypervisor-based virtualization technology from Microsoft that is integrated into the x64 editions of Windows Server 2008. Hyper-V allows a user to run multiple operating systems on a single physical server. To use Hyper-V in Windows Server 2008, enable the Hyper-V role on the Microsoft Windows Server 2008 server. Figure 1 illustrates the Hyper-V architecture.

Figure 1. Hyper-V Architecture

The Hyper-V role provides the following functions:

- Hypervisor
- Parent and child partitions
- Integration services
- Emulated and synthetic devices

Windows Hypervisor

The Windows Hypervisor, a thin layer of software that allows multiple operating systems to run simultaneously on a single physical server, is the core component of Hyper-V. The Windows Hypervisor is responsible for the creation and management of partitions that allow for isolated execution environments. As shown in Figure 1, the Windows Hypervisor runs directly on top of the hardware platform, with the operating systems running on top of it.

Parent and Child Partitions

To run multiple guest virtual machines with isolated execution environments on a physical server, Hyper-V technology uses a logical entity called a partition. Partitions are where a guest operating system and its applications execute. Hyper-V defines two kinds of partitions: parent and child.

Parent Partition

Each Hyper-V installation consists of one parent partition, which is a virtual machine that has special or privileged access. Some documentation might also refer to parent partitions as host partitions. This document uses the term parent partition. The parent partition is the only virtual machine with direct access to hardware resources. All of the other virtual machines, which are known as child partitions, go through the parent partition for device access.

To create the parent partition, enable the Hyper-V role in Server Manager and restart the server. After the system restarts, the Windows Hypervisor is loaded first, and then the rest of the stack is converted to become the parent partition. The virtualization stack runs in the parent partition and has direct access to the hardware devices. The parent partition then creates the child partitions that house the guest operating systems.

Child Partition

Hyper-V executes a guest operating system and its associated applications in a virtual machine, or child partition. Some documentation might also refer to child partitions as guest partitions. This document uses the term child partition.

Child partitions do not have direct access to hardware resources, but instead have a virtual view of the resources, which are referred to as virtual devices. Any request to the virtual devices is redirected via the VMBus to the devices in the parent partition. The VMBus is a logical channel that enables inter-partition communication. The parent partition runs Virtualization Service Providers (VSPs), which connect to the VMBus and handle device access requests from child partitions. Child partition virtual devices internally run a Virtualization Service Client (VSC), which redirects requests to VSPs in the parent partition via the VMBus. This entire process is transparent to the guest OS.

Integration Services

Integration services are made up of two services that are installed on the guest OS to improve performance while running under Hyper-V: enlightened I/O and integration components. The version of the guest OS deployed determines which of these two services can be installed on the guest OS.

Enlightened I/O

Enlightened I/O is a Hyper-V feature that allows virtual devices in a child partition to make better use of host resources, because VSC drivers in these partitions communicate with VSPs directly over the VMBus for storage, networking and graphics subsystem access. Enlightened I/O is a specialized, virtualization-aware implementation of high-level communication protocols like SCSI that takes advantage of the VMBus directly, bypassing any device emulation layer. This makes the communication more efficient, but requires the guest OS to support enlightened I/O. At the time of this writing, Windows Server 2008, Windows Vista and SUSE Linux are the only operating systems that support enlightened I/O, allowing them to run faster as guest operating systems under Hyper-V than operating systems that need to use slower emulated hardware.
Integration Components

Integration components (ICs) are sets of drivers and services that enable guest operating systems to use synthetic devices, thus delivering more consistent child partition performance. By default, guest operating systems support only emulated devices. Emulated devices normally require more overhead in the hypervisor to perform the emulation and do not utilize the high-speed VMBus architecture. By installing integration components on a supported guest OS, you enable the guest to utilize the high-speed VMBus and synthetic SCSI devices.

Emulated and Synthetic Devices

Hardware devices that are presented inside of a child partition are called emulated devices. The emulation of this hardware is handled by the parent partition. The advantage of emulated devices is that most operating systems have built-in device drivers for them. The disadvantage is that emulated devices are not designed for virtualization and thus have lower performance than synthetic devices.

Synthetic devices are optimized for performance in a Hyper-V environment. Hyper-V presents synthetic devices to the child partition. Synthetic devices are high performance because they do not emulate hardware devices. For example, with storage, the SCSI controller exists only as a synthetic device. For a list of guest operating systems that support synthetic SCSI devices, see the Hyper-V Planning and Deployment Guide.

Hyper-V Storage Options

Hyper-V deployment planning requires consideration of three key factors: the type of disk to deploy and present to child partitions, the disk interface and the I/O path.

Disk Type

The Hyper-V parent partition can present two disk types to guest operating systems: virtual hard disks (VHDs) and pass-through disks.

Virtual Hard Disks

Virtual hard disks (VHDs) are files that are stored on the parent's hard disks. These disks can be either SAN attached or local to the Hyper-V server. The child partition sees these files as its own hard disk and uses the VHD files to perform storage functions. Three types of VHDs are available for presentation to the host:

- Fixed VHD: The size of the VHD is fixed and the LU is fully allocated at the time the VHD is defined. This normally allows for better performance than dynamic or differencing VHDs, due to less fragmentation: the VHD is always pre-allocated, so the parent partition file system does not incur the overhead required to extend the VHD file. A fixed VHD has the potential for wasted or unused disk space. Consider also that after the VHD is full, any further write operations fail, even though additional free storage might exist on the storage system.

- Dynamic VHD: The VHD is expanded by Hyper-V as needed. Dynamic VHDs occupy less storage than fixed VHDs, but at the cost of slower throughput. The maximum size that the disk can expand to is set at creation time, and writes fail when the VHD is fully expanded. Note that this dynamic feature only applies to expanding the VHD; the VHD does not automatically decrease in size when data is removed. However, dynamic VHDs can be compacted with the Hyper-V virtual hard disk manager to free any unused space.

- Differencing VHD: A VHD that involves both a parent and a child disk. The parent VHD contains the baseline disk image with the guest operating system and, most likely, an application and the data associated with that application. After the parent VHD is configured for the guest, a differencing disk is assigned as a child to that partition. As the guest OS executes, any changes made to the parent baseline VHD are stored on the child differencing disk. Differencing VHDs are good for test environments, but performance can degrade because all I/O must access the parent VHD as well as the differencing disk. This causes increased CPU and disk I/O utilization.

Because dynamic VHDs have more overhead, best practice is to use fixed VHDs in most circumstances.
For heavy application workloads such as Exchange or SQL Server, create multiple fixed VHDs and isolate application files such as databases and logs on their own VHDs.

Pass-through Disks

A Hyper-V pass-through disk is a physical disk or LU that is mapped or presented directly to the guest OS. Hyper-V pass-through disks normally provide better performance than VHDs.

After the pass-through disk is visible to, and offline within, the parent partition, it can be made available to the child partition using Hyper-V Manager. Pass-through disks have the following characteristics:

- Must be in the offline state from the Hyper-V parent's perspective, except in the case of clustered or highly available virtual machines
- Presented as raw disks to the parent partition
- Cannot be dynamically expanded
- Do not allow the capability to take snapshots or to utilize differencing disks

Disk Interface

Hyper-V supports both IDE and SCSI controllers for both VHDs and pass-through disks. The type of controller you select is the disk interface that the guest operating system sees. The disk interface is completely independent of the physical storage system. Table 1 summarizes disk interface capabilities and restrictions.

Table 1. Disk Interface Considerations

IDE
- All child partitions must boot from an IDE device. (Restriction: none.)
- A maximum of four IDE devices is available for each child partition. (Restriction: a maximum of one device per IDE controller, for a maximum of four devices per child partition.)
- Virtual DVD drives can only be created as IDE devices. (Restriction: none.)

SCSI
- Best choice for all volumes, based on I/O performance. (Restriction: none.)
- Requires that Integration Services be installed on the child partition. (Restriction: guest OS specific.)
- Can define a maximum of four SCSI controllers per child partition. (Restriction: a maximum of 64 devices per SCSI controller, for a maximum of 256 devices per child partition.)

I/O Paths

The storage I/O path is the path that a disk I/O request generated by an application within a child partition must take to a disk on the storage system. Two storage configurations are available, based on the type of disk selected for deployment.

VHD Disk Storage Path

With VHDs, all I/O goes through two complete storage stacks: once in the child partition and once in the parent partition. The guest application disk I/O request goes through the storage stack within the guest OS and onto the parent partition file system.

Pass-through Disk Storage Path

When using the pass-through disk feature, the NTFS file system on the parent partition can be bypassed during disk operations, minimizing CPU overhead and maximizing I/O performance. With pass-through disks, the I/O traverses only one file system, the one in the child partition. Pass-through disks offer higher throughput because only one file system is traversed, thus requiring less code execution. When hosting applications with high storage performance requirements, deploy pass-through disks.

Basic Hyper-V Host Setup

Servers utilized in a Hyper-V environment must meet certain hardware requirements. For more information, see the Hyper-V Planning and Deployment Guide.

Note: Best practice is to install the integration components on any child partition to be hosted under Hyper-V. The integration components install enlightened drivers to optimize the overall performance of a child partition. Enlightened drivers provide support for the synthetic I/O devices, which significantly reduces CPU overhead for I/O when compared to using emulated I/O devices. In addition, they allow the synthetic I/O devices to take advantage of the unique Hyper-V architecture not available to emulated I/O devices, further improving the performance characteristics of synthetic I/O devices. For more information, see the Hyper-V Planning and Deployment Guide.

Multipathing

Hitachi recommends the use of dual SAN fabrics, multiple HBAs and host-based multipathing software when deploying business-critical Hyper-V server applications. Two or more paths from the Hyper-V server connecting to two independent SAN fabrics are essential for ensuring the redundancy required for critical applications. The Universal Storage Platform V supports up to 224 Fibre Channel ports and the Universal Storage Platform VM supports up to 48 Fibre Channel ports, all of which support direct connect as well as multiple paths with the use of a Fibre Channel switch. Unique port virtualization technology dramatically expands connectivity from Windows Server to the Universal Storage Platform: each physical Fibre Channel port supports 1024 virtual ports.

Multipathing software such as Hitachi Dynamic Link Manager and Microsoft Windows Server 2008 native MPIO is a critical component of a highly available system. Multipathing software allows the Windows operating system to see and access multiple paths to the same LU, enabling data to travel down any available path for increased performance or for continued access to data in the case of a failed path. Hitachi Dynamic Link Manager includes the following load balancing algorithms that are especially suited for Hitachi storage systems:

- Round robin
- Extended round robin
- Least I/Os
- Extended least I/Os
- Least blocks
- Extended least blocks

The additional load balancing algorithms increase the choices that are available for improving the performance of your Hyper-V environment. Conduct testing to establish which of these algorithms is best suited for your Hyper-V environment (a conceptual sketch of two of these policies appears at the end of this section).

Hitachi Global Link Manager software consolidates, simplifies and enhances the management, configuration and reporting of multipath connections between servers and storage systems. Hitachi Global Link Manager software manages all of the Hitachi Dynamic Link Manager installations in the environment. Use it to configure multipathing on the Hyper-V hosts, monitor all the connections to the Universal Storage Platform V or Universal Storage Platform VM storage system, and report on those connections. Global Link Manager also enables administrators to configure load balancing on a per-LU level. Hitachi Global Link Manager software also integrates with the Hitachi Storage Command Suite of products and is usually installed on the same server as Hitachi Device Manager.
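As a conceptual illustration only, the following toy sketch shows how two of the policies listed above differ in choosing a path for the next I/O. This is not Hitachi Dynamic Link Manager's implementation; the path labels and outstanding-I/O counters are hypothetical.

```python
from itertools import cycle

# Two paths through independent fabrics (hypothetical labels).
paths = ["hba0:fabric-A", "hba1:fabric-B"]

# Round robin: alternate successive I/Os across all healthy paths.
rr = cycle(paths)
print([next(rr) for _ in range(4)])  # alternates hba0, hba1, hba0, hba1

# Least I/Os: send the next I/O down the path with the fewest
# outstanding I/O requests (hypothetical counters).
outstanding = {"hba0:fabric-A": 3, "hba1:fabric-B": 1}
print(min(outstanding, key=outstanding.get))  # -> hba1:fabric-B
```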

Key Considerations:

- Hitachi Dynamic Link Manager software can only be used on the Hyper-V parent partition.
- Use the most current Hitachi-supported HBA drivers.
- Select the proper HBA queue depth using the formula described below.
- Use at least two HBAs and place them on different buses within the server to distribute the workload over the server's PCI bus architecture.
- Use at least two Fibre Channel switch fabrics to provide multiple independent paths to the Universal Storage Platform V or VM and to prevent configuration errors from bringing down the entire SAN infrastructure.

Queue Depth Settings on the Hyper-V Host

Queue depth settings determine how many command data blocks can be sent to a port at one time. Setting queue depth too low can artificially restrict an application's performance, while setting it too high might cause a slight reduction in I/O. Setting queue depth correctly allows the controllers on the Hitachi storage system to optimize multiple I/Os to the physical disk. This can provide significant I/O improvement and reduce response time.

Applications that are I/O intensive can have many concurrent, outstanding I/O requests. For that reason, better performance is generally achieved with higher queue depth settings. However, this must be balanced with the available command data blocks on each front-end port of the storage system. The Universal Storage Platform V and Universal Storage Platform VM have a maximum of 2048 command data blocks available on each front-end port. This means that at any one time, up to 2048 active host channel I/O commands can be queued for service on a front-end port.

The 2048 command data blocks on each front-end port are used by all LUs presented on the port, regardless of the connecting server. When calculating queue depth settings for Hyper-V server HBAs, you must also consider queue depth requirements for other LUs presented on the same front-end ports to all other servers. Hitachi recommends setting HBA queue depth on a per-target basis rather than a per-port basis. To calculate queue depth, use the following formula:

2048 / (total number of LUs presented through the front-end port) = HBA queue depth per host

For example, suppose that four servers share a front-end port on the storage system, that 16 LUs are assigned between the four servers through the shared front-end port, and that all LUs are constantly active. The maximum queue depth per HBA port is then 128 (a worked sketch appears below):

2048 command data blocks / 16 LUs presented through the front-end port = 128 HBA queue depth setting

Basic Storage System Setup

The Universal Storage Platform has no system parameters that need to be set specifically for a Hyper-V environment. The Universal Storage Platform V supports up to 224 Fibre Channel ports and the Universal Storage Platform VM supports up to 48 Fibre Channel ports.

Fibre Channel Storage Deployment

When deploying Fibre Channel storage on a Universal Storage Platform V system in a Hyper-V environment, it is important to properly configure the Fibre Channel ports and to select the proper type of storage for the child partitions that are to be hosted under Hyper-V.
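Returning to the queue depth formula above: the following minimal sketch (Python, used here purely for illustration; the function name is an assumption) applies the 2048 command data block budget to the worked example of 16 LUs sharing one front-end port.

```python
# Per-host HBA queue depth for a shared front-end port, using the 2048
# command data block budget of a Universal Storage Platform V/VM port.

COMMAND_DATA_BLOCKS_PER_PORT = 2048  # front-end port budget quoted above

def hba_queue_depth(total_lus_on_port: int) -> int:
    """2048 / total LUs presented through the port = queue depth per host."""
    if total_lus_on_port <= 0:
        raise ValueError("the port must present at least one LU")
    return COMMAND_DATA_BLOCKS_PER_PORT // total_lus_on_port

# Worked example from the text: four servers share a port presenting
# 16 constantly active LUs in total.
print(hba_queue_depth(16))  # -> 128
```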

Fibre Channel Front-end Ports

Provisioning storage on two Fibre Channel front-end ports is sufficient for redundancy on the Universal Storage Platform. This results in two paths to each LU from the Hyper-V host's point of view. For higher availability, ensure that the target ports are configured on two separate fabrics so that multiple paths are always available to the Hyper-V server.

Hyper-V servers that access LUs on Universal Storage Platform storage systems must be properly zoned so that the appropriate Hyper-V parent and child partitions can access the storage. With the Universal Storage Platform, zoning is accomplished at the storage level by using host storage domains (HSDs). Zoning defines which LUs a particular Hyper-V server can access. Hitachi Data Systems recommends creating an HSD group for each Hyper-V server and using the name of the Hyper-V server in the HSD for documentation purposes. Figure 2 illustrates using host storage domains for zoning of the Hyper-V servers and assignment of the LUs.

Figure 2. Hitachi Storage Navigator LU Path and Security Settings

Host Modes

To create host groups for Hyper-V parent partitions, choose 0C[Windows] or 2C[Windows extension] from the Host Mode drop-down menu. Host Mode 2C[Windows extension] allows the storage administrator to expand a LU using Logical Unit Size Expansion (LUSE) while the LU is mapped to the host.

Figure 3. Host Mode

Selecting Child Partition Storage

It is important to correctly select the type of storage deployed for the guest OS that is to be virtualized under Hyper-V. Consider also whether VHD or pass-through disks are appropriate. The following questions can help you make this determination (a simple decision sketch follows the list):

- Is the child partition's I/O workload heavy, medium or light? If the child partition has a light workload, you might be able to place all the storage requirements on one VHD LU. If the child partition is hosting an application such as SQL Server or Exchange, allocate files that are accessed heavily, such as log and database files, to individual VHD LUs. Attach each individual LU to its own synthetic controller.

- What is the maximum size LU required to support the child partition? If the maximum LU is greater than 2040GB, you must either split the data or utilize pass-through disks. This is due to the 2040GB size limitation for a VHD LU.
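The questions above reduce to a simple rule of thumb. The following sketch is illustrative only; the thresholds come from the 2040GB VHD limit and the workload guidance in this paper, and the function and label names are assumptions:

```python
# Illustrative disk-type decision for a child partition LU, based on this
# paper's guidance: LUs over 2040GB require pass-through disks, and
# workloads with high storage performance requirements favor them too.

VHD_MAX_GB = 2040  # VHD LU size limit cited above

def child_partition_disk_type(max_lu_gb: float, workload: str) -> str:
    """Return 'pass-through' or 'fixed VHD' for a child partition LU."""
    if max_lu_gb > VHD_MAX_GB:
        return "pass-through"  # a VHD cannot hold an LU this large
    if workload == "heavy":
        return "pass-through"  # only one file system in the I/O path
    return "fixed VHD"         # good performance, supports snapshots

print(child_partition_disk_type(500, "light"))    # -> fixed VHD
print(child_partition_disk_type(4096, "medium"))  # -> pass-through
```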

Dedicated VHD Deployment

Figure 4 shows dedicated VHDs for the application files and how they map within the Universal Storage Platform V or Universal Storage Platform VM storage system, within the Hyper-V parent partition and within the child partition. Note that this scenario uses the synthetic SCSI controller interface for the application LUs.

Figure 4. Dedicated VHD Connection

Key Considerations:

- For better performance and easier management of child partitions, assign a single set of LUs.
- To enable the use of Hyper-V quick migration for a single child partition, deploy dedicated VHDs.
- To enable multiple child partitions to be moved together using quick migration, deploy shared VHDs.
- To achieve good performance for heavy I/O applications, deploy dedicated VHDs.

Shared VHD Deployment

This scenario utilizes a shared VHD, with that single VHD hosting multiple child partitions. Figure 5 shows a scenario where Exchange and SQL Server child partitions share a VHD on a Universal Storage Platform, and SharePoint and BizTalk child partitions also share a VHD on the Universal Storage Platform.

Figure 5. Shared VHD Connection

Key Considerations:

- It is important to understand the workloads of the individual child partitions when hosting them on a single shared VHD. It is critical to ensure that the RAID groups on the Universal Storage Platform system that are used to host the shared VHD LUs can support the aggregate workload of the child partitions. For more information, see the Number of Child Partitions per VHD LU, per RAID Group section of this paper.
- If using quick migration to move a child partition, understand that all child partitions hosted within a shared VHD move together. Whether the outage is due to automated recovery from a problem with the child partition or because of a planned outage, all the child partitions in the group are moved.

Pass-through Deployment

This scenario uses pass-through disks instead of VHDs. A dedicated VHD LU is still required to host the virtual machine configuration files. Do not share this VHD LU with other child partitions on the Hyper-V host. Figure 6 shows a scenario in which the virtual machine configuration files, guest OS binaries, the page file and SQL Server application libraries are placed on the VHD LU, and the application files are deployed on pass-through disks.

Figure 6. Pass-through Connection

Key Considerations:

- For higher throughput, deploy pass-through disks. Pass-through disks normally provide higher throughput because only the guest partition file system is involved.
- To achieve an easier migration path, deploy pass-through disks. Pass-through disks can provide an easier migration path because the LUs used by a physical machine on a SAN can be moved easily to a Hyper-V environment, giving a new child partition access to the disk. This scenario is especially appropriate for partially virtualized environments.
- To support multi-terabyte LUs, deploy pass-through disks. Pass-through disks are not limited in size, so a multi-terabyte LU is supported.
- Pass-through disks appear as raw disks and are offline to the parent.
- If snapshots are required, remember that pass-through disks do not support Hyper-V snapshot copies.

Storage Provisioning

Capacity and performance cannot be considered independently. Performance always depends on and affects capacity, and vice versa. That is why it is difficult or impossible in real-life scenarios to provide best practices for the best LU size, the number of child partitions that can run on a single VHD and so on without knowing capacity and performance requirements. However, several factors must be considered when planning storage provisioning for a Hyper-V environment.

Size of LU

When determining the right LU size, consider the factors listed in Table 2. These factors are especially important from a storage system perspective. In addition, each child partition's capacity and performance requirements (basic virtual disk requirements, virtual machine page space, spare capacity for virtual machine snapshots, and so on) must also be considered. A worked sizing sketch appears at the end of this section.

Table 2. LU Sizing Factors

- Guest base OS size: The guest OS resides on the boot device of the child partition.
- Guest page file size: Recommended size is 1.5 times the amount of RAM allocated to the child partition.
- Virtual machine files: Define the size as the size of the child partition memory plus 200MB.
- Application data required by the guest machine: Storage required by the application files, such as databases and logs.
- Data replication: Using more but smaller LUs offers better flexibility and granularity when using replication within a storage system (Hitachi ShadowImage Replication software, Hitachi Copy-on-Write Snapshot software) or across storage systems (Hitachi Universal Replicator, TrueCopy Synchronous or Extended Distance software).

Number of Child Partitions per VHD LU, per RAID Group

If you decide to run multiple child partitions on a single VHD LU, understand that the number of child partitions that can run simultaneously on a VHD LU depends on the aggregated capacity and performance requirements of the child partitions. Because all LUs on a particular RAID group share the performance and capacity offered by the RAID group, Hitachi Data Systems recommends dedicating RAID groups to a Hyper-V host or a group of Hyper-V hosts (for example, a Hyper-V failover cluster) and not assigning LUs from the same RAID group to other non-Hyper-V hosts. This prevents the Hyper-V I/O from affecting or being affected by other applications and LUs on the same RAID group and makes management simpler.
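To make the factors in Table 2 and the aggregation argument above concrete, here is a minimal sizing sketch. Python is used purely for illustration, and the sample figures are placeholders, not recommendations:

```python
# Illustrative LU sizing from Table 2: guest OS size, a page file of
# 1.5x child partition RAM, virtual machine files of RAM + 200MB, and
# application data. All sample figures below are hypothetical.

def child_partition_lu_gb(os_gb: float, ram_gb: float, app_data_gb: float) -> float:
    page_file_gb = 1.5 * ram_gb    # recommended page file size
    vm_files_gb = ram_gb + 0.2     # child partition memory plus 200MB
    return os_gb + page_file_gb + vm_files_gb + app_data_gb

# One child partition: 20GB OS image, 4GB RAM, 100GB of application data.
print(round(child_partition_lu_gb(20, 4, 100), 1))  # -> 130.2

# Aggregate demand for a RAID group dedicated to three child partitions.
partitions = [(20, 4, 100), (20, 8, 250), (30, 4, 50)]
print(round(sum(child_partition_lu_gb(*p) for p in partitions), 1))  # -> 510.6
```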

Follow these best practices:

- Create and dedicate RAID groups to your Hyper-V hosts.
- Always present LUs with the same H-LUN if they are shared with multiple hosts.
- Create VHDs on the LUs as needed.
- Monitor and measure the capacity and performance usage of the RAID group with Hitachi Tuning Manager software and Hitachi Performance Monitor software.

Monitoring and measuring the capacity and performance usage of the RAID group results in one of the following cases:

- If all of the capacity offered by the RAID group is used but performance of the RAID group is still good, add RAID groups and therefore more capacity. In this case, consider migrating the LUs to a different RAID group with less performance using Hitachi Volume Migration or Hitachi Tiered Storage Manager.
- If all of the performance offered by the RAID group is used but capacity is still available, do not use the remaining capacity by creating more LUs, because this leads to even more competition on the RAID group, and the overall performance of the child partitions residing on this RAID group is affected. In this case, leave the capacity unused and add more RAID groups and therefore more performance resources. Also consider migrating the LUs to a different RAID group with better performance.
- Consider using Hitachi Dynamic Provisioning to dynamically add RAID groups to the storage pool in which the Hyper-V LUs reside. This can add performance and capacity dynamically to the Hyper-V environment. For further information, see the Hitachi Dynamic Provisioning section in this document.

In a real environment, it is not possible to use 100 percent of both the capacity and the performance of a RAID group, but the usage ratio can be optimized by actively monitoring the systems and moving data to the appropriate storage tier if needed using Hitachi Volume Migration or Hitachi Tiered Storage Manager. An automated solution using these applications from the Hitachi Storage Command Suite helps to reduce administrative overhead and optimize storage utilization.

Storage Virtualization and Hyper-V

As organizations implement server virtualization with Hyper-V, the need for storage virtualization becomes more evident. The Hitachi Universal Storage Platform offers built-in storage virtualization that allows other storage systems (from Hitachi and from third parties) to be attached, or virtualized, behind the Hitachi Universal Storage Platform. From a Hyper-V parent's point of view, virtualized storage is accessed through the Hitachi Universal Storage Platform and appears like internal, native storage capacity. The virtualized storage systems immediately inherit every feature available on the Hitachi Universal Storage Platform (data replication, Hitachi Dynamic Provisioning, and so on), enabling management and replication using Hitachi software.

The virtualized storage that is attached behind the Hitachi Universal Storage Platform allows for the implementation of a tiered storage configuration in the Hyper-V environment. This gives Hyper-V parent and child partitions access to a wide range of storage with different price, performance and functionality profiles. Storage allocated to each guest machine under Hyper-V can be migrated between different tiers of storage according to the needs of the application. Data can be replicated locally or remotely to accommodate business continuity needs.
For example, utilizing Hitachi Tiered Storage Manager, virtual machines under Hyper-V can be moved or replicated between different tiers of storage with no disruption to the applications running in the Hyper-V child partitions.

To virtualize storage systems behind a Hitachi Universal Storage Platform for a Hyper-V infrastructure environment, use the following high-level checklist. Although the process itself is conceptually simple and usually only requires logical reconfiguration tasks, always check and plan this process with your Hitachi Data Systems representative.

1. Quiesce I/O to, and unmap the LUs from, the Hyper-V hosts on the storage system to be virtualized.
2. Reconfigure the SAN zoning as needed.
3. Map the LUs to the Hitachi Universal Storage Platform using the management tools available on the storage system to be virtualized.
4. Map the (virtualized) LUs to the Hyper-V parent partition.

Hitachi Dynamic Provisioning

Storage can be provisioned to a Hyper-V infrastructure environment using Hitachi Dynamic Provisioning. Virtual DP volumes have a defined size, are viewed by the Hyper-V hosts as any other normal volume, and initially allocate no physical storage capacity from the HDP pool volumes. Data is written and striped across the HDP pool volumes in a fixed size that is optimized to achieve both performance and storage area savings. Hitachi Dynamic Provisioning provides support for both thin provisioning and wide striping in a Hyper-V environment. Figure 7 provides an overview of Hitachi Dynamic Provisioning on the Universal Storage Platform V and Universal Storage Platform VM.

Figure 7. Hitachi Dynamic Provisioning Concept Overview

Thin Provisioning

With the thin provisioning capabilities provided by the Universal Storage Platform V or Universal Storage Platform VM, you can provision virtual capacity to a virtual machine application once and then purchase physical capacity only as virtual machine applications truly require it for written data. Capacity for all virtual applications is drawn automatically, and as needed, from the Universal Storage Platform's shared storage pool, eliminating allocated-but-unused capacity and simplifying storage administration. Table 3 shows the effect of common configuration steps on capacity allocation.

Table 3. Hitachi Dynamic Provisioning Capacity Allocation

- Map an HDP volume to the Hyper-V parent: This step does not allocate any physical capacity on the HDP pool volumes.
- Create a VHD file on the HDP volume: This step allocates some physical capacity on the HDP pool volumes to write VHD metadata.
- Install an operating system in the virtual machine on the VHD volume: This step allocates capacity on the HDP pool volumes depending on the file system being used in the virtual machine and the amount of data written to the virtual machine's VHD file.
- Deploy a virtual machine from a template: This step allocates the whole capacity of the virtual machine's disk file on the HDP pool volumes.
- Delete data within the virtual machine: The capacity remains allocated on the HDP pool volumes but might be reused by the virtual machine.
- Delete the virtual machine and its VHD disk file: The capacity remains allocated on the HDP pool volumes.

The use of thin provisioning within a Hyper-V environment can yield greater utilization of storage assets and simplify storage administration tasks when allocating and managing storage for Hyper-V guest machines. Figure 8 illustrates the significant savings that Hitachi thin provisioning can yield in a Hyper-V configuration versus the traditional model of storage provisioning in a shared storage environment.

Figure 8. Thin Provisioning Savings in a Hyper-V Environment
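A rough way to quantify the kind of savings Figure 8 illustrates: traditional provisioning reserves each LU's full size up front, while thin provisioning consumes pool capacity only for written data, in 42MB pages (the HDP page size discussed later in this section). The sketch below is a simplified model with hypothetical numbers:

```python
import math

# Simplified model of Figure 8. Traditional provisioning reserves the
# full LU size; Hitachi Dynamic Provisioning consumes 42MB pool pages
# only for data actually written. All figures are hypothetical.

PAGE_MB = 42  # HDP pool allocation page size

def hdp_allocated_gb(written_gb: float) -> float:
    """Physical capacity consumed, rounded up to whole 42MB pages."""
    pages = math.ceil(written_gb * 1024 / PAGE_MB)
    return pages * PAGE_MB / 1024

# (provisioned_gb, written_gb) for three virtual machine volumes.
vm_volumes = [(500, 120), (500, 80), (1000, 250)]

traditional_gb = sum(prov for prov, _ in vm_volumes)  # 2000GB reserved
thin_gb = sum(hdp_allocated_gb(written) for _, written in vm_volumes)
print(traditional_gb, round(thin_gb, 1))  # roughly 2000 vs 450
```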

VHD Types and Thin Provisioning

Fixed VHDs normally provide better performance than expanding, or dynamic, VHDs. With thin provisioning on the Universal Storage Platform, deploying fixed VHDs for Hyper-V guest machines provides the performance benefits of fixed VHDs with the added storage savings of dynamic VHDs, because space is only allocated to the guest machine VHDs as required. For more information about VHD types and their attributes, see the Microsoft TechNet article Frequently Asked Questions: Virtual Hard Disks in Windows 7.

Performance Factors with Hitachi Dynamic Provisioning

Another benefit of Hitachi Dynamic Provisioning in a Hyper-V environment is wide striping. Wide striping comes from the allocation of chunks across all the drives in a storage pool, known as an HDP pool, which might contain hundreds of drives or more. Spreading an I/O across that many more physical drives greatly magnifies performance by parallelizing the I/O across all the spindles in the pool, and can also eliminate the administrative requirement to tune the placement of virtual machine volumes across spindles.

Performance design, and the consequent disk design recommendations, for Hitachi Dynamic Provisioning are similar to those for static provisioning. With Hitachi Dynamic Provisioning, however, the requirement is on the HDP pool rather than the array group. In addition, the pool performance requirement (number of IOPS) is the aggregate of all applications using the same HDP pool. Pool design and use depend on the performance requirements of the applications running on the virtual machines under Hyper-V.

The volume performance feature is an automatic result of the manner in which the individual HDP pools are created. A pool is created using up to 1024 LDEVs (pool volumes) that provide the physical space, and the pool's 42MB allocation pages are assigned on demand to any of the Hitachi Dynamic Provisioning volumes (DP-VOLs) connected to that pool. Each individual 42MB pool page is consecutively laid down on a whole number of RAID stripes from one pool volume. Other pages assigned over time to that DP-VOL randomly originate from the next free page of other pool volumes in that pool.

As an example, assume that an HDP pool is assigned 24 LDEVs from 12 RAID-1+ (2D+2D) array groups. All 48 disks contribute their IOPS and throughput power to all of the DP-VOLs assigned to that pool. If more random read IOPS horsepower is desired for that pool, it can be created with 64 LDEVs from 32 RAID-5 (3D+1P) array groups, thus providing 128 disks of IOPS power to that pool. You can also increase the capacity of an HDP pool by adding array groups to it, thus re-leveling the wide striping across the pool and contributing more IOPS and throughput power to all of the DP-VOLs assigned to that HDP pool. This is a powerful feature that can be used in combination with Hyper-V for rapidly deploying virtual machines and their associated storage capacity and performance requirements.

Up to 1024 such LDEVs can be assigned to a single pool. This can represent a considerable amount of I/O power under (possibly) just a few DP-VOLs. This type of aggregation of disks was previously only possible through the use of somewhat complex host-based volume managers (such as Veritas VxVM) on the servers. One alternative available on both the Universal Storage Platform V and Universal Storage Platform VM is to use the LUSE feature, which provides a simple concatenation of LDEVs.
Unlike Hitachi Dynamic Provisioning, however, the LUSE feature is mostly geared toward solving capacity problems only, rather than both capacity and performance problems.

Key Considerations:

- Hitachi Dynamic Provisioning volumes (DP-VOLs) are assigned to Hyper-V servers using the same method as for static provisioning.
- Always utilize the quick format option when formatting Hitachi Dynamic Provisioning volumes, because it is a thin-friendly operation. Slow formatting is equally efficient on Windows Server 2003, but tests on Windows Server 2008 show that slow format writes more data. Slow formatting offers no benefit on RAID systems because all devices are preformatted.

- Do not defragment file systems on Hitachi Dynamic Provisioning volumes, including those containing database or transaction log files. In a Hitachi Dynamic Provisioning environment, defragmentation of NTFS is rarely space efficient with any data.
- Consider using separate HDP pools for virtual machines that contain databases and logs that operate at high transaction levels at the same time. This still provides capacity savings while ensuring the highest level of performance required by your Hyper-V environment.

Storage Partitioning

Ensure application quality of service by partitioning storage resources with Hitachi Virtual Storage Machine technology. Hitachi Virtual Partition Manager software enables the logical partitioning of ports, cache and disk (parity groups) into Virtual Storage Machines on the Hitachi Universal Storage Platform. Partitions allocate separate, dedicated, secure storage resources to specific users (departments, servers, applications and so on). Administrators can control resources and execute business continuity software within their assigned partitions, secured from affecting any other partitions.

Partitions can be dynamically modified to meet quality of service requirements. Overall system priorities, disk space and tiers of storage can be optimized for application QoS based on changing business priorities. For example, with Virtual Storage Machines, you can align the storage configuration with the Hyper-V server infrastructure for test and production deployment lifecycles so that production workloads are not affected by other, non-production, array-based activity. Figure 9 demonstrates the unique connectivity, partitioning and security features available with Hitachi Virtual Storage Machines.