Sizing guide for Microsoft Hyper-V on HP server and storage technologies




Table of contents

Executive summary
Hyper-V sizing: server capacity considerations
    Application processor utilization requirements
    Application memory usage requirements
    Application network throughput requirements
    HP ProLiant server component capacity sizing matrix
Hyper-V sizing: storage considerations
HP storage solutions for Hyper-V
    ProLiant server local storage
    HP StorageWorks 2000fc Modular Smart Array
    HP P4500 LeftHand Virtualization SAN
    HP StorageWorks 4400 Enterprise Virtual Array (EVA4400)
    HP StorageWorks 6100 Enterprise Virtual Array (EVA6100)
    HP StorageWorks 8100 Enterprise Virtual Array (EVA8100)
    HP StorageWorks 6400 Enterprise Virtual Array (EVA6400)
    HP StorageWorks 8400 Enterprise Virtual Array (EVA8400)
    HP StorageWorks XP24000/XP20000 Disk Arrays
Appendix A: Hyper-V processor settings
Appendix B: Hyper-V memory settings
Appendix C: Hyper-V network settings
Appendix D: HP Unified Infrastructure Management
Appendix E: HP Virtual Connect Flex-10 10Gb technology
For more information

Executive summary

Sizing a Microsoft Hyper-V environment can seem like a daunting task. While the actual process can be complicated, there are steps you can take to better understand how to accomplish it successfully. This white paper discusses these steps and presents a sizing methodology for Microsoft Hyper-V environments. A static list of recommended HP server and storage hardware for running applications on top of Microsoft's virtualization technology, Hyper-V, is also included.

This paper presents basic sizing guidance for deploying applications into a Microsoft Hyper-V virtualized architecture. It approaches Hyper-V sizing from a generic, horizontal view rather than focusing on any specific application running on top of Hyper-V; instead, it provides guidance on the key factors to consider for any application running on Hyper-V. The major Hyper-V consideration discussed in this white paper is the number of virtual machines (VMs) that can be run on a physical host server. This information should be used as sizing guidance and is not a detailed blueprint for implementing a Hyper-V architecture. Additional Hyper-V configuration recommendations, including a detailed server and storage bill of materials (BOM), can be obtained by downloading and running the HP sizing tool for Hyper-V located on the HP ActiveAnswers website.

Target audience: This paper is for a technical audience needing assistance with sizing a Hyper-V environment using HP server and storage technology. You should have a good technical understanding of the performance and capacity requirements of the applications you plan to run on top of Hyper-V, as well as a deep understanding of Hyper-V itself. For additional details about Hyper-V, refer to the HP ActiveAnswers website for Microsoft Hyper-V.

This white paper is based on testing performed in the fall of 2008.

Hyper-V sizing: server capacity considerations

When determining how many virtual machines (VMs) are needed to properly run an application or a set of applications on Hyper-V, it is first necessary to calculate how many VMs can be run on a single physical host server. Once you have identified how many VMs can run simultaneously on a single server, you can calculate how many servers you need to support your Hyper-V application infrastructure. The first step in this process is identifying the optimum performance requirements of the three major server components your applications will consume: processor utilization, memory usage, and network throughput.

Note: While other server components may affect performance, this paper focuses on the three components mentioned above because they can have the biggest impact on server performance.

Understanding your application's performance requirements before moving to a virtualized environment is the first part of sizing a Hyper-V architecture.

Application processor utilization requirements

Understanding how much of a physical host server's processor capacity a given application will need is the primary requirement for determining the number of VMs that can be run simultaneously on a physical server. Certainly, the other two server components discussed in this paper, memory usage and network throughput, are important resources when deciding the number of VMs to run on a physical server; however, these two resources can be scaled up within a physical server much more readily than processor resources. Once all of the processor sockets within a physical server are populated, the only way to scale up processor power is to install faster processors, if the server motherboard even supports such an upgrade. Therefore, the first and most important step in considering how many VMs can simultaneously run on a server is determining the server's processor utilization capacity.

This section discusses a simple, basic process for measuring application processor utilization capacity. HP has published a free downloadable Hyper-V sizing tool that is more sophisticated and requires the user to enter a good amount of application performance and capacity detail. The Hyper-V sizing tool produces rich, configurable Hyper-V configurations based on granular user application and Hyper-V usage requirements, and also produces a server and storage bill of materials (BOM) that details the recommended server and storage hardware. The HP sizing tool for Hyper-V can be found on the HP ActiveAnswers website.

Most applications running on a physical server do not use the full measure of the server's processor capacity. Measuring your application's current processor requirements, or estimating the requirements of future applications, is important in sizing your Hyper-V architecture. For example, in Hyper-V you can theoretically create up to 512 VMs and simultaneously run up to 128 VMs on a single server. If you were to try to start them all at the same time, however, one of two things will most likely happen: either most of the VMs will not start because there are not enough processor resources, or they will all start but the applications running on the VMs will perform very poorly because of inadequate processor resources. To prevent such a situation, HP recommends following the guidelines below; a code sketch of the same arithmetic appears at the end of this section.

1. Identify the optimum amount of processing power, measured as a processor utilization percentage, that your application needs in order to run properly on a single physical server. This data point can be obtained by recording the average processor utilization from each physical server the application is currently running on, then averaging those measurements into a single optimum processor utilization value. If this data is not available, make an informed estimate.
2. Keep a minimum reserve of 30% processor utilization for the host server (that is, use a maximum of 70% of processor capacity for the VMs). This allows for the occasional spike in processor utilization encountered by most applications.
3. Divide the maximum processor utilization capacity by the optimum processor utilization at which you want your application to run.
4. Multiply the result by 90% to calculate the approximate number of VMs that can run on a single physical host server. (This step accounts for a 10% overhead for going from physical to virtual machines.)

Examples of this process:

Example 1:
1. You estimate that the optimum processor utilization for your application is 2%.
2. You reserve 30% of the total processor capacity for the physical host server.
3. The approximate number of VMs that can be run simultaneously = (100-30) / 2 * 90% = ~31 VMs.

Example 2:
1. You estimate that the optimum processor utilization for your application is 5%.
2. You reserve 30% of the total processor capacity for the physical host server.
3. The approximate number of VMs that can be run simultaneously = (100-30) / 5 * 90% = ~12 VMs.

Example 3:
1. You estimate that the optimum processor utilization for your application is 10%.
2. You reserve 30% of the total processor capacity for the physical host server.
3. The approximate number of VMs that can be run simultaneously = (100-30) / 10 * 90% = ~6 VMs.

Example 4:
1. You estimate that the optimum processor utilization for your application is 20%.
2. You reserve 30% of the total processor capacity for the physical host server.
3. The approximate number of VMs that can be run simultaneously = (100-30) / 20 * 90% = ~3 VMs.

Note: If the application requires more than 35-40% processor utilization, it is probably not a candidate for virtualization and should run alone on a physical server.

Note: The processing capacity of newer generation servers will usually be greater than that of older generation servers. This difference should be taken into account when calculating processor capacity requirements.

You should adjust the above formula to match your specific application infrastructure requirements. Please see Appendix A for additional information on Hyper-V processor settings.

Note: Please see Appendix D for information on HP server and storage management tools.

As previously mentioned, the Hyper-V sizing tool located on the HP ActiveAnswers website provides comprehensive, detailed Hyper-V sizing solutions complete with a solution configuration bill of materials.
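For readers who prefer to script this estimate, the minimal Python sketch below reproduces the guideline arithmetic above. The function name and default values are illustrative only and are not part of the HP sizing tool for Hyper-V; substitute your own measured utilization from step 1.

```python
def estimate_vm_count(app_cpu_util_pct, host_reserve_pct=30.0, virt_overhead_factor=0.90):
    """Approximate number of VMs per host, following guideline steps 1-4 above."""
    usable_pct = 100.0 - host_reserve_pct         # step 2: capacity left for the VMs
    raw_count = usable_pct / app_cpu_util_pct     # step 3: divide by per-application utilization
    return int(raw_count * virt_overhead_factor)  # step 4: ~10% physical-to-virtual overhead

# Examples 1-4 from the text:
for util in (2, 5, 10, 20):
    print(f"{util}% optimum utilization -> ~{estimate_vm_count(util)} VMs")  # ~31, ~12, ~6, ~3
```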

Application memory usage requirements

Now that you have an idea of the number of VMs that can run simultaneously on a single physical server, you need to ensure that the physical host server has enough memory to properly support the applications running on each of the VMs. Most HP ProLiant servers have sufficient memory capacity to support applications running in a virtualized environment. If a single application needs more memory than a single HP ProLiant server can support, a case can be made that it probably should not be running in a virtualized environment anyway. Most applications suitable for running in a virtualized environment do not use the full memory capacity of the host server.

To calculate how much memory a physical host server should have (a code sketch appears at the end of this section):

1. Collect the current memory usage data of every legacy application running on current physical servers. (If you are planning to install new applications or cannot collect current server memory usage data, make an informed estimate.)
2. Sum the collected memory usage data points into a single memory usage requirement data point.
3. Divide this sum by the number of servers it was collected from to derive a mean average.
4. Multiply this mean average by the number of VMs to be run simultaneously on the Hyper-V host server to derive a total memory usage data point for the host server.
5. Add a minimum of 2GB (2,147,483,648 bytes) to this data point to account for the memory requirements of the host server itself.
6. Multiply this new number by 120% to account for the overhead incurred by the memory subsystem in managing a virtual environment. (20% is being added to the total.)
7. Divide this number by 1GB (1,073,741,824 bytes) to convert to GB.

Example:
1. The collected sum of all the legacy application servers = 13,223,258,863 bytes. (This example uses 6 servers.)
2. The mean average = 13,223,258,863 / 6 = 2,203,876,477 bytes.
3. Multiply this mean average by the number of VMs to be run simultaneously on the host server (this example uses 6 VMs): 2,203,876,477 * 6 = 13,223,258,863 bytes.
4. Add this result to the minimum 2GB reserved for the host server: 13,223,258,863 + 2,147,483,648 = 15,370,742,511 bytes.
5. Multiply this result by 120%: 120% * 15,370,742,511 = 18,444,891,013 bytes.
6. Convert to GB: 18,444,891,013 / 1,073,741,824 = ~17GB of RAM needed by the host server to support six VMs.

If the calculated required memory exceeds what a single physical host server can support, the number of VMs that can be run simultaneously will need to be reduced, or additional servers will be required. (Memory usage requirements then become the deciding factor, rather than processor utilization requirements.)

Please see Appendix B for details on Hyper-V memory settings.

Note: Please see Appendix D for information on HP server and storage management tools.
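The same memory arithmetic can be scripted. The Python sketch below follows the seven steps above and reproduces the worked example; the names and default values are illustrative assumptions, not HP-published code.

```python
GB = 1_073_741_824            # bytes, as used in the worked example
HOST_RESERVE = 2 * GB         # minimum 2GB kept for the host server (step 5)

def estimate_host_memory_gb(per_server_usage_bytes, vm_count,
                            host_reserve_bytes=HOST_RESERVE, overhead_factor=1.20):
    """Rough host memory requirement in GB, following steps 1-7 above."""
    mean_usage = sum(per_server_usage_bytes) / len(per_server_usage_bytes)  # steps 2-3
    vm_total = mean_usage * vm_count                                        # step 4
    with_host = vm_total + host_reserve_bytes                               # step 5
    return (with_host * overhead_factor) / GB                               # steps 6-7

# Mirrors the worked example: six legacy servers totalling 13,223,258,863 bytes, six VMs
legacy_servers = [13_223_258_863 / 6] * 6
print(f"~{estimate_host_memory_gb(legacy_servers, vm_count=6):.0f}GB")      # ~17GB
```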

Application network throughput requirements

The third major server component to consider when sizing a Hyper-V environment is the network throughput required by all of the VMs simultaneously running on a single physical host server. This is essentially a question of how many NIC ports the physical server needs to support the previously determined number of VMs. For the purposes of this sizing exercise, the discussion is limited to HP 1Gb network technology. HP also offers 10Gb network technology in addition to the 1Gb technology most customers are currently using, and it is to your advantage to move to a 10Gb network infrastructure as soon as possible.

To calculate how much network bandwidth a Hyper-V host server should have (a code sketch appears at the end of this section):

1. Collect the current network bandwidth usage data from each NIC port on the current physical servers. (If you are planning to install new applications or cannot collect current server network bandwidth usage data, make an informed estimate.)
2. Take the mean average of all of these NIC port bandwidth measurements and record it as a single network throughput usage requirement data point.
3. Multiply this data point by the number of VMs to be run simultaneously on the Hyper-V host server to derive a total network throughput data point.
4. Add an additional 20% to the total network throughput data point to account for the network throughput requirements of the Hyper-V host server itself.
5. Divide this number by 1Gb (1,073,741,824 bits) to calculate the raw number of 1Gb NIC ports needed.
6. Adjust the previous calculation by multiplying it by 130%. (It is a recommended best practice for a NIC port not to exceed 70% utilization.)
7. This final calculation is the number of 1Gb NIC ports needed by the Hyper-V host server.

Example:
1. The mean average NIC port utilization from all your legacy application servers' NIC ports = 290,072,000 bits/sec. (This is your single network throughput usage requirement data point.)
2. Multiply this data point by the number of VMs you are going to run simultaneously on the Hyper-V host server: 290,072,000 * 6 = 1,740,432,000 bits/sec. (This is the total network throughput data point.)
3. Multiply this number by 120% (adding 20% reserve throughput for the host server): 1,740,432,000 * 120% = 2,088,518,400 bits/sec.
4. Divide this number by 1Gb (1,073,741,824): 2,088,518,400 / 1,073,741,824 = ~2.0Gb/s.
5. Multiply this calculation by 130% to follow the 70% best practice rule: 2.0 * 130% = ~2.6Gb/s. (Three 1Gb NIC ports are required on the Hyper-V host server to support all 6 VMs.)

Note: Additional NIC ports should be configured for administration and management networks, beyond those needed to support Hyper-V VMs.

Please see Appendix C for details on Hyper-V network settings.
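The NIC-port arithmetic can be scripted in the same way. The Python sketch below follows the steps above and reproduces the worked example; as before, the function name and defaults are illustrative only.

```python
import math

ONE_GB_BITS = 1_073_741_824   # bits, as used in the worked example

def estimate_1gb_nic_ports(mean_port_bps, vm_count,
                           host_overhead=1.20, port_headroom=1.30):
    """Rough number of 1Gb NIC ports for the host, following steps 1-7 above."""
    total_bps = mean_port_bps * vm_count * host_overhead  # steps 3-4: VM total plus 20% for the host
    raw_ports = total_bps / ONE_GB_BITS                   # step 5: raw 1Gb port count
    return math.ceil(raw_ports * port_headroom)           # steps 6-7: keep ports below 70% utilization

# Worked example: 290,072,000 bits/sec mean per legacy NIC port, six VMs
print(estimate_1gb_nic_ports(290_072_000, vm_count=6))    # 3 ports
```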

HP Virtual Connect Flex-10 10Gb technology for HP BladeSystem helps alleviate network bandwidth and management constraints. Please see Appendix E for additional information on Flex-10 technology.

Note: Please see Appendix D for information on HP server and storage management tools.

HP ProLiant server component capacity sizing matrix

The tables below list the technical specifications of the HP ProLiant servers that currently support Hyper-V. Use these tables as guidance when deciding what type of servers, and how many, are needed to support your Hyper-V application infrastructure, in conjunction with the sizing guidance provided earlier in this white paper. Server specifications are current as of March 2009; see the HP website for up-to-date information: http://www.hp.com/go/proliant

Table 1. HP ProLiant DL rack mount servers: density-optimized servers for optimum performance and scalability

ProLiant server model | Processor specs (QC = Quad Core) | Memory capacity | Embedded NIC ports
DL160 G5  | Max speed = 3.33GHz                  | 64GB 667MHz DDR2  | 2 x 1GbE ports
DL160 G6  | Max speed = 2.9GHz (Nehalem EP)      | 144GB DDR3 RDIMM  | 2 x 1GbE ports
DL160 G5p | Max speed = 3.33GHz                  | 128GB 667MHz DDR2 | 2 x 1GbE ports
DL165 G5  | Max speed = 2.7GHz                   | 64GB 667MHz DDR2  | 2 x 1GbE ports
DL165 G5p | Max speed = 2.7GHz                   | 128GB 667MHz DDR2 | 2 x 1GbE ports
DL360 G5  | Max speed = 3.33GHz                  | 64GB 667MHz DDR2  | 2 x 1GbE ports
DL360 G6  | Max speed = 2.9GHz (Nehalem EP)      | 144GB DDR3 RDIMM  | 2 x 1GbE ports
DL365 G5  | Max speed = 2.7GHz                   | 32GB 800MHz DDR2  | 2 x 1GbE ports
DL380 G5  | Max speed = 3.33GHz                  | 64GB 667MHz DDR2  | 2 x 1GbE ports
DL380 G6  | Max speed = 2.9GHz (Nehalem EP)      | 144GB DDR3 RDIMM  | 4 x 1GbE ports
DL385 G5  | Max speed = 2.3GHz                   | 64GB 667MHz DDR2  | 2 x 1GbE ports
DL385 G5p | Max speed = 2.7GHz                   | 128GB 667MHz DDR2 | 4 x 1GbE ports
DL580 G5  | 4 x HC (6-core), Max speed = 2.67GHz | 256GB 667MHz DDR2 | 2 x 1GbE ports
DL585 G5  | 4 x QC, Max speed = 2.8GHz           | 256GB 800MHz DDR2 | 2 x 1GbE ports

Table 2. HP ProLiant ML servers: expandable tower servers for small- to medium-sized businesses

ProLiant server model | Processor specs (QC = Quad Core) | Memory capacity | Embedded NIC ports
ML350 G5 | Max speed = 3.16GHz             | 32GB 667MHz DDR2 | 1 x 1GbE port
ML350 G6 | Max speed = 2.9GHz (Nehalem EP) | 144GB DDR3 RDIMM | 2 x 1GbE ports
ML370 G5 | Max speed = 3.33GHz             | 64GB 667MHz DDR2 | 2 x 1GbE ports
ML370 G6 | Max speed = 2.9GHz (Nehalem EP) | 144GB DDR3 RDIMM | 4 x 1GbE ports

Table 3. HP BladeSystem c-Class ProLiant server blades: premier server performance, scalability, and manageability

ProLiant server model | Processor specs (QC = Quad Core) | Memory capacity | Embedded NIC ports
BL260c G5 | Max speed = 3.00GHz                  | 48GB 667MHz DDR2  | 2 x 1GbE ports
BL460c G5 | Max speed = 3.00GHz                  | 32GB 667MHz DDR2  | 2 x 1GbE ports
BL460c G6 | Max speed = 2.9GHz (Nehalem EP)      | 96GB DDR3 RDIMM   | 2 x 10GbE Flex-10 ports*
BL465 G5  | Max speed = 2.7GHz                   | 64GB 667MHz DDR2  | 2 x 1GbE ports
BL480c    | Max speed = 3.33GHz                  | 64GB 667MHz DDR2  | 4 x 1GbE ports
BL490c G6 | Max speed = 2.9GHz (Nehalem EP)      | 144GB DDR3 RDIMM  | 2 x 10GbE Flex-10 ports*
BL495 G5  | Max speed = 2.7GHz                   | 128GB 667MHz DDR2 | 2 x 10GbE Flex-10 ports*
BL680c G5 | 4 x HC (6-core), Max speed = 2.40GHz | 128GB 667MHz DDR2 | 4 x 1GbE ports
BL685c G5 | 4 x QC, Max speed = 2.7GHz           | 128GB 667MHz DDR2 | 4 x 1GbE ports

* See Appendix E for information on the benefits of using HP 10GbE Flex-10 technology.

Additional information about Microsoft support for HP ProLiant servers can be found at: Windows support for HP ProLiant Servers, Windows Server 2008.

Hyper-V sizing: storage considerations

Understanding your application storage requirements is the second part of sizing a Hyper-V environment. While the Hyper-V server capacity requirements helped determine the number of VMs that can run on a physical host server, the Hyper-V storage requirements help select the appropriate HP storage technology. The type of storage hardware has little or no impact on the number of VMs that can run simultaneously on a physical server, but the storage technology selected, and how it is configured, does affect the performance of the applications running on the VMs. (Performance is measured by the number of IOps (I/O operations per second) and data throughput.)

This guideline cannot, and therefore does not, offer a specific configuration for every possible storage scenario; the number of possible recommendations, once all potential user requirements are considered, is nearly endless. Instead, it offers suggestions based on the measured peak performance of selected HP storage technologies, starting with local server storage for low-end solutions and moving through the high-end HP StorageWorks solutions.

The list below covers performance factors that should be taken into consideration when selecting, installing, and configuring storage solutions from HP:

- Throughput of the storage device (IOps and MBs)
- Number of disk spindles used
- Disk technology (SAS, FC, SSD)
- Disk speed
- Disk size
- RAID level configured
- Number of volumes configured
- Number of volumes per host
- Number of virtual disks configured (if appropriate)
- Block size (512, 4K, 16K, 32K)
- Data access method (random or sequential)

HP storage solutions for Hyper-V

ProLiant server local storage

HP ProLiant server models offer a range of disk storage capacity, from one-disk-spindle configurations up to 16-disk-spindle configurations.

- General performance = ~700-1,100 IOps (1)
- General throughput = ~12-20 MBs (1)

The advantages of running Hyper-V VMs on local server storage are:
- Lower cost (no additional external storage to purchase)
- Less storage technology expertise required to manage the storage

The disadvantages of running Hyper-V VMs on local server storage are:
- Lower performance (compared to HP external storage solutions)
- Limited capacity
- Very limited scalability
- Poor manageability
- Limited high availability
- No reporting capabilities
- Limited backup and mirroring capabilities
- No snapshot capabilities
- No clustering support

HP StorageWorks 2000fc Modular Smart Array

A mid-market storage solution with a high-end capacity of 29.7 TB (SAS).

- High end = 99 disk spindles, RAID 50 (2), 15K SFF (SAS)
- Max performance = ~5,000 IOps
- Max throughput = ~500 MBs
- Good performance/price ratio
- Scalability
- Optional snapshot capability
- Clustering support
- Manageability
- HP StorageWorks Storage Mirroring
- Failover support

Additional information can be found at: HP StorageWorks Modular Smart Array.

(1) Generic performance numbers. The actual performance of local disk storage is determined by the number of disk spindles, the type of disk spindles, the speed of the disk spindles, and the RAID configuration of the disk arrays.
(2) RAID 50 combines the block striping and parity of RAID 5 with the straight block striping of RAID 0, yielding higher performance than RAID 5 through the addition of RAID 0, particularly during writes.

HP P4500 LeftHand Virtualization SAN

Scalable data storage systems that simplify management, reduce customer costs, and optimize virtual environments. A mid-market storage solution with an initial capacity of 10.8 TB (scales to 216 TB max).

- 24 x 15K SAS drives (scales to 480)
- Performance = ~3,400 IOps (scales to 68,000 IOps), G2 model
- Throughput = ~250 MBs
- Good performance/price ratio
- All-inclusive SAN functionality
- Simplified scalability and management using SAN/iQ software packages
- Two levels of built-in RAID
- Built on enterprise-class hardware
- Dual active/active load-balanced controllers
- 5 GB cache
- Four 1GbE network ports (scales to 40)
- Redundant power and cooling

Additional information can be found at: LeftHand Networks.

HP StorageWorks 4400 Enterprise Virtual Array (EVA4400)

A mid-market to low-end enterprise storage solution with a capacity of 96 TB.

- Disk spindle capacity = 96 disk spindles (FATA)
- Performance = ~105,000 IOps
- Throughput = ~1150 MBs
- Excellent performance
- Designed for no single point of failure
- High data capacity
- Easy installation and upgrade maintenance
- Scalability
- Local/remote data replication capabilities with HP Business Copy EVA and HP Continuous Access EVA software
- Easy array management and configuration with StorageWorks Command View software
- Cluster server support
- Failover support
- Virtual RAID arrays (Vraid 0, Vraid 1, Vraid 0+1, Vraid 5, Vraid 0+5, Vraid 6 & Cross Vraid Snaps)
- EVA iSCSI connectivity option

Additional information can be found at: HP StorageWorks EVA Arrays.

HP StorageWorks 6100 Enterprise Virtual Array (EVA6100)

A low-end enterprise storage solution with a capacity of 112 TB.

- Disk spindle capacity = 112 disk spindles (FATA)
- Performance = ~115,000 IOps
- Throughput = ~1150 MBs
- Good performance
- Highly scalable
- Excellent reliability and availability
- Easy array management and configuration with StorageWorks Command View software
- Support for HP StorageWorks Business Copy EVA (Snapshot, Vsnap (virtually capacity-free snapshots), Snapclone, MirrorClone, and Cross Vraid snapshots and Snapclone)
- Dual redundant controller operation for increased fault tolerance
- Clustered server support
- Mirrored write-back cache support
- Read-ahead and adaptive read caching support
- Virtual RAID arrays (Vraid 0, Vraid 1, Vraid 0+1, Vraid 5, Vraid 0+5 & Cross Vraid Snaps)

Additional information can be found at: HP StorageWorks EVA Arrays.

HP StorageWorks 8100 Enterprise Virtual Array (EVA8100)

An enterprise storage solution with a capacity of 240 TB.

- Disk spindle capacity = 240 disk spindles (FATA)
- Performance = ~170,000 IOps
- Throughput = ~1800 MBs
- Superior performance
- Highly scalable
- Excellent reliability and availability
- Easy array management and configuration with StorageWorks Command View software
- Support for HP StorageWorks Business Copy EVA (Snapshot, Vsnap (virtually capacity-free snapshots), Snapclone, MirrorClone, and Cross Vraid snapshots and Snapclone)
- Dual redundant controller operation for increased fault tolerance
- Clustered server support
- Mirrored write-back cache support
- Read-ahead and adaptive read caching support
- Virtual RAID arrays (Vraid 0, Vraid 1, Vraid 0+1, Vraid 5, Vraid 0+5 & Cross Vraid Snaps)

Additional information can be found at: HP StorageWorks EVA Arrays.

HP StorageWorks 6400 Enterprise Virtual Array (EVA6400)

An enterprise storage solution with a capacity of 216 TB. The performance characteristics of the HP StorageWorks EVA6400 can be found at: http://www.hp.com/go/eva

- Disk spindle capacity = 216 disk spindles (FATA)
- Business-critical solution
- Excellent reliability and availability
- Supports up to 18 drive enclosures
- Support for integrated application (block) and file storage solutions
- Support for dual-ported 4 Gb/s FC disk drives, 2 Gb/s dual-ported Fibre Attached Technology Adapted (FATA) drives, and dual-ported Solid State Disks (SSD)
- Supports up to 2048 LUNs (up to 256 per HBA) ranging in size from 1GB to 32TB per virtual disk, in 1GB increments
- Virtual disk data load leveling (non-disruptive background activity)
- Redundant FC-AL loops from each controller to dual disk ports
- HP StorageWorks Continuous Access EVA remote replication (synchronous and asynchronous)
- Remote replication between current EVA generations
- Migration support via remote replication between current and earlier EVA generations
- Support for HP StorageWorks Business Copy EVA (Snapshot, Vsnap (virtually capacity-free snapshots), Snapclone, MirrorClone, and Cross Vraid snapshots and Snapclone)
- Dual redundant controller operation for increased fault tolerance
- Battery backup for controller cache memory
- Asynchronous disk swap (hot swap)
- Clustered server support
- Mirrored write-back cache support
- Read-ahead and adaptive read caching support
- Virtual RAID arrays (Vraid 0, Vraid 1, Vraid 0+1, Vraid 5, Vraid 0+5, Vraid 6 & Cross Vraid Snaps)
- Support for local replication between Vraid types using Vsnap or Snapclone within a disk group, or using Snapclone across disk groups (and Cross Vraid Snapshot and Snapclone)
- Supports connection of up to 256 hosts
- Selective Storage Presentation and SAN-based data zoning (through switches)
- HP StorageWorks Command View EVA GUI interface for management and monitoring

HP StorageWorks 8400 Enterprise Virtual Array (EVA8400)

An enterprise storage solution with a capacity of 324 TB. The performance characteristics of the HP StorageWorks EVA8400 can be found at: http://www.hp.com/go/eva

- Disk spindle capacity = 324 disk spindles (FATA)
- Business-critical solution
- Top-line reliability and availability
- Supports up to 27 drive enclosures
- Support for integrated application (block) and file storage solutions
- Support for dual-ported 4 Gb/s FC disk drives, 2 Gb/s dual-ported Fibre Attached Technology Adapted (FATA) drives, and dual-ported Solid State Disks (SSD)
- Up to 22GB cache
- Supports up to 2048 LUNs (up to 256 per HBA) ranging in size from 1GB to 32TB per virtual disk, in 1GB increments
- Virtual disk data load leveling (non-disruptive background activity)
- Redundant FC-AL loops from each controller to dual disk ports
- HP StorageWorks Continuous Access EVA remote replication (synchronous and asynchronous)
- Remote replication between current EVA generations
- Migration support via remote replication between current and earlier EVA generations
- Support for HP StorageWorks Business Copy EVA (Snapshot, Vsnap (virtually capacity-free snapshots), Snapclone, MirrorClone, and Cross Vraid snapshots and Snapclone)
- Dual redundant controller operation for increased fault tolerance
- Battery backup for controller cache memory
- Asynchronous disk swap (hot swap)
- Clustered server support
- Mirrored write-back cache support
- Read-ahead and adaptive read caching support
- Virtual RAID arrays (Vraid 0, Vraid 1, Vraid 0+1, Vraid 5, Vraid 0+5, Vraid 6 & Cross Vraid Snaps)
- Support for local replication between Vraid types using Vsnap or Snapclone within a disk group, or using Snapclone across disk groups (and Cross Vraid Snapshot and Snapclone)
- Supports connection of up to 256 hosts
- Selective Storage Presentation and SAN-based data zoning (through switches)
- HP StorageWorks Command View EVA GUI interface for management and monitoring

HP StorageWorks XP24000/XP20000 Disk Arrays

A high-end enterprise storage solution. The performance characteristics of HP StorageWorks XP Disk Arrays can be found at: HP StorageWorks XP24000/XP20000 Disk Arrays.

- Storage capacity = 1.13 PB (raw); maximum usable capacity = 851 TB
- 1152 disk spindles (max)
- Support for Fibre Channel, SATA, and Solid State Drives (SSD)
- Ultimate business-critical storage solution
- All components fully redundant (no single point of failure)
- XP Continuous Access software
- Continental Cluster software
- A 3 Data Center solution (3DC)
- External storage
- Disaster recovery
- Enhanced security features
- Dynamic partitioning (up to 32 partitions)
- Seamless scalability: all main components can be added without shutting down critical operations
- Power and cooling efficiency

Additional information about Microsoft Hyper-V support for HP StorageWorks can be found at: Microsoft Hyper-V Support for HP StorageWorks.

Additional information about HP storage offerings can be found at: HP Storage technologies.

Important: This guide provides a general sizing overview and is not to be used as a final sizing recommendation. Many customer-specific details can affect how the general sizing information presented in this guide applies. HP recommends proof-of-concept testing in a non-production environment using the actual target application as a best practice for all application deployments. Testing the actual target application in a test/staging environment identical to, but isolated from, the production environment is the most effective way to estimate system behavior.
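As a rough illustration of how the approximate IOps figures above might feed into that proof-of-concept planning, the Python sketch below totals per-VM IOps demand and shortlists the arrays whose quoted peak figures cover it. The peak values are the rough numbers quoted in this guide and the 70% headroom factor is an assumption for illustration, not an HP recommendation; always validate against measured workloads and the configuration factors listed earlier.

```python
# Rough peak IOps figures quoted in this guide (approximate and highly configuration dependent).
ROUGH_PEAK_IOPS = {
    "ProLiant local storage": 1_100,
    "MSA2000fc": 5_000,
    "P4500 LeftHand (base configuration)": 3_400,
    "EVA4400": 105_000,
    "EVA6100": 115_000,
    "EVA8100": 170_000,
}

def shortlist_storage(per_vm_iops, vm_count, headroom=0.70):
    """Return the arrays whose quoted peak IOps cover the aggregate VM demand,
    keeping demand at or below the headroom fraction of the quoted peak."""
    required_iops = per_vm_iops * vm_count
    return [name for name, peak in ROUGH_PEAK_IOPS.items()
            if required_iops <= peak * headroom]

print(shortlist_storage(per_vm_iops=150, vm_count=12))  # 1,800 IOps of aggregate demand
```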

Appendix A: Hyper-V processor settings

It is important to understand the various Hyper-V configuration settings before attempting to design your virtualized environment. Hyper-V VM configuration settings are set from within the individual VM's Settings option. The Hyper-V processor configuration settings are located under Server Manager > Roles > Hyper-V > Hyper-V Manager > (specific VM) > Processor. See Figures 1 and 2 below.

Figure 1. Hyper-V role under Server Manager

Figure 2. Hyper-V processor configuration settings screen

How your application consumes host server processor capacity while running on a Hyper-V VM is directly related to how you configure the processor settings of that VM. The processor configuration area within Hyper-V has five settings that need to be understood and configured properly for your VM to behave the way you expect.

1. Number of logical processors setting: This setting defines how many logical processors you wish to assign to your VM. A logical processor is a representation of the processing capacity of the physical host server. For example, a two-socket server with quad-core processors has 8 logical processors for Hyper-V to use. If the application you are hosting on this VM is a single-threaded application, you probably want to leave this setting at 1. If your application can take advantage of multiple processors in a physical server environment, then you should change this setting to either 2 or 4 logical processors. As of the date this paper was written, Hyper-V could only support up to 4 logical processors on a single VM. (See Figure 2 above.)

2. Virtual machine reserve percentage setting: This setting allows you to reserve a percentage of the host server's processor capacity for this specific VM, ensuring that at least a set percentage of host processor resources is held for it. Be aware that the amount of processing capacity reserved by this setting is directly related to the number of logical processors assigned to the VM in the previous step. For example, if you configure this setting to 100% on a system that has 8 cores, you will see that Hyper-V reserves 12% of the host processor resources (with the logical processor setting = 1). This happens because the logical processor setting of 1 equates to 1 of the 8 total cores, and each of the eight cores can have a maximum of 12% of the host processor resources (100% / 8 cores * 1 logical processor = 12%). See Figure 3.

Note: With these settings, you will not be able to run more than 8 VMs simultaneously. If you try to start a ninth VM, Hyper-V will complain that there are not enough processor resources, and the VM will fail to start. (8 LPs * 12% = 96%)

Figure 3. Virtual machine reserve settings with 1 logical processor

If you change the number of logical processors to 2, then Hyper-V will reserve 24% of the host processor resources for this VM (100% / 8 cores * 2 logical processors = 24%). See Figure 4. (You can only run 4 VMs with these settings: 100% / 24% = 4 VMs.)

Figure 4. Virtual machine reserve settings with 2 logical processors

If you change the number of logical processors to 4, then Hyper-V will reserve 48% of the host processor resources for this VM (100% / 8 cores * 4 logical processors = 48%). See Figure 5. (You can only run 2 VMs with these settings: 100% / 48% = 2 VMs.)

Figure 5. Virtual machine reserve settings with 4 logical processors

If you change the virtual machine reserve to less than 100%, it has the effect of reserving less than 12% per core on an eight-core server. For example, if you change this setting to 50% and have the logical processors set to 1, the total host server processor capacity reserved for this VM will be 6%. This means that you can run 16 VMs with this setting (6% * 16 VMs = 96%). See Figure 6.

Figure 6. Virtual machine reserve at 50%

3. Virtual machine limit percentage setting: This setting prevents a VM from using more than a set amount of host processor resources. It complements the virtual machine reserve setting: rather than guaranteeing a share of processor capacity, it caps the amount the VM can consume. If you set this to 100% and have the logical processor setting = 1, then you are telling Hyper-V that, in a two-socket, eight-core server, each core is limited to up to 12% of the host server's processing capacity (100% / 8 cores = 12%). In Figure 7, the virtual machine reserve is set to zero (0), which means that the VM is not formally reserving any of the host server's processor resources and can use whatever processor resources it can get from the host server. However, the virtual machine limit percentage has been set to 100, which means that Hyper-V will limit the VM's processor resources to 12%.

Figure 7. Virtual machine reserve set to zero; virtual machine limit set to 100

Changing the number of logical processors assigned to this VM has the same effect as it did with the virtual machine reserve setting. See Figure 8.

Figure 8. Changing the number of logical processors with the virtual machine limit

Note: If you set both the virtual machine reserve and the virtual machine limit to zero (0) on all your VMs, you could theoretically run up to 128 VMs simultaneously, which is the Hyper-V limit. Hyper-V will not complain about the number of VMs you have running because the host server's processing capacity will be dynamically shared. This, of course, means that all 128 VMs will be sharing the host server's processor capacity, resulting in extremely poor performance. See Figure 9.

Figure 9. Both virtual machine reserve and virtual machine limit = zero
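The reserve and limit arithmetic described above can be summarized in a small Python sketch. This simply reproduces the percentages shown in Figures 3 through 9 for a two-socket, eight-core host; it is an illustration of the math, not a query against Hyper-V itself, and the function names are invented for this example.

```python
def host_capacity_pct(setting_pct, vm_logical_procs, host_cores):
    """Share of total host processor capacity a reserve (or limit) setting represents,
    following the per-core arithmetic shown in this appendix (Hyper-V displays whole
    percentages, so 100% with 1 logical processor on an 8-core host shows as 12%)."""
    return int(setting_pct / host_cores) * vm_logical_procs

def max_simultaneous_vms(reserve_pct, vm_logical_procs, host_cores):
    """How many identically configured VMs can start before reservations exhaust the host."""
    per_vm = host_capacity_pct(reserve_pct, vm_logical_procs, host_cores)
    return 100 // per_vm if per_vm else None  # None: nothing reserved, capacity is shared dynamically

# Figures 3-6: (reserve %, logical processors) on a two-socket, eight-core host
for reserve, lps in ((100, 1), (100, 2), (100, 4), (50, 1)):
    print(reserve, lps, "->", host_capacity_pct(reserve, lps, 8), "% per VM,",
          max_simultaneous_vms(reserve, lps, 8), "VMs")
# 12% -> 8 VMs, 24% -> 4 VMs, 48% -> 2 VMs, 6% -> 16 VMs
```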

4. Relative weight setting: As discussed above, if you set the virtual machine reserve and the virtual machine limit to zero (0) and have no other controls in place, all VMs will share the host server's processor capacity equally. The relative weight setting allows you to assign weighted values to your VMs to ensure that the VMs running critical applications get the processor capacity they need when they need it. A VM's relative weight setting only takes effect for those VMs where the virtual machine reserve and the virtual machine limit are zero (0). A relative weight setting = 100 is the highest value that can be assigned. See Figure 10.

Figure 10. Relative weight setting

5. Limit processor functionality setting: When this setting is checked, Hyper-V limits the processor (CPUID) functionality exposed to the guest so that older guest operating systems (Windows NT, for example) can run. Unless you have a problem running an older guest OS, you should leave this box unchecked. See Figure 11.

Figure 11. Limit processor functionality setting

Appendix B: Hyper-V memory settings

Configuring how much memory a VM guest operating system receives is a simple task in Hyper-V. The Hyper-V memory allocation setting is located under Server Manager > Roles > Hyper-V > Hyper-V Manager > (specific VM) > Memory. See the two Hyper-V memory screenshots below.

Figure B-1. Hyper-V memory allocation setting

Figure B-2. Setting RAM for a VM

While the type of application you are running on the guest operating system will determine your specific memory needs, HP recommends configuring each Hyper-V guest operating system with no less than 2GB of RAM. In addition, you should reserve no less than 2GB for use by the physical host server.

Appendix C: Hyper-V network settings

Configuring your Hyper-V network settings requires you to first set up all the virtual networks that your VMs will use to connect to each other or to networks outside the Hyper-V environment. The Hyper-V virtual network configuration settings are located under Server Manager > Roles > Hyper-V > Hyper-V Manager > Virtual Network Manager. See below.

Figure C-1. Hyper-V virtual network settings

There are three types of virtual networks you can configure in Hyper-V:

1. External virtual network: This type of network allows the VM to connect to the physical network on the host server. Use it to connect VMs to networks that do not reside within the host server.

Figure C-2. External virtual network

2. Internal virtual network: This type of network allows the VM to connect only to other VMs on the host server and to the host server itself. It does not allow the VM to connect to the host server's physical network.

Figure C-3. Internal virtual network

3. Private virtual network: This type of network only allows the VM to connect to other VMs on the host server.

Figure C-4. Private virtual network

After all the necessary virtual networks are configured, you then configure each VM for the specific network it will use.

Figure C-5. Configure VMs for the specific network they will use

Appendix D: HP Unified Infrastructure Management

Insight Dynamics VSE for ProLiant and BladeSystem
- Plan power and capacity
- Provision infrastructure
- Balance physical and virtual resources
- Ensure cost-effective availability

Insight Control
- Manage server deployment
- Effective health monitoring and management
- Control your infrastructure from any network connection
- Optimize your power usage

HP Insight Software bundled suites
- Insight Control
- Insight Dynamics VSE
- Insight Foundation for ProLiant

HP Insight Software individual products
- Integrated Lights-Out (iLO)
- Systems Insight Manager (SIM)
- Virtual Connect Enterprise Manager

Links:
- HP Unified Infrastructure Management
- HP Adaptive Infrastructure
- HP Insight Dynamics VSE
- HP Virtualization with Microsoft
- HP Insight Control and ProLiant Essentials management software
- HP Systems Insight Manager
- HP ProLiant Support Pack
- HP Integrated Lights-Out (iLO)
- HP Virtual Connect Enterprise Manager
- HP Storage Essentials
- HP Software
- HP ActiveAnswers (technical white papers and sizing tools)

Appendix E: HP Virtual Connect Flex-10 10Gb technology

HP Virtual Connect Flex-10 technology for HP BladeSystem provides the means to divide and tune your 10Gb Ethernet network bandwidth at the server edge. It allows you to carve the capacity of a 10Gb Ethernet connection into multiple NIC ports and to tune each connection to fit your virtual server workloads. Virtual Connect Flex-10 technology builds additional flexibility into each server blade, adding four times more NICs without additional hardware; it allows you to fine-tune your bandwidth and cut network hardware costs by up to 75%. Virtual Connect also allows you to connect and pre-assign all of the LAN MAC addresses and SAN WWNs that the server pool might ever need. Flex-10 lets you choose how many NICs are configured for each server and set the bandwidth of each NIC for optimum bandwidth utilization.

Figure E.1. Virtual Connect Flex-10

The management advantages and cost savings from using Flex-10 technology can be significant. The ability to adjust bandwidth for partitioned data flows is more cost efficient and easier to manage. Because Virtual Connect Flex-10 is hardware based and designed to complement Virtual Connect technologies, multiple FlexNICs are added without the additional processor overhead or latency associated with virtualization or soft switches. Significant infrastructure savings are also realized since additional server NICs and associated switches may not be needed. Each dual-port Flex-10 NIC supports up to 8 FlexNICs, and each Flex-10 interconnect module can support up to 64 FlexNICs.

Flex-10 links:
- Virtual Connect technology
- What's new with HP Blades?
- HP Virtual Connect Flex-10 10Gb product overview
- HP server blade online community
- IDC Technical Brief: Next-Generation Technology for I/O and Blade Servers
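To illustrate the bandwidth-carving idea in concrete terms, the short Python sketch below checks a per-port FlexNIC plan against the limits noted above: up to four FlexNICs per 10Gb port (eight per dual-port adapter), with the allocations summing to no more than the 10Gb link. The function and the example allocations are purely illustrative assumptions; this is not a Virtual Connect API or HP-published tooling.

```python
PORT_CAPACITY_GB = 10.0
MAX_FLEXNICS_PER_PORT = 4   # a dual-port Flex-10 adapter supports up to 8 FlexNICs

def validate_flexnic_plan(allocations_gb):
    """Sanity-check the FlexNIC bandwidth allocations (in Gb) planned for one 10Gb port."""
    if len(allocations_gb) > MAX_FLEXNICS_PER_PORT:
        return False, "too many FlexNICs on one port"
    if sum(allocations_gb) > PORT_CAPACITY_GB:
        return False, "allocations exceed the 10Gb port capacity"
    return True, "ok"

# Example plan: VM traffic, live migration, cluster heartbeat, management
print(validate_flexnic_plan([6.0, 2.0, 1.0, 1.0]))   # (True, 'ok')
print(validate_flexnic_plan([6.0, 4.0, 1.0, 1.0]))   # (False, 'allocations exceed the 10Gb port capacity')
```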

For more information

Additional information on HP tools, solutions, software, hardware, and services can be found at:
- HP ActiveAnswers (technical white papers and sizing tools)
- HP Unified Infrastructure Management
- HP Adaptive Infrastructure
- HP Storage Essentials
- HP Servers
- HP Software
- HP Storage technologies
- HP Product Configurator
- HP Business and IT Services offerings
- How to buy HP offerings

To help us improve our documents, please provide feedback at http://h20219.www2.hp.com/activeanswers/us/en/solutions/technical_tools_feedback.html.

Technology for better business outcomes

Copyright 2009 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft, Windows, and Windows NT are U.S. registered trademarks of Microsoft Corporation.

4AA2-5052ENW, March 2009