Performance characterization report for Microsoft Hyper-V R2 on HP StorageWorks P4500 SAN storage




Technical white paper

Table of contents

Executive summary
Introduction
Test methodology
Testing tool
Test bed deployment
Iometer configuration
Test topology
  Architectural diagrams and component specifications
  HP StorageWorks P4500 SAN
  HP ProLiant BL490c G6 server
  Windows Server 2008 Hyper-V R2 virtual machines (VMs)
  AD/DC server
  Iometer controller
  Management server
  (2) Hyper-V R2 host servers
  Storage layout
Test results
  Single volume on 2 or 4 P4500 cluster nodes with RAID5
  Single volume on 2 or 4 P4500 cluster nodes with RAID10
  Single volume on 2 or 4 P4500 cluster nodes - disk I/O comparison
  Single CSV on 2 or 4 P4500 cluster nodes with RAID5
  Single CSV on 2 or 4 P4500 cluster nodes with RAID10
  Single CSV on 2 or 4 P4500 cluster nodes - disk I/O comparison
  Single volume vs. single CSV on 2 or 4 P4500 cluster nodes
  Scale-out test with multiple CSVs on 4 P4500 cluster nodes with RAID5
  Single CSV vs. multiple CSVs on 4 P4500 cluster nodes with RAID5
  Scale-out test with a single volume or a single CSV on 2 or 4 P4500 cluster nodes
Test analysis summary
Recommendations
For more information

Executive summary

The surge of interest in virtualization technologies over the past few years has created an increased need for knowledge about virtualized environments. That interest is further heightened by the Microsoft Windows Server 2008 R2 Hyper-V server role. This paper focuses on the performance characterization of the disk subsystem of the HP StorageWorks P4500 21.6TB SAS Multi-site SAN Solution (HP P4500 SAN), addressing questions a customer may have about deploying Microsoft Hyper-V R2 virtual machines (VMs) on HP ProLiant BL490c G6 Virtualization Blades (ProLiant BL490c G6) with an HP P4500 iSCSI SAN as back-end storage.

Target audience: The intended audience includes, but is not limited to, individuals or companies who are interested in using Hyper-V R2 virtualization technology to consolidate and migrate servers to ProLiant BL490c G6 servers with HP P4500 SAN storage solutions.

This white paper describes testing performed in April 2010.

Introduction

Server virtualization is paving the way for IT organizations to provide significant benefit in helping companies control cost, increase productivity, and improve efficiency by increasing server utilization, reducing the number of servers, reducing energy consumption and, eventually, reducing the overall cost of a datacenter environment. However, server virtualization and consolidation pose some unique storage challenges, such as complex data management and inefficient storage utilization. Without appropriate storage capacity planning, the benefits of server virtualization and consolidation will be minimal at best.

HP StorageWorks P4500 21.6TB SAS Multi-site SAN Solutions deliver enterprise functionality that enhances virtual environments, simplifies management, and reduces costs. Easy to deploy, scale and maintain, HP P4500 SANs ensure that crucial business data remains available. Their innovative approach to storage provides unique double fault protection across the entire SAN, reducing vulnerability without driving up costs the way traditional SANs did. The HP P4500 SAN Solutions are optimized for database and e-mail applications, as well as virtualized servers. For high availability and disaster recovery, the HP P4500 SAN eliminates single points of failure across the SAN with an innovative approach to data availability. Built on a storage clustering architecture, the HP P4500 SAN allows you to scale capacity and performance linearly without incurring downtime, creating performance bottlenecks, or forcing expensive upgrades.

The following sections of this paper describe the testing methodology, test execution, test tools used, hardware/software configuration details, test results and recommendations. They provide detailed information on how the HP hardware was configured for the testing, how Microsoft Hyper-V R2 and the virtual machines were configured, and how the testing tool was configured to test the performance capabilities of Hyper-V R2 VMs with the HP P4500 SAN in various test scenarios.

Important

This paper is not a report of benchmark testing results. It is intended to present the performance characterization testing results for the ProLiant BL490c G6 virtualization blade servers and Hyper-V R2 virtual machines on the HP P4500 SAN solution. The goal of the paper is to document the disk I/O performance of Hyper-V R2 on various HP P4500 SAN configurations with different simulated workloads. This document is not intended to report how many virtual machines can fit or run on a single volume. Nor is this a how-to guide for setting up Windows Server 2008 R2, Hyper-V R2 and the HP P4500 SAN.

As with any laboratory testing, the performance metrics quoted in this paper are idealized. In a production environment, these metrics may be impacted by a variety of factors. HP recommends proof-of-concept testing in a non-production environment using the actual target application as a matter of best practice for all application deployments. Testing the actual target application in a test/staging environment identical to, but isolated from, the production environment is the most effective way to estimate system behavior.

Test methodology

The performance characterization test for the HP P4500 SAN and Hyper-V R2 was based on four principal variables: the number of storage nodes in an HP P4500 SAN cluster, the RAID configuration of each storage node, the HP P4500 SAN data replication level via Network RAID, and the simulated workloads.

- The number of storage nodes in an HP P4500 cluster was either 2 or 4
- The RAID configuration of each storage node was either RAID5 or RAID10
- The HP P4500 cluster data protection level was Network RAID10 (2-way mirror). Network RAID10 was used for all testing to maintain VM high availability
- The simulated workloads were as follows (summarized in the sketch at the end of this section):
  1. Simulating a heavily used file share (64K, 50% Random, 50% R/W)
  2. Simulating a Microsoft Exchange database (8K, 100% Random, 60%/40% R/W)
  3. Simulating a data warehouse database (256K, 100% Sequential, 80%/20% R/W)
  4. Simulating Microsoft Exchange Log files (8K, 100% Sequential, 100% W)

This project detailed several methods of configuring Hyper-V and the HP P4500 SAN to work effectively together. The questions addressed in this project included:

- What is the performance impact of having multiple Hyper-V R2 virtual machines (VMs) within a single 2-storage-node cluster volume vs. within a single 4-storage-node cluster volume?
- What is the performance impact of having VMs on a single HP P4500 cluster volume?
  - A single RAID5 volume
  - A single RAID10 volume
- What is the performance characterization of multiple VMs within a single Windows Server 2008 R2 Cluster Shared Volume (CSV)?
- What is the performance characterization of multiple CSVs, storing one VM per CSV?
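The four access specifications above can be restated in a few lines of code. The following Python sketch is purely illustrative (the dictionary layout and field names are not part of Iometer, which stores its settings in its own configuration files); it simply summarizes the block size, randomness and read/write mix of each simulated workload:

# Illustrative restatement of the four simulated workloads (Iometer access specifications).
workloads = {
    "Heavily used file share":  {"block_kb": 64,  "random_pct": 50,  "read_pct": 50},
    "Exchange database":        {"block_kb": 8,   "random_pct": 100, "read_pct": 60},
    "Data warehouse database":  {"block_kb": 256, "random_pct": 0,   "read_pct": 80},
    "Exchange log files":       {"block_kb": 8,   "random_pct": 0,   "read_pct": 0},
}

for name, w in workloads.items():
    print(f"{name}: {w['block_kb']}KB blocks, {w['random_pct']}% random, "
          f"{w['read_pct']}% read / {100 - w['read_pct']}% write")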

Testing tool

The simulated workload was generated using the Iometer test tool, which is available at www.iometer.org. This document covers the technical configuration and testing methodologies used to conduct the Hyper-V R2 and HP P4500 SAN testing in more detail.

Iometer is used to configure the workload, set operating parameters, and start and stop tests. Iometer controls the Dynamo clients, collects the performance data, and summarizes the data into a results file. Dynamo is a workload generator that resides on the System Under Test (SUT). At Iometer's command, Dynamo generates I/O operations, records the resulting performance data, and returns the data to Iometer.

Test bed deployment

HP obtained a copy of Iometer and unzipped it on the ProLiant BL490c G6 system used as the Iometer console. Executing Iometer.exe for the first time performs the installation, much like a setup.exe does for many applications. HP engineers chose to install the desktop icon and used it to start Iometer. The Iometer configuration settings used for testing are described later in the Iometer configuration section. HP engineers created multiple Iometer data files (iobw.tst) to be used by the System Under Test (SUT) and placed a copy in the root directory of all systems to be tested. The test file was created using the Iometer Disk Targets tab. A copy of Dynamo.exe was placed on all systems to be tested.

The test was conducted in two phases. The first phase tested multiple VMs running on a single ProLiant BL490c G6 Hyper-V R2 host server. The second phase tested multiple VMs running on a two-node Windows Failover Cluster; two BL490c G6 Hyper-V R2 host servers were configured as the two Windows Failover Clustering nodes. In both phases, only the VMs were the SUTs.

To be more specific, in the first phase, one HP P4500 SAN volume was created and assigned to a single ProLiant BL490c G6 host server. Multiple VMs were created on this host server. All VM configuration and Virtual Hard Disk (VHD) files were stored on this dedicated HP P4500 SAN volume, which was attached to the BL490c G6 host server.

In the second phase, when testing with a single CSV, one HP P4500 SAN volume was created and assigned to the Windows Failover Cluster nodes. This volume was added to the Windows Cluster Shared Volumes (CSV). Multiple VMs were created on this Windows Failover Cluster, and all VM configuration and VHD files were stored on this dedicated CSV. When testing with multiple CSVs, HP created multiple HP P4500 volumes, added these volumes to the Windows Failover Cluster, and enabled them as multiple Windows CSV volumes. HP also created multiple VMs on the Windows Failover Cluster and stored one VM per CSV volume.

Cluster Shared Volumes (CSV) is a Windows Server 2008 R2 Failover Clustering feature that enables multiple Windows cluster nodes to gain read/write access to the same LUN at the same time. CSV provides the critical functionality that allows each Hyper-V R2 VM to fail over independently while using the same LUN within a Windows Failover Cluster. For more detail about CSV, please visit http://technet.microsoft.com/en-us/library/ff182346(ws.10).aspx

Iometer configuration

The screen shots in Figures 1-5 below represent the basic Iometer configuration settings used for testing. Iometer offers many configuration options that are not covered in this paper.

The Iometer Disk Targets tab shown in Figure 1 highlights the SUT computer names or IP addresses and the disk drives that were used for testing. Although not highlighted in Figure 1, the Maximum Disk Size parameter would be used to create the required Iometer data file used for testing. HP elected to create multiple data files (iobw.tst) and placed a copy on each SUT prior to testing.

Figure 1. SUT computer names and disk drives to be used

Figure 2 highlights the Assigned Access Specifications options selected for the HP testing. Block size and other parameters varied according to the workload.

Figure 2. Iometer Access Specifications tab used to specify block size, read/write percentage and random or sequential distribution

Figure 3 highlights the custom Access Specifications used in the HP testing for Hyper-V R2. HP used the name "Simulating a heavily used file share (64K, 50% Random, 50% R/W)" to define a workload that consisted of a 64KB block size, 50% read / 50% write, and 50% random / 50% sequential access.

Figure 3. The Iometer Edit Access Specification tab is used to define block size (transfer request size), read/write percentage and random or sequential distributions

Figure 4 is a snapshot of one of the test runs and highlights the Total I/Os per Second counter and its associated value. With the exception of the Update Frequency (seconds) option, which HP set to 10, HP used the default settings.

Figure 4. The Iometer Results Display tab can be used to monitor real-time performance. Additional configuration options are available for each of the displays

Figure 5 highlights the test settings HP used. The Run Time, or duration, of each test was set to 2 minutes. The Ramp Up Time, sometimes referred to as warm-up, was set to 3 seconds. The Cycling Option selected was "Cycle # Outstanding I/Os -- run step outstanding I/Os on all disks at a time". The # of Outstanding I/Os was set to start at 1 and end at 40 with a linear stepping of 2. Depending on the number of storage nodes and the RAID level used, the # of Outstanding I/Os start and end settings will need to be varied to help determine the IOPs that a given configuration is capable of producing.

Figure 5. The Iometer Test Setup tab is used to set the Run Time, Ramp Up Time, Cycling Options and # of Outstanding I/Os options
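From these settings, the approximate duration of one access specification run can be estimated. The sketch below assumes that linear stepping from 1 to 40 in steps of 2 produces the queue depths 1, 3, 5, ..., 39; the exact stepping behavior should be verified against your Iometer version:

# Estimate the time one access specification takes with the Test Setup values above.
RUN_TIME_S = 120   # Run Time: 2 minutes per step
RAMP_UP_S = 3      # Ramp Up Time: 3 seconds per step

queue_depths = list(range(1, 40, 2))   # assumed steps: 1, 3, 5, ..., 39
total_minutes = len(queue_depths) * (RUN_TIME_S + RAMP_UP_S) / 60
print(f"{len(queue_depths)} outstanding-I/O steps, roughly {total_minutes:.0f} minutes per workload")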

Figure 6 highlights the Dynamo command line for one SUT, in this case a single VM. Prior to running Dynamo, the Iometer console should be running. The /i parameter identifies the Iometer controller, the /m parameter identifies the Manager Network Name and the /n parameter identifies the Manager Name. Use /? to show the Dynamo command line syntax and available parameters.

Figure 6. One instance of Dynamo running on a virtual machine
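Based on the parameter descriptions above, a Dynamo invocation from inside a VM would look similar to the following; the controller and manager names are placeholders, not values taken from the test bed:

    dynamo /i IOMETER-CONSOLE /m VM01 /n VM01

Here IOMETER-CONSOLE stands for the network name of the system running the Iometer console, and VM01 is used for both the Manager Network Name and the Manager Name so that the VM is easy to identify in the Iometer GUI.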

Test topology

Architectural diagrams and component specifications

Figure 7 below provides an architectural diagram of the test bed HP used for the Hyper-V R2 VM performance test on the HP P4500. While the diagram depicts specific HP server and storage models, the same architecture can be used with other HP server and storage technologies that support Microsoft Hyper-V R2.

Figure 7. Architectural diagram of test bed

HP StorageWorks P4500 SAN

The HP StorageWorks P4500 SAN solution was used to test external storage performance. Each storage node in the HP P4500 SAN cluster had two physical network interfaces (NICs). A logical network interface was created for each node and configured with Adaptive Load Balancing (ALB) network interface bonding through the HP P4500 Centralized Management Console (CMC). The logical NIC balanced data transmissions automatically across both physical NICs and provided fault tolerance as well. HP created one HP P4500 SAN cluster. One or more HP P4500 SAN volumes were created and presented to the Hyper-V R2 host servers for the various test scenarios.

HP ProLiant BL490c G6 server

In the test, two HP ProLiant BL490c G6 Virtualization blade servers were deployed as the Hyper-V R2 host servers, which were used to create, configure, manage and host the VMs. In some test scenarios, these two servers were clustered. To gain access to the HP P4500 volumes, the HP DSM for MPIO was installed on each ProLiant BL490c G6 host server, and the iSCSI Initiator was configured with multi-path enabled on each host as well.

Windows Server 2008 Hyper-V R2 virtual machines (VMs)

In the test, each VM was configured with one virtual processor, 1GB of memory, one 800GB fixed-size VHD and one virtual network adapter. The virtual network adapter on each VM was connected to one NIC port of the host system to accept commands from Iometer through the external virtual network.

AD/DC server

Table 1 below details the HP ProLiant system that was used as the Active Directory/Domain Controller (AD/DC) server.

Table 1. A ProLiant DL360 G5 server used for AD/DC was running Windows Server 2008
  Server Model: ProLiant DL360 G5
  Operating System and Patch Level: Windows Server 2008
  Processor/Cores: (2) Dual-Core Intel Xeon 2.66 GHz
  Memory: 4GB

Iometer controller

Table 2 below details the HP ProLiant system that was used as the Iometer console.

Table 2. A ProLiant BL490c G6 server used for workload generation was running Windows Server 2003 R2
  Server Model: ProLiant BL490c G6
  Operating System and Patch Level: Windows Server 2003 R2
  Processor/Cores: (2) Quad-Core Intel Xeon 2.93 GHz
  Memory: 8GB

Management server

Table 3 below details a ProLiant BL460c server used for HP management that was running Windows Server 2003 R2.

Table 3. HP ProLiant system used as management console
  Server Model: ProLiant BL460c
  Operating System and Patch Level: Windows Server 2003 R2
  Processor/Cores: (2) Quad-Core Intel Xeon 3.16 GHz
  Memory: 8GB
  Additional Network Adapter: (1) NC373m PCI Express Dual Port Multifunction Gigabit Server Adapter

(2) Hyper-V R2 host servers

Two Virtualization Blades running Windows Server 2008 R2 x64 had the Hyper-V role installed. Each of the host servers had the following specifications:

Table 4. Hyper-V server specifications
  Server Model: ProLiant BL490c G6
  Operating System and Patch Level: Windows Server 2008 R2 x64 Hyper-V
  Processor/Cores: (2) Quad-Core Intel Xeon 2.93 GHz
  Memory: 32 GB
  Additional Network Adapter: (1) NC373m PCI Express Dual Port Multifunction Gigabit Server Adapter

Storage layout

The HP P4500 SAN solution with 4 nodes, consisting of 48 HP 450GB 3G SAS 15K 3.5" DP ENT HDD drives, was used for storage of the VM files. Storage node hardware RAID levels 5 and 10 were used for the performance test.

Test results

Single volume on 2 or 4 P4500 cluster nodes with RAID5

Action 1: 5 VMs / 2 Storage Nodes / 24 Spindles / Storage Nodes RAID5

- Create one 4TB volume with storage nodes in RAID5, Network RAID10 data protection level and full provisioning
- Create 5 VMs with an 800GB fixed VHD file for each VM on the volume. The total VHD file size should not fill the volume to capacity, in order to leave overhead for memory swapping and VM configuration files
- Create an Iometer file with a size equal to approximately 75% of the VHD file size; in this case, the Iometer file size is 600GB. If the Iometer file fills the VHD to capacity, the VM operating system will complain that it is running out of space
- Run all simulated workloads described in Test methodology

Action 2: 10 VMs / 4 Storage Nodes / 48 Spindles / Storage Nodes RAID5

- Create one 8TB volume with storage nodes in RAID5, Network RAID10 data protection level and full provisioning
- Create 10 VMs with an 800GB fixed VHD file for each VM on the volume. The total VHD file size should not fill the volume to capacity, in order to leave overhead for memory swapping and VM configuration files
- Create an Iometer file with a size equal to approximately 75% of the VHD file size; in this case, the Iometer file size is 600GB. If the Iometer file fills the VHD to capacity, the VM operating system will complain that it is running out of space
- Run all simulated workloads described in Test methodology

The volume, VHD and Iometer file sizing used in these actions is summarized in the sketch below.
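A minimal Python sketch of that sizing arithmetic, using only the values stated in Actions 1 and 2 above (the script is illustrative and performs no configuration):

# Sizing arithmetic for Actions 1 and 2: total VHD footprint and Iometer file size.
VHD_GB = 800
IOMETER_FILE_FRACTION = 0.75   # Iometer data file is ~75% of the VHD size

for action, vms, volume_tb in [("Action 1", 5, 4), ("Action 2", 10, 8)]:
    total_vhd_gb = vms * VHD_GB
    iometer_file_gb = VHD_GB * IOMETER_FILE_FRACTION
    print(f"{action}: {vms} VMs x {VHD_GB}GB VHD = {total_vhd_gb}GB on a {volume_tb}TB volume; "
          f"Iometer file per VM = {iometer_file_gb:.0f}GB")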

Results for number of IOPs: The increase in IOPs throughput for the "simulating a data warehouse database" workload was minimal when going from 2 storage nodes to 4 storage nodes.

Figure 8. Workloads performed on different numbers of storage nodes with the RAID5 configuration

Results for MBps: The increase in MBps throughput for the "simulating a data warehouse database" workload was minimal when going from 2 storage nodes to 4 storage nodes.

Figure 9. Throughput in MBps with the RAID5 configuration

Single volume on 2 or 4 P4500 cluster nodes with RAID10

Action 3: 3 VMs / 2 Storage Nodes / 24 Spindles / Storage Nodes RAID10

- Create one 2.4TB volume with storage nodes in RAID10, Network RAID10 data protection level and full provisioning
- Create 3 VMs with an 800GB fixed VHD file for each VM on the volume. The total VHD file size should not fill the volume to capacity, in order to leave overhead for memory swapping and VM configuration files
- Create an Iometer file with a size equal to approximately 75% of the VHD file size; in this case, the Iometer file size is 600GB. If the Iometer file fills the VHD to capacity, the VM operating system will complain that it is running out of space
- Run all simulated workloads described in Test methodology

Action 4: 6 VMs / 4 Storage Nodes / 48 Spindles / Storage Nodes RAID10

- Create one 4.8TB volume with storage nodes in RAID10, Network RAID10 data protection level and full provisioning

- Create 6 VMs with an 800GB fixed VHD file for each VM on the volume. The total VHD file size should not fill the volume to capacity, in order to leave overhead for memory swapping and VM configuration files
- Create an Iometer file with a size equal to approximately 75% of the VHD file size; in this case, the Iometer file size is 600GB. If the Iometer file fills the VHD to capacity, the VM operating system will complain that it is running out of space
- Run all simulated workloads described in Test methodology

Results for number of IOPs: The IOPs throughput for the "simulating a Microsoft Exchange database" and "simulating Microsoft Exchange Log files" workloads increased when going from 2 storage nodes to 4 storage nodes. The "simulating a heavily used file share" workload decreased and the "simulating a data warehouse database" workload was flat when going from 2 storage nodes to 4 storage nodes.

Figure 10. Workloads performed on different numbers of storage nodes with the RAID10 configuration

Results for MBps: The MBps throughput for the "simulating a Microsoft Exchange database" and "simulating Microsoft Exchange Log files" workloads increased when going from 2 storage nodes to 4 storage nodes. The "simulating a heavily used file share" workload decreased and the "simulating a data warehouse database" workload was flat when going from 2 storage nodes to 4 storage nodes.

Figure 11. Throughput in MBps with the RAID10 configuration

Single volume on 2 or 4 P4500 cluster nodes - disk I/O comparison

Results for number of IOPs: With the 4-storage-node configuration, the "simulating a heavily used file share" workload changed minimally when changing the node configuration from RAID5 to RAID10. The changes in the "simulating a data warehouse database" workload were minimal across the 2-node and 4-node, RAID5 and RAID10 tests.

Figure 12. Workloads performed on different numbers of storage nodes with different RAID configurations

Results for MBps: With the 4-storage-node configuration, the "simulating a heavily used file share" workload changed minimally when changing the node configuration from RAID5 to RAID10. The changes in the "simulating a data warehouse database" workload were minimal across the 2-node and 4-node, RAID5 and RAID10 tests.

Figure 13. Throughput in MBps with different RAID configurations

Single CSV on 2 or 4 P4500 cluster nodes with RAID5

Action 5: 5 VMs / 2 Storage Nodes / 24 Spindles / Storage Nodes RAID5

- Create one 4TB volume with storage nodes in RAID5, Network RAID10 data protection level and full provisioning
- Create 5 VMs with an 800GB fixed VHD file for each VM on the volume. The total VHD file size should not fill the volume to capacity, in order to leave overhead for memory swapping and VM configuration files
- Create an Iometer file with a size equal to approximately 75% of the VHD file size; in this case, the Iometer file size is 600GB. If the Iometer file fills the VHD to capacity, the VM operating system will complain that it is running out of space
- Add the 4TB volume to the Windows Failover Cluster CSV
- Run all simulated workloads described in Test methodology

Action 6: 10 VMs / 4 Storage Nodes / 48 Spindles / Storage Nodes RAID5

- Create one 8TB volume with storage nodes in RAID5, Network RAID10 data protection level and full provisioning
- Create 10 VMs with an 800GB fixed VHD file for each VM on the volume. The total VHD file size should not fill the volume to capacity, in order to leave overhead for memory swapping and VM configuration files
- Create an Iometer file with a size equal to approximately 75% of the VHD file size; in this case, the Iometer file size is 600GB. If the Iometer file fills the VHD to capacity, the VM operating system will complain that it is running out of space
- Add the 8TB volume to the Windows Failover Cluster CSV
- Run all simulated workloads described in Test methodology

Results for number of IOPs: The impact on IOPs throughput for the "simulating a data warehouse database" workload was minimal when going from 2 storage nodes to 4 storage nodes.

Figure 14. Workloads performed on different numbers of storage nodes with the RAID5 configuration

Results for MBps: The impact on MBps throughput for the "simulating a data warehouse database" workload was minimal when going from 2 storage nodes to 4 storage nodes.

Figure 15. MBps throughput with the RAID5 configuration

Single CSV on 2 or 4 P4500 cluster nodes with RAID10

Action 7: 3 VMs / 2 Storage Nodes / 24 Spindles / Storage Nodes RAID10

- Create one 2.4TB volume with storage nodes in RAID10, Network RAID10 data protection level and full provisioning
- Create 3 VMs with an 800GB fixed VHD file for each VM on the volume. The total VHD file size should not fill the volume to capacity, in order to leave overhead for memory swapping and VM configuration files
- Create an Iometer file with a size equal to approximately 75% of the VHD file size; in this case, the Iometer file size is 600GB. If the Iometer file fills the VHD to capacity, the VM operating system will complain that it is running out of space
- Add the 2.4TB volume to the Windows Failover Cluster CSV
- Run all simulated workloads described in Test methodology

Action 8: 6 VMs / 4 Storage Nodes / 48 Spindles / Storage Nodes RAID10

- Create one 4.8TB volume with storage nodes in RAID10, Network RAID10 data protection level and full provisioning

- Create 6 VMs with an 800GB fixed VHD file for each VM on the volume. The total VHD file size should not fill the volume to capacity, in order to leave overhead for memory swapping and VM configuration files
- Create an Iometer file with a size equal to approximately 75% of the VHD file size; in this case, the Iometer file size is 600GB. If the Iometer file fills the VHD to capacity, the VM operating system will complain that it is running out of space
- Add the 4.8TB volume to the Windows Failover Cluster CSV
- Run all simulated workloads described in Test methodology

Results for number of IOPs: The IOPs throughput increased for all workloads when going from 2 storage nodes to 4 storage nodes, except for the "simulating a heavily used file share" workload.

Figure 16. Workloads performed on different numbers of storage nodes with the RAID10 configuration

Results for MBps: The MBps throughput increased for all workloads when going from 2 storage nodes to 4 storage nodes, except for the "simulating a heavily used file share" workload.

Figure 17. MBps throughput with the RAID10 configuration

Single CSV on 2 or 4 P4500 cluster nodes - disk I/O comparison

Results for number of IOPs: In both the 2- and 4-storage-node tests, the "simulating Microsoft Exchange Log files" workload remained flat when changing from RAID5 to RAID10.

Figure 18. Workloads performed on different numbers of storage nodes with different RAID configurations

Results for MBps: In both the 2- and 4-storage-node tests, the "simulating Microsoft Exchange Log files" workload remained flat when changing from RAID5 to RAID10.

Figure 19. MBps throughput with different RAID configurations

Single volume vs. single CSV on 2 or 4 P4500 cluster nodes

Results for number of IOPs: The IOPs throughput difference between a single volume and a single CSV is minimal within the 2-storage-node RAID5 configuration. The same holds for the 4-storage-node case.

Figure 20. Workloads performed on different numbers of storage nodes with the RAID5 configuration

Results for MBps: The MBps throughput difference between a single volume and a single CSV is minimal within the 2-storage-node RAID5 configuration. The same holds for the 4-storage-node case.

Figure 21. MBps throughput with the RAID5 configuration

Results for number of IOPs: The IOPs throughput difference between a single volume and a single CSV is minimal within the 2-storage-node RAID10 configuration. The same holds for the 4-storage-node case.

Figure 22. Workloads performed on different numbers of storage nodes with the RAID10 configuration

Results for MBps: The MBps throughput difference between a single volume and a single CSV is minimal within the 2-storage-node RAID10 configuration. The same holds for the 4-storage-node case.

Figure 23. MBps throughput with the RAID10 configuration

Scale-out test with multiple CSVs on 4 P4500 cluster nodes with RAID5

Action 9: 10 VMs / 4 Storage Nodes / 48 Spindles / Storage Nodes RAID5

- Create ten 802GB volumes with storage nodes in RAID5, Network RAID10 data protection level and full provisioning
- Create 10 VMs with an 800GB fixed VHD file for each VM, and assign each 802GB volume to one VM. The VHD file size should not fill the volume to capacity, in order to leave overhead for memory swapping and VM configuration files
- Create an Iometer file with a size equal to approximately 75% of the VHD file size; in this case, the Iometer file size is 600GB. If the Iometer file fills the VHD to capacity, the VM operating system will complain that it is running out of space
- Add the ten 802GB volumes to the Windows Failover Cluster CSV
- Run all simulated workloads described in Test methodology

Results for number of IOPs: The IOPs throughput for the 4-storage-node test with the RAID5 configuration was flat, except when running the "simulating Microsoft Exchange Log files" workload.

Figure 24. Scale-out test workloads performed on 4 storage nodes with the RAID5 configuration

Results for MBps: The MBps throughput trend for the 4-storage-node test with the RAID5 configuration was flat, except when performing the "simulating Microsoft Exchange Log files" workload.

Figure 25. MBps throughput with the RAID5 configuration

Single CSV vs. multiple CSVs on 4 P4500 cluster nodes with RAID5

Results for number of IOPs: The "simulating a heavily used file share" test performed better on the single CSV. The "simulating Microsoft Exchange Log files" test performed better on a single CSV within a 2-storage-node configuration. Within the 4-storage-node configuration, however, because the "simulating Microsoft Exchange Log files" workload reached close to the maximum 1Gbps bandwidth (shown in Figure 27), the difference between a single CSV and multiple CSVs was minimal.

Figure 26. Workloads performed on different numbers of storage nodes with the RAID5 configuration

Results for MBps: The "simulating a heavily used file share" workload performed better on the single CSV. The "simulating Microsoft Exchange Log files" workload performed better on a single CSV within 2 storage nodes. Within 4 storage nodes, however, because the "simulating Microsoft Exchange Log files" workload reached close to the maximum 1Gbps bandwidth (shown in Figure 27), the difference between a single CSV and multiple CSVs was minimal.

Figure 27. MBps throughput with the RAID5 configuration

Scale-out test with a single volume or a single CSV on 2 or 4 P4500 cluster nodes

Results for number of IOPs: The IOPs throughput for the 2-storage-node test was flat with the RAID5 configuration, as was the IOPs throughput for the 4-storage-node test. The throughput increased when changing from 2 storage nodes to 4 storage nodes.

Figure 28. Workloads performed on different numbers of storage nodes with the RAID5 configuration

Results for MBps: The MBps throughput for the 2-storage-node test was flat with the RAID5 configuration, as was the MBps throughput for the 4-storage-node test. The throughput increased when changing from 2 storage nodes to 4 storage nodes.

Figure 29. MBps throughput with the RAID5 configuration

Results for number of IOPs: The IOPs throughput for the 2-storage-node test was flat with the RAID5 configuration, as was the IOPs throughput for the 4-storage-node test. The throughput increased when changing from 2 storage nodes to 4 storage nodes.

Figure 30. Workloads performed on different numbers of storage nodes with the RAID5 configuration

Results for MBps: The MBps throughput for the 2-storage-node test was flat with the RAID5 configuration, as was the MBps throughput for the 4-storage-node test. The throughput increased when changing from 2 storage nodes to 4 storage nodes.

Figure 31. MBps throughput with the RAID5 configuration

Test analysis summary

From the test results for each simulated workload, the following findings were obtained:

In the "simulating a heavily used file share" test:
- In both the single volume and single CSV tests, when configured with RAID10, changing from 2 storage nodes to 4 storage nodes slightly decreased the I/O throughput in terms of both IOPs and MBps. The reason is still under investigation. When configured with RAID5, changing from 2 to 4 storage nodes increased the throughput. Within 2 storage nodes, changing from RAID5 to RAID10 slightly increased the I/O throughput; within 4 storage nodes, the I/O throughput remained flat when changing the RAID level.
- A single volume performs at a similar throughput level to a single CSV volume if the volume and the CSV are configured with the same RAID level within a cluster with the same number of storage nodes
- A single CSV performs better than multiple CSV volumes if the single CSV and the multiple CSVs are configured with the same RAID level within a cluster with the same number of storage nodes

In the "simulating a Microsoft Exchange database" test:
- In both the single volume and single CSV tests, increasing the number of storage nodes within an HP P4500 SAN cluster, or changing the storage nodes from RAID5 to RAID10, increased the I/O throughput.
- A single volume performs at a similar throughput level to a single CSV volume if the volume and the CSV are configured with the same RAID level within a cluster with the same number of storage nodes.
- A single CSV performs similarly to multiple CSV volumes if the single CSV and the multiple CSVs are configured with the same RAID level within a cluster with the same number of storage nodes.

In the "simulating a data warehouse database" test: in all test cases, the changes in I/O throughput were minimal when increasing from 2 storage nodes to 4 storage nodes, changing from RAID5 to RAID10, or changing from a single CSV to multiple CSVs.

In the "simulating Microsoft Exchange Log files" test:
- In the single volume, 4-storage-node test, changing the storage nodes from RAID5 to RAID10 slightly decreased the I/O throughput. In the single CSV tests, changing from RAID5 to RAID10 produced similar I/O throughput in both the 2-storage-node and 4-storage-node tests. In the other single volume or single CSV tests, performance was slightly better when increasing the number of storage nodes.
- A single CSV performs slightly better than multiple CSVs if the single CSV and the multiple CSVs are configured with the same RAID level within a 2-storage-node cluster. Within a 4-storage-node configuration, however, the multiple CSVs performed marginally better than a single CSV.

Recommendations

Based on the test results provided in this document, HP makes the following general-purpose recommendations:

- Adding more storage nodes to an existing HP P4500 SAN cluster will generally improve throughput performance.
- Besides the number of storage nodes, you need to consider other factors such as the storage node hardware RAID level and the data replication level (an illustrative capacity comparison follows these recommendations):
  1. If capacity and redundancy have the same priority in your environment, HP recommends RAID5 for the storage nodes combined with a 2-way data replication level to protect your VM files and data. With RAID5 and 2-way data replication, the HP P4500 cluster provides more storage capacity than a RAID10 configuration with 2-way data replication.
  2. If the safety of VM files and data is at the top of your priority list, RAID10 combined with 2-way data replication will give you better protection. Combined with the HP P4500 redundancy and high availability features, an HP P4500 Multi-site SAN solution provides better protection for all critical VMs and data for your business.
- The performance difference between HP P4500 volumes and CSV volumes created on the HP P4500 SAN is minimal. Using HP P4500 SAN volumes with or without Windows Failover Cluster CSV will not impact your volume's throughput performance. Therefore, using Windows Failover Cluster to provide high availability is the logical step to protect the environment in which your VMs run.
- If the number of storage nodes in the HP P4500 cluster is fixed, for example at 4 cluster storage nodes, the scale-out benefit of multiple volumes/CSVs is minimal in most of the test cases. To improve the overall throughput of multiple volumes/CSVs, you need to add more storage nodes to the existing HP P4500 cluster.
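The capacity trade-off behind recommendation 1 can be illustrated with rough arithmetic. The sketch below assumes 12 x 450GB drives per node (48 drives across 4 nodes, as in the tested configuration), a single RAID5 set per node with roughly one drive's worth of parity overhead, RAID10 as simple mirrored pairs, and Network RAID10 as 2-way replication; actual P4500 usable capacities will differ once spares, metadata and formatting overhead are accounted for:

# Rough, illustrative usable-capacity comparison for a 4-node P4500 cluster.
NODES = 4
DRIVES_PER_NODE = 12
DRIVE_GB = 450

raw_per_node = DRIVES_PER_NODE * DRIVE_GB
raid5_per_node = raw_per_node * (DRIVES_PER_NODE - 1) / DRIVES_PER_NODE   # ~1 drive of parity (assumed)
raid10_per_node = raw_per_node / 2                                        # mirrored pairs

for label, per_node in [("RAID5", raid5_per_node), ("RAID10", raid10_per_node)]:
    after_node_raid = per_node * NODES
    after_network_raid10 = after_node_raid / 2   # Network RAID10 keeps two copies of every block
    print(f"{label}: ~{after_node_raid / 1000:.1f}TB after node RAID, "
          f"~{after_network_raid10 / 1000:.1f}TB usable after Network RAID10")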

For more information

HP ActiveAnswers, www.hp.com/solutions/activeanswers
HP Hyper-V Virtualization, www.hp.com/solutions/activeanswers/hyper-v
HP StorageWorks products, www.hp.com/go/storageworks
HP StorageWorks P4000 G2 products, www.hp.com/go/p4000
HP BladeSystem, www.hp.com/go/bladesystem
HP BladeSystem c-class server blades, www.hp.com/servers/cclass
HP ProCurve products, www.procurve.com

To help us improve our documents, please provide feedback at http://h20219.www2.hp.com/activeanswers/us/en/solutions/technical_tools_feedback.html.

Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries.

4AA1-9557ENW, Created June 2010