Dell iSCSI Storage for VMware

By Dave Jaffe, Dell Enterprise Technology Center
dell.com/techcenter
May 2007

Contents

Executive Summary
Introduction
The Dell iSCSI Storage Arrays
  The PowerVault NX1950
  The Dell/EMC FC/iSCSI Arrays
  The Dell/EMC AX150i
VMware and iSCSI
  ESX Configuration
  Connecting the ESX Server Host to the iSCSI Array
  Configure iSCSI Software Initiator in ESX
  Configure Target Storage Array and Create the LUNs
  Format Storage
The Test Setup
  iSCSI Performance Comparison
Results
  iSCSI Performance Comparison
Conclusions and Recommendations
Acknowledgements

Tables
Table 1: CX3 FC/iSCSI Array Features (all figures are for two Storage Processors)
Table 2: Dell iSCSI Storage Array Features
Table 3: iSCSI Performance Data

Figures
Figure 1: The Dell PowerVault NX1950 High Availability storage array
Figure 2: The Dell/EMC CX3-10c, 20c and 40c with One Disk Array Enclosure (Standby Power Supply not shown)
Figure 3: The Dell/EMC AX150i (Uninterruptible Power Supply not shown)
Figure 4: ESX Two-NIC Configuration for iSCSI
Figure 5: ESX Four-NIC Configuration for iSCSI
Figure 6: Test Setup (AX150i, CX3-20c and CX3-40c not shown)
Figure 7: iSCSI Performance

For Further Information and Discussion

Visit our Dell TechCenter wiki at http://www.delltechcenter.com/page/iscsi+and+vmware for more details on how to attach Dell iSCSI storage to VMware, to discuss this paper, and to contribute your expertise on iSCSI for VMware.

Talk Back

Tell us how the Dell Scalable Enterprise Technology Center can help your organization better simplify, utilize, and scale enterprise solutions and platforms. Send your feedback and ideas to Enterprise_Techcenter@dell.com or visit us at dell.com/techcenter.

Section 1: Executive Summary

New iSCSI storage arrays from Dell Inc. provide cost-effective, easy-to-deploy shared storage for applications such as the VMware Infrastructure 3 server virtualization software. This paper compares the features and performance of the Dell PowerVault NX1950, the Dell/EMC CX3 line of Fibre Channel/iSCSI combination storage arrays, and the Dell/EMC AX150i; gives instructions for using them with VMware; and recommends when to use each.

Section 2: Introduction

Server virtualization software such as VMware runs best when the datacenter is organized into farms of servers connected to shared storage. By placing the virtual machines' virtual disks on storage area networks accessible to all of the virtualized servers, virtual machines can be migrated from one server to another within the farm for load balancing or failover. VMware Infrastructure 3 uses the VMotion live migration facility in its Distributed Resource Scheduling feature, and its High Availability component takes advantage of shared storage to quickly boot a virtual machine on a different ESX server after the original ESX server fails. Shared storage is key to enabling VMotion because, when a virtual machine is migrated from one physical server to another, the virtual machine's virtual disk does not actually move; only its ownership changes while it continues to reside in the same place.

The traditional networking technology employed in such storage area networks uses the Fibre Channel (FC) interface. Products such as the Dell/EMC line of Fibre Channel storage arrays provide excellent performance, reliability and functionality, but can require specialized hardware and skills to set up and maintain. A Fibre Channel storage network starts with the fabric: FC host bus adapters (HBAs) in each server are connected by fiber cables to one or more FC switches, which in turn can network multiple storage arrays, each supporting a scalable number of high-speed disk enclosures. An application's request for an input or output (IO) to storage originates as a SCSI (Small Computer System Interface) request, is packed into an FC packet by the FC HBA, and is sent down the fiber cable to the FC switch for dispatch to the storage array that contains the requested data, similar to the way Internet Protocol (IP) packets are sent over Ethernet.

For smaller IT shops, or for those just starting out with virtualization, an alternative shared storage paradigm is emerging that employs iSCSI (Internet SCSI) to connect servers to storage. In this case the communication between the server and the data storage uses standard Ethernet network interface cards (NICs), switches and cables: SCSI IO requests are packed into standard Internet Protocol packets and routed to the iSCSI storage through Ethernet switches and routers. With iSCSI, customers can leverage existing networking expertise and equipment to simplify the implementation of a storage network. Like Fibre Channel, iSCSI supports block-level data transmission for all applications, not just VMware. For security, iSCSI provides CHAP authentication as well as IPsec encryption. An excellent introduction to iSCSI is Travis Vigil's article in the May 2007 Power Solutions, "iSCSI: Changing the Economics of Storage, Part 1: Understanding iSCSI in Enterprise Environments" (http://www.dell.com/downloads/global/power/ps2q07-20070335-vigil.pdf).

Servers can communicate with iSCSI storage in two ways. The first uses an add-in card called an iSCSI hardware initiator or host bus adapter, analogous to the Fibre Channel HBA, which connects directly to the datacenter's Ethernet infrastructure. The second performs the iSCSI conversion in software and sends the Ethernet packets through the standard Ethernet NIC in the server. This study focuses on the software initiator, which is supported by VMware.

In the past two years Dell has introduced three iSCSI platforms: the Dell PowerVault NX1950, the Dell/EMC CX3 line of combination storage arrays (which offer both iSCSI and Fibre Channel connectors), and the Dell/EMC AX150i. The three are designed for different-sized customers and varying environments, but all are easy to configure to provide storage to VMware hosts. This paper describes Dell's three iSCSI storage offerings, documents how to configure them with VMware, and then, using a database workload running on two virtual machines, presents performance results comparing the three platforms.

The performance results shown in this paper represent a single, high-level view of the performance of the different arrays with a simple virtualization workload running on only five disks in each array. Typical customer scenarios will employ larger disk arrays supporting a variety of applications, which may produce different performance results.

Section 3: The Dell iSCSI Storage Arrays

The PowerVault NX1950

The PowerVault NX1950 is an integrated solution that provides both block-level application data via iSCSI and file data via the standard Ethernet network protocols NFS (for Linux) and CIFS (for Windows). The PowerVault NX1950 High Availability model consists of two clustered 1-rack-unit (1U), two-socket servers running Microsoft Windows Unified Data Storage Server 2003 (WUDSS), connected via Serial Attached SCSI (SAS) to an MD3000 disk array with redundant embedded RAID controllers containing a total of 1 GB of cache. WUDSS provides an integrated console for storage system management.

The PowerVault NX1950 offers customers an attractive entry point into storage virtualization. With this unified storage solution, small and midsized businesses and remote offices can both consolidate file data and virtualize application data in one device. The PowerVault NX1950 comes integrated with an MD3000 disk array and can be expanded for future growth with up to two MD1000 Disk Expansion Enclosures (with up to 15 drives each). For this test a configuration with fifteen 146 GB 15K RPM SAS drives (shown in Figure 1) was used.

Figure 1: The Dell PowerVault NX1950 High Availability storage array

The Dell/EMC FC/iSCSI Arrays

The Dell/EMC CX3 FC/iSCSI arrays provide both end-to-end 4 Gb/s Fibre Channel and iSCSI storage area network capability. The line includes the CX3-10c, CX3-20c and CX3-40c, with features as shown in Table 1. Each array includes two redundant Storage Processors (SPs) and a Standby Power Supply that enables the use of write cache on the storage processors. Management is provided by Navisphere, part of an extended suite of EMC management and backup tools. While both the FC and iSCSI ports of an array may be used at the same time, a given host can connect to an array through only one of these protocols at a time. This study tested the CX3-20c and the CX3-40c, both with one DAE4P disk array enclosure containing fifteen 73 GB 15K RPM drives (as shown in Figure 2).

Feature                                   CX3-10c      CX3-20c      CX3-40c
Physical size, including Standby Power
Supply and one Disk Array Enclosure       5U           5U           5U
CPUs                                      2x 1.8 GHz   2x 2.8 GHz   4x 2.8 GHz
Cache (GB)                                2            4            8
Front-end FC ports                        4            4            4
Front-end iSCSI ports                     4            8            8
Back-end FC loops                         1            1            2
Maximum hard drives                       60           120          240

Table 1: CX3 FC/iSCSI Array Features (all figures are for two Storage Processors; 1 rack unit (U) = 1.75 in.)

Figure 2: The Dell/EMC CX3-10c, 20c and 40c with One Disk Array Enclosure (Standby Power Supply not shown)

The Dell/EMC AX150i

Positioned as an entry-level storage array within the Dell/EMC line, the Dell/EMC AX150i can house two storage processors and twelve 3.5-inch Serial ATA II drives in a single 2U rack enclosure. iSCSI connectivity is provided through four 1 Gb/s Ethernet ports. The dual-storage-processor system contains 1 GB of mirrored cache and uses an Uninterruptible Power Supply to enable the write cache. Navisphere Express, a simplified version of Navisphere, manages the array. The system used in this test was a dual-storage-processor configuration with twelve 250 GB 7.2K RPM drives (shown in Figure 3).

Figure 3: The Dell/EMC AX150i (Uninterruptible Power Supply not shown)

The features of the four Dell iSCSI storage arrays tested are shown in Table 2.

Model     No. of   Disk Size  Disk Speed  Disk Type  Size Incl. Standby Power     Cache   Max No.    List Price Incl.     Starting At
          Disks    (GB)       (RPM)                  Supply (1U = 1.75 in.)       Size    of Disks   Storage Software*    Price**
AX150i    12       250        7.2K        SATA II    3U                           1 GB    12         $17,239              $5,900
NX1950    15       146        15K         SAS        5U                           1 GB    45         $48,719              $20,391
CX3-20c   15       73         15K         FC4        5U                           4 GB    120        $77,052              $52,862
CX3-40c   15       73         15K         FC4        5U                           8 GB    240        $110,559             $86,369

* The list prices shown include the Workgroup version of Navisphere and on-site SAN implementation services.
** "Starting At" price configurations:
   AX150i: single SP, three 250 GB/7.2K SATA II drives
   NX1950: single NX head, Standard Edition OS, two 36 GB/15K SAS drives
   CX3-20c: Departmental Navisphere, five 146 GB/10K drives
   CX3-40c: Departmental Navisphere, five 146 GB/10K drives
All prices are as of 5/16/07.

Table 2: Dell iSCSI Storage Array Features
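As a rough way of comparing the positioning of the tested configurations, the short calculation below derives raw capacity and list price per raw gigabyte from the Table 2 values. It is only an illustrative sketch: the numbers are copied straight from the table, and the result ignores RAID overhead, cache, spindle speed and the different "Starting At" configurations.

    # Rough positioning math from Table 2: raw capacity and list price per raw GB.
    # Values are copied from the table; this ignores RAID overhead and performance.
    arrays = {
        # model:    (disks, disk_size_gb, list_price_usd)
        "AX150i":   (12, 250,  17239),
        "NX1950":   (15, 146,  48719),
        "CX3-20c":  (15, 73,   77052),
        "CX3-40c":  (15, 73,  110559),
    }

    for model, (disks, size_gb, price) in arrays.items():
        raw_gb = disks * size_gb
        print(f"{model:8s}  raw capacity {raw_gb:5d} GB   "
              f"list price ${price:,}   ${price / raw_gb:6.2f} per raw GB")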

Section 4: VMware and iSCSI

ESX Configuration

This section describes the steps needed to attach iSCSI storage to a VMware ESX Server host. Once attached and formatted, the storage can be presented to guest virtual machines as virtual disks that appear to the guest as local storage. An alternative method, in which the guest attaches directly to iSCSI storage through a software iSCSI initiator supplied with the guest's operating system, was not employed in this test.

Connectivity from a host running VMware ESX Server to iSCSI storage is provided through a built-in software initiator (support for hardware iSCSI initiators will be provided at a later date). The physical network interface cards (NICs) that connect to the Ethernet network containing the iSCSI storage must be included in a VMware virtual switch that also includes the ESX Service Console and the VMkernel (which carries VMotion traffic as well as iSCSI packets). For a two-NIC system it is recommended that the NICs be teamed as shown in Figure 4. Teaming provides NIC/cable failover and balances iSCSI traffic across the NICs to multiple iSCSI targets with different IP addresses. Since VM and iSCSI traffic are mixed in this configuration, CHAP authentication and IPsec encryption should be employed on the iSCSI connection. (Alternatively, in a two-NIC configuration the VM and iSCSI traffic can each be placed on its own non-teamed NIC for total isolation.)

Figure 4: ESX Two-NIC Configuration for iSCSI

The network configuration is created in VirtualCenter (VC) using the Virtual Infrastructure Client: highlight the ESX server to be connected, select the Configuration tab, then Networking. If more than two NICs are available in the ESX server host, it is recommended that two virtual switches be created, one hosting the Service Console and VMkernel (including iSCSI and VMotion traffic) and one dedicated to virtual machine (VM) traffic. The two NICs carrying iSCSI traffic should be cabled to redundant Ethernet switches. An example for four NICs, showing two two-NIC teams, is shown in Figure 5.

Figure 5: ESX Four-NIC Configuration for iSCSI

Connecting the ESX Server Host to the iSCSI Array

The process of connecting an ESX Server host to an iSCSI storage array has three steps. First, the iSCSI software initiator in ESX is configured to point to the storage target. Then the target storage array is configured to allow access from the ESX host, and the host's disk LUNs (Logical Unit Numbers) are created. Finally, the storage is formatted with the VMware File System (VMFS).

Configure iSCSI Software Initiator in ESX

As a security measure, iSCSI access is disabled by default in ESX. To enable it, from the ESX host in VC:

Select ESX Server > Configuration > Security Profile > Properties and enable the Software iSCSI Client.

Next, select Configuration > Storage Adapters and highlight the iSCSI Software Adapter. Note the SAN Identifier; in this test, for host r3esx1950c, the identifier is iqn.1998-01.com.vmware:r3esx1950c-7b658143. In the Details pane select Properties to bring up the iSCSI Initiator Properties page. Select Configure, then select Status: Enabled if necessary. Finally, select the Dynamic Discovery tab, select Add, and on the Add Send Targets Server page enter the IP address of the iSCSI storage array, changing the port from its default of 3260 if necessary.

Configure Target Storage Array and Create the LUNs

First, the LUNs that will be assigned to VMware are created on the target storage array: for the NX1950 use the built-in WUDSS console; for the CX3 arrays and the AX150i use Navisphere and Navisphere Express, respectively. The ESX server's iSCSI identifier then needs to be entered into the target array.

On the NX1950 management console, select Microsoft iSCSI Software Target > iSCSI Target, right-click the NX1950 that has the LUNs to be assigned, and select Properties. In the iSCSI Initiators tab select Add, select Identifier Type IQN, and enter the full SAN Identifier in the Value field.

On the CX arrays, right-click the array in Navisphere, select Connectivity Status, then New to create a new Initiator Record. Specify the full SAN Identifier as both Initiator Name and Host Name and specify the IP address of the ESX host. Then use Connect Hosts to add this initiator to the Storage Group containing the disk LUN to be presented to the ESX server.

Format Storage

Back in VirtualCenter, select ESX Server > Configuration > Storage Adapters, then select Rescan > Scan for New Storage Devices. The LUN(s) defined on the iSCSI storage array should appear. If this is the first host to access a LUN, the storage must be formatted with the VMware file system, VMFS3. From the Configuration tab select Storage (SCSI, SAN and NFS) and click Add Storage. Select Disk/LUN, then select the LUN to be formatted. Supply a Datastore Name (for example, NX1950-1). For Disk/LUN Formatting, accept the default maximum file size (256 GB), block size (1 MB) and capacity, or change them if desired. The datastore should appear in the list of storage devices. When the next ESX server is connected to the same iSCSI storage array, the datastore should appear after the Scan for New Storage step.

Once the storage LUN has been added to an ESX host, VMs can be created on that host with their virtual disks on the LUN. Because the LUN is shared among all ESX server hosts connected to the iSCSI storage array, the VMs can be live migrated from one host to another using VMotion. Complete details of this process, with screen shots, are available on the Dell TechCenter wiki at http://www.delltechcenter.com/page/iscsi+and+vmware.
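For administrators who prefer to script these steps rather than use the Virtual Infrastructure Client, roughly the same sequence can be driven from the ESX 3.x service console. The sketch below is illustrative only and is not taken from this paper: esxcfg-firewall, esxcfg-swiscsi, vmkiscsi-tool, esxcfg-rescan and vmkfstools are standard ESX 3.x service-console utilities, but the adapter name (shown here as vmhba40), the target IP address, the device path and the datastore name are assumptions that vary by host and ESX release, so verify them in your own environment before use.

    # Illustrative sketch of the Section 4 steps, scripted against the ESX 3.x
    # service console. Must be run on the ESX host itself. Placeholders
    # (adapter name, target IP, device path) are assumptions, not values from
    # this paper.
    import subprocess

    def run(cmd):
        """Echo and run one service-console command."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    TARGET_IP = "10.0.0.50"                            # iSCSI array IP (placeholder)
    ADAPTER   = "vmhba40"                              # software iSCSI adapter; name varies by ESX release
    LUN_PATH  = "/vmfs/devices/disks/vmhba40:0:0:1"    # placeholder device path for the new LUN
    DATASTORE = "NX1950-1"                             # example datastore name from the paper

    # 1. Open the service-console firewall for the software iSCSI client and
    #    enable the ESX software iSCSI initiator.
    run(["esxcfg-firewall", "-e", "swISCSIClient"])
    run(["esxcfg-swiscsi", "-e"])

    # 2. Point the software initiator at the array (dynamic discovery, default port 3260).
    run(["vmkiscsi-tool", "-D", "-a", TARGET_IP, ADAPTER])

    # 3. Rescan so the LUNs presented by the array show up.
    run(["esxcfg-rescan", ADAPTER])

    # 4. Format the new LUN with VMFS3 and label the datastore
    #    (only on the first host; later hosts simply rescan and see it).
    run(["vmkfstools", "-C", "vmfs3", "-b", "1m", "-S", DATASTORE, LUN_PATH])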

Section 5: The Test Setup

iSCSI Performance Comparison

To provide a basic performance comparison among the three iSCSI storage offerings, a PowerEdge 1950 ESX host with two quad-core Intel Xeon X5355 processors running at 2.66 GHz and 8 GB of memory was connected, as described in the previous section, to five-disk RAID 5 LUNs on all four storage arrays. For the tests in this study, conducted by the Dell Enterprise Technology Center in April 2007, a two-NIC team was used for the ESX Service Console, VMkernel (including iSCSI traffic) and virtual machine (VM) traffic, as shown in Figure 4.

Two Windows Server 2003 R2 VMs with Microsoft SQL Server 2005, running a medium-size (1 GB) version of the Dell DVD Store database test application (available as open source from linux.dell.com/dvdstore), were created with their 10 GB virtual disks stored on the five-disk LUN on one array. The VMs were then cloned three times, with the copies' virtual disks placed on similar LUNs on each of the other three arrays. Each VM was configured with 4 virtual CPUs and 512 MB of memory, so that each virtual CPU had roughly the resources of a single processor core. The SQL Server database on each VM was created on that VM's local disk, so it, too, is stored on the LUN. Since the database size (1 GB) exceeds the memory available to the VM (512 MB), SQL Server must access its content from the LUN during the test.

The two VMs attached to a single iSCSI storage array were then stressed using an order-entry simulator (included as part of the DVD Store kit) that models users logging into the online DVD Store, browsing for DVDs by title, artist or category, and finally making a purchase. A complete cycle of logging in, browsing and purchasing counts as one order. The number of users and the amount of time they spend thinking during an order can be controlled by the simulator. In this test the think time was set to 0 seconds, so each driver thread can be thought of as a "super user" entering orders without stopping. Starting with 2 such driver threads per VM, a workload was applied to each VM pair for 4 minutes and the average orders per minute and response time were recorded. The VMs' databases were then restored to their original state and a larger workload applied. The workload was increased until the orders-per-minute rate stopped increasing; at that point the storage array could handle no more IOs per second (IOPS) even as the applied workload increased. The test was then repeated in turn on the VMs residing on the other three iSCSI storage arrays. The test setup is shown in Figure 6.

Figure 6: Test Setup (AX150i, CX3-20c and CX3-40c not shown)
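The DVD Store driver described above is a closed-loop, order-entry load generator. The sketch below is a simplified, hypothetical Python rendering of the same measurement loop, not the actual DVD Store driver code from linux.dell.com/dvdstore: place_order() is a stub standing in for one login/browse/purchase cycle against the SQL Server VM, and the thread counts, zero think time, fixed-length runs and "stop when orders per minute stop increasing" rule mirror the description in the text. The paper ran this against two VMs per array; the sketch models a single VM.

    # Simplified sketch of the closed-loop load test described above (not the
    # real DVD Store driver). place_order() is a stub so the sketch is
    # self-contained; the real driver issues SQL transactions against the VM.
    import threading
    import time

    RUN_SECONDS = 4 * 60                      # each load level ran for 4 minutes
    THREAD_STEPS = [2, 5, 10, 15, 20, 25]     # driver threads per VM, as in Table 3

    def place_order():
        """One complete order: log in, browse by title/artist/category, purchase."""
        time.sleep(0.01)                      # stub for the real database work

    def driver(stop_event, results):
        """A 'super user' with zero think time: place orders back to back."""
        while not stop_event.is_set():
            start = time.time()
            place_order()
            results.append(time.time() - start)   # per-order response time

    def run_level(threads_per_vm):
        results, stop = [], threading.Event()
        workers = [threading.Thread(target=driver, args=(stop, results))
                   for _ in range(threads_per_vm)]
        for w in workers:
            w.start()
        time.sleep(RUN_SECONDS)
        stop.set()
        for w in workers:
            w.join()
        opm = len(results) / (RUN_SECONDS / 60.0)
        avg_ms = 1000.0 * sum(results) / max(len(results), 1)
        return opm, avg_ms

    best = 0.0
    for n in THREAD_STEPS:
        # (In the paper, the databases were restored between load levels.)
        opm, avg_ms = run_level(n)
        print(f"{n:2d} threads/VM: {opm:8.0f} orders/min, {avg_ms:5.1f} ms avg response")
        if opm <= best:       # throughput stopped increasing: storage is saturated
            break
        best = opm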

Section 6: Results

iSCSI Performance Comparison

The results from the iSCSI storage test are shown in Table 3 and plotted in Figure 7.

Driver Threads   CX3-40c (iSCSI)          CX3-20c (iSCSI)          NX1950                   AX150i
per VM           Orders/min   Resp (ms)   Orders/min   Resp (ms)   Orders/min   Resp (ms)   Orders/min   Resp (ms)
2                10543        21          9282         23          6719         33          4486         51
5                21426        25          18349        30          11174        51          7930         73
10               29570        37          26655        41          14980        75          9494         124
15               33803        49          28891        58          14682        119         9486         188
20               35074        63          29937        75          15566        149         9641         245
25               35173        80          30894        92          15437        190         10231        291

Table 3: iSCSI Performance Data

Figure 7: iSCSI Performance

For this workload the CX3-40c, CX3-20c, NX1950 and AX150i max out at around 35,000, 31,000, 15,500 and 10,000 orders per minute, respectively. It should be emphasized that these are maximum performance rates for five-disk LUNs on the four storage arrays; in typical use each storage array will drive many such LUNs from a much larger number of VMs (and non-VMware sources) across multiple disk cabinets. Think of these results as a performance snapshot giving the relative performance of a particular OLTP application on similarly configured disk sets on the four storage arrays. Real-world performance of the arrays depends greatly on the actual application.
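The rounded maxima quoted above follow directly from the peaks in Table 3. The small calculation below, using those Table 3 peak values, also expresses each array's peak as a multiple of the AX150i's to make the relative positioning explicit; it is a convenience sketch, not additional measured data.

    # Peak orders per minute observed in Table 3 for each array, and each
    # array's peak expressed relative to the AX150i baseline.
    peaks = {"CX3-40c": 35173, "CX3-20c": 30894, "NX1950": 15566, "AX150i": 10231}

    baseline = peaks["AX150i"]
    for model, opm in peaks.items():
        print(f"{model:8s} peak {opm:6d} orders/min  ({opm / baseline:.1f}x the AX150i)")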

Section 7: Conclusions and Recommendations

The three Dell iSCSI platforms tested provide a range of performance and functionality matching the various levels of iSCSI storage applications and user environments.

For datacenters requiring fast and robust storage, especially those that have already invested in Dell/EMC hardware and expertise, the CX3 FC/iSCSI series provides a ready means to deploy large iSCSI SANs, either standalone or in conjunction with existing Dell/EMC Fibre Channel SANs.

The Dell PowerVault NX1950 is a good entry-level iSCSI virtualization solution and is ideal for customers who have file-intensive environments and are looking to deploy virtualization for a small subset of application data. As a unified solution, the PowerVault NX1950 offers customers a compelling value proposition: versatility, ease of use and expandability at an affordable price.

Customers looking solely for an iSCSI SAN should consider the Dell/EMC AX150i, a purpose-built, entry-level, easy-to-use, low-cost iSCSI storage array for workgroups, small to medium-sized businesses and branch offices of large corporations.

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Dell and PowerEdge are trademarks of Dell Inc. EMC, Navisphere and PowerPath are registered trademarks and Access Logix is a trademark of EMC Corp. VMware, VirtualCenter and VMware Infrastructure 3 are registered trademarks of VMware, Inc. Intel and Xeon are registered trademarks of Intel Corp. Red Hat is a registered trademark of Red Hat Inc. Linux is a registered trademark of Linus Torvalds. Oracle is a registered trademark of Oracle Inc. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell disclaims proprietary interest in the marks and names of others.

Pricing, specifications, availability, and terms of offers may change without notice. Taxes, fees, shipping, handling and any applicable restocking charges are extra, and vary.

Copyright 2007 Dell Inc. All rights reserved. Reproduction in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell. Information in this document is subject to change without notice.

Section 8: Acknowledgements

The author would like to thank Bala Chandrasekaran for a continuing discussion of VMware issues, Ashish Toshniwal for providing the CX3-40c, and all those who made valuable suggestions during the review process for this paper.