WHITE PAPER
VMware vSphere 4: Exchange Server on NFS, iSCSI, and Fibre Channel


Table of Contents

Executive Summary
Introduction
Hardware
    Server
    NetApp FAS6030
Software
    ESX 4.0 Release Candidate
    Domain Controller Virtual Machine
    Exchange Virtual Machines
    LoadGen Test System
Testing
    Anatomy of a LoadGen Test
    Fibre Channel, iSCSI, and NFS Tests
    Procedure
Results
    Performance Comparison of ESX 3.5 and ESX 4.0
Conclusion

Executive Summary

Microsoft Exchange Server 2007 is a storage-intensive workload used by a large number of IT organizations to provide email for their users. In this paper, a large installation of 16,000 Exchange users was configured across eight virtual machines (VMs) on a single VMware vSphere 4 server. The performance of this configuration was measured when using storage supporting the Fibre Channel, iSCSI, and NFS storage protocols. The results show that each protocol achieved great performance, with Fibre Channel leading the way and iSCSI and NFS following closely behind.

Introduction

At VMworld Europe 2008, VMware announced results of a test demonstrating 16,000 Exchange 2007 users on a single ESX Server. The ESX Server was running eight virtual machines, each supporting 2,000 users. This multiple virtual machine, building-block approach allows for a much higher number of users on a single physical server than had been possible in the past.

In order to test how the Fibre Channel, iSCSI, and NFS storage protocols would perform under a high-stress load for an extended period of time, VMware conducted a similar Exchange Server test. A NetApp FAS6030 was selected because it supported all three protocols, which made it possible to compare them using the same storage and servers. A single Fibre Channel or Ethernet storage connection was used in order to include bandwidth capability as part of the test. This is important because the additional bandwidth capacity of Fibre Channel is often cited as its key advantage over iSCSI or NFS.

Hardware

Server

An HP ProLiant DL580 G5, a four-socket rack-mounted server, was configured with four Quad-Core Intel Xeon X7350 2.93GHz processors for testing. The server had 128GB of RAM, a QLogic HBA, and an Intel quad-port Gigabit Ethernet card.

HP ProLiant DL580 G5
    Processor: 4 x Quad-Core Intel Xeon X7350 2.93GHz
    Memory: 128GB
    Disk Controller: Smart Array P400
    Fibre Channel HBA: QLogic QLE2462 4 Gigabit PCI-e
    Network Adapters: Intel PRO/1000 Quad Port Gigabit Ethernet

Connectivity between the storage array and the host was limited to a single connection of each type. This means that the 4 Gigabit Fibre Channel connection had higher bandwidth capacity than the single Gigabit Ethernet connection used for both iSCSI and NFS. As this is a key differentiator between the three protocols as commonly deployed, it was also part of the testing. An EMC DS-5000B switch was used for the Fibre Channel connection, and an Extreme Networks Summit 400-48t switch was used for the iSCSI and NFS connection.
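The bandwidth gap between these single links can be quantified with a quick back-of-the-envelope comparison; the short Python sketch below uses nominal line rates only (usable throughput is lower once encoding and protocol overhead are accounted for).

# Nominal line-rate comparison of the single storage links used in this test.
# Real usable throughput is lower because of encoding and protocol overhead.
links_gbps = {
    "4 Gb Fibre Channel (single port)": 4.0,
    "1 Gb Ethernet, iSCSI/NFS (single port)": 1.0,
}

for name, gbps in links_gbps.items():
    print(f"{name}: {gbps:.0f} Gbps nominal (~{gbps * 1000 / 8:.0f} MB/s)")

ratio = links_gbps["4 Gb Fibre Channel (single port)"] / links_gbps["1 Gb Ethernet, iSCSI/NFS (single port)"]
print(f"The Fibre Channel link has ~{ratio:.0f}x the nominal bandwidth of the single Ethernet link")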

NetApp FAS6030

The NetApp FAS6030 supports the Fibre Channel, iSCSI, and NFS storage protocols, which made it an excellent choice for comparing the three. The FAS6030 was configured for this test with 114 disks, which were split into four aggregates: two 40-disk Data aggregates, one 26-disk Log aggregate, and a 4-disk Root aggregate. All aggregates were created using NetApp's double-parity RAID. Two 2.5TB volumes were created in each of the 7.5TB Data aggregates, one of which was used for NFS and the other for FC and iSCSI. The remaining 2.5TB on each aggregate was reserved for snapshot cache. The 5.1TB Log aggregate was divided into two 2.5TB volumes, with one for NFS and the other for FC and iSCSI. There was no need to keep snapshots of the logs, so no additional space needed to be reserved.

[Figure: Storage layout on the NetApp FAS6030 — two 40-disk Data aggregates, each holding an NFS Data volume, two Data LUNs, and snapshot cache; a 26-disk Log aggregate holding an NFS Log volume and two Log LUNs; all presented to the ESX server.]

Disks were assigned at the aggregate level, with the volumes and LUNs created within the aggregate spread evenly across it. This resulted in good performance within the aggregate, as all spindles were actively used. On a NetApp array, the NFS mount points are configured at the volume level. LUNs (which can be accessed via iSCSI or Fibre Channel) are created within a volume and then assigned to hosts. In order to switch a LUN between iSCSI and Fibre Channel, the initiator group that it is mapped to was simply changed. Two 1.17TB LUNs were created on each of the Data volumes, which means that four data LUNs were presented to the ESX host. These were mounted as VMFS storage and used to hold the data disks for the Exchange virtual machines. Because NFS is supported directly by ESX, the 2.5TB NFS Data volumes were mounted directly, with no VMFS partition used.

Software

ESX 4.0 Release Candidate

A key component of VMware vSphere is ESX Server. ESX Server 4.0 Release Candidate build 14815 was installed on the HP DL580 G5. It was configured to use the NetApp FAS6030 by way of the Fibre Channel, iSCSI, and NFS storage options. This included configuring a virtual network switch to connect the software-based iSCSI initiator of ESX to the FAS6030 over a dedicated Gigabit Ethernet network. This same virtual switch was used for the connection to access NFS mounts on the NetApp array. The Fibre Channel based data connections do not go through the virtual switch, but instead are managed from the Storage Adapters section, where the connection between the Fibre Channel HBA and the storage is configured.
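The NFS datastores in this test were added through the VI Client; purely as an illustration of the same operation, here is a present-day pyVmomi sketch that mounts an NFS export as an ESX datastore. The hostnames, credentials, export path, and datastore name are placeholders, not values from the test.

# Hypothetical sketch: mount a NetApp NFS export as an ESX datastore with pyVmomi.
# Host name, credentials, export path, and datastore label are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab setup; skip certificate validation
si = SmartConnect(host="esx-host.example.com", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                           # single-host environment, as in this test
    view.Destroy()

    spec = vim.host.NasVolume.Specification(
        remoteHost="netapp-filer.example.com",    # NetApp controller address (placeholder)
        remotePath="/vol/nfs_data_vol1",          # NFS export path (placeholder)
        localPath="nfs_data_1",                   # datastore name as seen by ESX
        accessMode="readWrite",
    )
    ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print("Mounted NFS datastore:", ds.name)
finally:
    Disconnect(si)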

Domain Controller Virtual Machine

In order to support the Exchange 2007 testing environment, a new domain controller virtual machine was created. It was configured with two vCPUs and 8GB of RAM. Windows Server 2003 R2 Enterprise Edition was installed with DNS services, and the server was then promoted to domain controller of a new domain with the dcpromo command. In addition to a domain controller, Exchange Server 2007 has three server roles that were required. The Hub Transport and Client Access roles were installed on this domain controller virtual machine. These roles are not heavily stressed in the testing, but must be present for Exchange 2007. The Domain Controller virtual machine was kept on the same HP DL580 G5 server as the Exchange virtual machines for the testing.

Exchange Virtual Machines

In order to support 16,000 users, eight Exchange virtual machines, each designed to support 2,000 users, were created. Each virtual machine was configured according to Microsoft's best practices with 14GB of RAM and 2 vCPUs. A 20GB operating system disk for each virtual machine was created on the local disks of the DL580 G5. The virtual machines were installed with Windows Server 2003 R2 64-bit Enterprise Edition, VMware Tools, PowerShell 1.0, .NET Framework 2.0 with SP1, and the other fixes required for Exchange 2007 SP1. The virtual machines were then added to a Windows domain created for the purpose of this testing. Exchange 2007 SP1 was installed with just the Mailbox role and management tools options selected.

Each virtual machine was assigned two 150GB virtual disks on each of two of the data LUNs and one 300GB virtual disk from the log LUN, for a total of six virtual disks: one for the operating system, four for mailbox databases, and one for logs. The mailbox database virtual disks were placed so that half of the virtual machines used Data Aggregate 1 and the other half used Data Aggregate 2. All virtual disks were attached to the virtual machines using the default LSI Logic Parallel virtual SCSI controller. The partitions were aligned following best practices: the VMFS partitions are aligned automatically because they are created using the VI Client, and the partitions inside the Windows guests were aligned using the diskpart command before formatting.

LoadGen Test System

Microsoft provides Exchange Load Generator, commonly referred to as LoadGen, as a tool to simulate Exchange users. This tool was new with Exchange 2007 and replaces LoadSim, the tool used with earlier versions of Exchange. A Dell PowerEdge 1950 installed with Windows Server 2003 64-bit Enterprise Edition was used as the LoadGen server.

Testing

Anatomy of a LoadGen Test

There are two phases to a LoadGen test. The first is the user creation and initialization phase, in which LoadGen creates the specified number of users and pre-populates their inboxes with a number of items. The provided Heavy user profile was used, which means that each user was created with approximately 250MB of data. The initialization of the 16,000 users proceeds at a rate of one user every 0.38 minutes, or a total of about 4 days (see the sketch below). To preserve this initialized state of the data, a snapshot of each of the data aggregates was taken on the NetApp array at this point. The snapshot was then used to reset the mailbox databases back to the initialized state before each new test. The snapshot restore takes only a few seconds to complete, which is a big time savings over the hours it would take to reinitialize.
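The initialization figures above can be sanity-checked with simple arithmetic. The short Python sketch below reproduces the roughly four-day initialization time from the per-user rate and estimates the total mailbox data created, using only the values quoted in the text.

# Back-of-the-envelope check of the LoadGen initialization phase described above.
users = 16_000
minutes_per_user = 0.38          # initialization rate quoted in the text
mailbox_mb = 250                 # Heavy profile: ~250MB of data per mailbox

total_minutes = users * minutes_per_user
total_days = total_minutes / (60 * 24)
total_data_tb = users * mailbox_mb / 1024 / 1024

print(f"Initialization time: {total_minutes:,.0f} minutes (~{total_days:.1f} days)")
print(f"Approximate mailbox data created: {total_data_tb:.1f} TB")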
The second phase of the LoadGen test is to run a simulation. For these tests, each simulation was run for eight hours and 30 minutes to model a typical workday, with a little bit of added time to account for getting the test started. During this simulation, LoadGen used a range of operations, including sending mail, browsing the calendar, making appointments, and deleting items, to generate its load against the Exchange servers. At the end of the test, LoadGen reports results in terms of response time for each of these operations.

Fibre Channel, iSCSI, and NFS Tests

The initialization of users was done while the virtual machines were configured to access the NetApp array over Fibre Channel. As mentioned earlier, this initialized state was captured with a snapshot of the two data aggregates on the NetApp array. The NetApp array uses initiator groups to define how LUNs are accessed. A Fibre Channel group and an iSCSI group were created, each having either the Fibre Channel WWN or the iSCSI initiator string of the ESX server. To switch from Fibre Channel to iSCSI, the LUNs were simply moved from one group to the other (a scripted sketch of this step follows below). Switching to NFS requires the VMDK files for the data and log disks to be copied to the volume backing the NFS mount. To do this, the NFS volumes were added to the ESX server, and then a simple copy was performed from within the service console to place a copy of the VMDK files on the NFS volumes.
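The protocol switch is just a LUN remapping on the array. As an illustration only, the following Python sketch scripts that unmap/map step over SSH using Data ONTAP 7-mode style commands; the controller address, igroup names, and LUN paths are placeholders, since the actual names used in the test were not published.

# Hypothetical sketch: move data and log LUNs from the Fibre Channel igroup to the
# iSCSI igroup on the NetApp controller, as described above.
# Assumes Data ONTAP 7-mode CLI over SSH; all names and paths are placeholders.
import paramiko

FILER = "netapp-filer.example.com"
LUNS = ["/vol/data_vol1/data_lun1", "/vol/data_vol1/data_lun2",
        "/vol/data_vol2/data_lun3", "/vol/data_vol2/data_lun4",
        "/vol/log_vol/log_lun1", "/vol/log_vol/log_lun2"]
OLD_IGROUP, NEW_IGROUP = "esx_fc_group", "esx_iscsi_group"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(FILER, username="root", password="password")
try:
    for lun in LUNS:
        for cmd in (f"lun unmap {lun} {OLD_IGROUP}",
                    f"lun map {lun} {NEW_IGROUP}"):
            stdin, stdout, stderr = client.exec_command(cmd)
            print(cmd, "->", stdout.read().decode().strip() or "ok")
finally:
    client.close()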

Procedure

The basic procedure for each test was the same regardless of the storage protocol used. The ESX Server was restarted. Next, the Domain Controller virtual machine was booted and given approximately 10 minutes to finish starting. All of the Exchange virtual machines were then started at the same time and given approximately 25 minutes; during this time the LoadGen server was rebooted. The LoadGen tool was then launched on its server and a test was started for all 16,000 users with the initialization phase skipped. The test was monitored with esxtop from the ESX service console, LoadGen on the LoadGen server, perfmon within the virtual machines, and sysstat on the NetApp array.

Results

Using Microsoft Exchange Load Generator, or LoadGen, with the Outlook 2007 Online Heavy user profile has been the standard way to test. This user profile equates to each user performing 47 tasks per day. In order to test an even higher level of work that might more closely resemble email power users, a Double Heavy profile was created by editing the Heavy profile and doubling all of the user tasks, resulting in 94 tasks per user per day.

The LoadGen results show the response time, or latency, for the set of tasks that the Exchange users performed. These results show that all three storage protocols were able to provide sufficient performance to support all 16,000 Exchange users. Fibre Channel provided the best performance, but iSCSI and NFS were close behind.

Of all of LoadGen's user tasks, Send Mail is most commonly used to indicate overall performance. For Send Mail latencies, a result of greater than 1,000 milliseconds signifies a degraded user experience. The average and 95th percentile latencies are shown in the charts below (a short sketch showing how these statistics are computed appears after Figure 1). All results are the average over the entire eight-hour run.

[Figure 1. Average Send Mail Latency (ms) for Fibre Channel, iSCSI, and NFS under the Heavy Online Profile and the Double Heavy Online Profile.]

With the Heavy Online Profile, all three protocols were within 4 milliseconds of each other in average Send Mail latency, with Fibre Channel reporting the best time. The increased load of the Double Heavy Online Profile better distinguishes the performance of the three protocols, with Fibre Channel still showing the best performance.
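For reference, the average and 95th percentile statistics reported in Figures 1 and 2 can be computed from the raw per-operation latencies that LoadGen records. The minimal Python sketch below shows the calculation with made-up sample values.

# Minimal sketch: compute average and 95th percentile Send Mail latency from a
# list of per-operation latencies (milliseconds). The sample values are made up;
# in practice they would come from LoadGen's per-task output.
import math

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

send_mail_latencies_ms = [95, 110, 102, 130, 480, 99, 105, 121, 88, 760]

avg = sum(send_mail_latencies_ms) / len(send_mail_latencies_ms)
p95 = percentile(send_mail_latencies_ms, 95)

print(f"Average Send Mail latency: {avg:.0f} ms")
print(f"95th percentile latency:   {p95} ms")
print("Degraded experience" if p95 > 1000 else "Within the 1,000 ms threshold")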

While average latency is a good metric, 95th percentile latency is better for understanding the worst-case performance for an individual user. The 95th percentile metric is often used to capture quality of service and to establish Service Level Agreement metrics. The 95th percentile Send Mail latencies are higher overall but show relative results similar to the average latencies. All three protocols are still fairly close under the Heavy profile, with a little more separation under the Double Heavy profile. Fibre Channel once again provided the best performance.

[Figure 2. 95th Percentile Send Mail Latency (ms) for Fibre Channel, iSCSI, and NFS under the Heavy Online Profile and the Double Heavy Online Profile.]

The average I/O operations per second (IOPS) was about 3,700 for the Heavy profile and 5,300 for the Double Heavy profile, which shows how much the increase in tasks affects the test workload. It takes about half of an eight-hour LoadGen test before the test reaches a steady state. The graph below shows the IOPS for all three protocols during the Heavy user profile run. IOPS initially started off near 6,500 but ended around 3,300 for all three protocols.

[Figure 3. IOPS for Fibre Channel, iSCSI, and NFS during the eight-hour Heavy Profile LoadGen Test.]
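Dividing these aggregate IOPS figures by the 16,000 simulated users gives the per-user I/O rate, a number commonly used when sizing Exchange storage. The quick calculation below uses the averages quoted above.

# Per-user IOPS derived from the steady-state averages reported above.
users = 16_000
avg_iops = {"Heavy profile": 3_700, "Double Heavy profile": 5_300}

for profile, iops in avg_iops.items():
    print(f"{profile}: {iops:,} IOPS total, ~{iops / users:.2f} IOPS per user")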

The same graph for the Double Heavy workload explains the separation in the latencies under higher stress. At the beginning of the test, Fibre Channel was able to reach a higher maximum number of IOPS than iSCSI and NFS. NFS and iSCSI then remained at a higher level of IOPS than Fibre Channel while they cleared the backlog. After this first part of the test, all three protocols performed approximately the same number of IOPS, much as in the graph for the Heavy profile.

[Figure 4. IOPS for Fibre Channel, iSCSI, and NFS during the eight-hour Double Heavy LoadGen Test.]

Due to the bursty nature of Exchange I/O, the iSCSI and NFS traffic briefly exceeded the capacity of a single Gigabit Ethernet connection, which resulted in higher latency. In order to show that this was the bottleneck, an additional test was run using iSCSI with four network ports, each with a Gigabit Ethernet connection. This configuration more closely matches the bandwidth available to the 4 Gigabit Fibre Channel connection. The results show that when bandwidth is not constrained, latency remained lower and the iSCSI connection behaved much more like the Fibre Channel connection. The figure below shows the average disk latency results; a rough bandwidth estimate illustrating why a single link can become the bottleneck is sketched after the figure.

[Figure 5. Disk Latency during the eight-hour Double Heavy LoadGen Test, comparing Fibre Channel, iSCSI with a single NIC, and iSCSI with four NICs.]
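The bandwidth argument can be made concrete by converting an IOPS level into link utilization. The sketch below does this for a range of assumed average I/O sizes and an assumed burst IOPS level; neither value is reported in this paper, so the output is illustrative only.

# Rough, illustrative estimate of the network load a given IOPS level generates,
# compared with the nominal capacity of the links used in the test. The burst
# IOPS level and the average I/O sizes are assumptions, not measured values.
def link_utilization(iops, io_size_kb, link_gbps):
    throughput_mb_s = iops * io_size_kb / 1024          # MB/s of storage traffic
    link_mb_s = link_gbps * 1000 / 8                    # nominal link rate, no protocol overhead
    return throughput_mb_s, 100.0 * throughput_mb_s / link_mb_s

burst_iops = 9_000                                      # assumed burst level for illustration
links = {"1 x 1 GbE (iSCSI/NFS)": 1.0, "4 x 1 GbE (iSCSI)": 4.0, "4 Gb FC": 4.0}

for io_kb in (8, 32):                                   # assumed average I/O sizes
    for name, gbps in links.items():
        mb_s, pct = link_utilization(burst_iops, io_kb, gbps)
        print(f"{burst_iops:,} IOPS @ {io_kb}KB over {name}: ~{mb_s:.0f} MB/s (~{pct:.0f}% of link)")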

An additional measure of performance for these tests is CPU utilization, which shows the impact that each of the storage protocols had on overall system utilization. Fibre Channel had the lowest impact because less processing must be done by the host system, with the majority of the work handled by the HBA. iSCSI and NFS required additional work on the host, which resulted in higher ESX CPU utilization.

[Figure 6. ESX Host CPU Utilization (%) for Fibre Channel, iSCSI, and NFS under the Heavy Online Profile and the Double Heavy Online Profile.]

Performance Comparison of ESX 3.5 and ESX 4.0

There are many configuration differences between the test described in this paper and the 16,000-user Exchange test published a little over a year ago, and these differences make a direct comparison difficult. The storage arrays are different, with a different number of total disks configured with different types of RAID. Additionally, the Exchange 2007 architecture is different. In the earlier test, each virtual machine was loaded with the Hub Transport, Client Access, and Mailbox roles and had a dedicated domain controller virtual machine on another ESX host. In the tests described in this paper, each virtual machine had only the Mailbox role but shared a common domain controller that also held the Hub Transport and Client Access roles. This domain controller/hub transport/client access virtual machine was also running on the same server as the Exchange virtual machines. These are significant differences.

There are also quite a few similarities. Both tests used four-socket, quad-core Intel servers with 128GB of RAM. The virtual machines were configured the same, with two vCPUs and 14GB of RAM, running Windows Server 2003 x64 and Exchange 2007 with SP1. LoadGen was used with the same test profile of Heavy Online Outlook 2007 for 16,000 users and an eight-hour test. The remaining major difference between the two tests is the version of ESX Server, allowing a loose comparison of ESX 3.5 and ESX 4.0.

The performance of the 16,000-user configuration on ESX 4.0 was better than on ESX 3.5. On the Heavy workload with Fibre Channel storage, the average Send Mail latency was 135 milliseconds with ESX Server 3.5 in the previous test and just 110 milliseconds with ESX Server 4.0 in this test. The 95th percentile latency comparison shows a similar advantage for ESX 4.0: 440 milliseconds versus 374 milliseconds for ESX 3.5 and ESX 4.0, respectively.

Conclusion

All three protocols tested were shown to be able to support 16,000 Exchange users in both the Heavy and Double Heavy workloads. These workloads stressed the 16-core HP DL580 server to approximately 60 percent CPU utilization and pushed an average of over 5,000 IOPS to the attached NetApp storage array. This throughput was maintained while the average latency stayed under 220 milliseconds and the 95th percentile latency remained under one second. Fibre Channel provided the best performance with the lowest CPU utilization. NFS and iSCSI performance was comparable, with slightly longer latencies and slightly higher CPU utilization.

VMware, Inc. 3401 Hillview Ave Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright © 2009 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. VMW_09Q2_WP_vSphere_Exch_NFS_iSCSI_FC_P1_R1