Achieving a Million I/O Operations per Second from a Single VMware vSphere 5.0 Host
Performance Study
TECHNICAL WHITE PAPER
Table of Contents

Introduction
Executive Summary
Software and Hardware
Test Workload
Multi-VM Tests
    Experimental Setup
        Server
        Storage Area Network
        Virtual Platform
        Virtual Machines
        Iometer Workload
        Test Bed
    Results
        Scaling I/O Operations per Second with Multiple VMs
        Scaling I/O Throughput as I/O Request Size Increases
        CPU Cost of an I/O Operation with LSI Logic SAS and PVSCSI Virtual Controllers
Performance of a Single VM
    Experimental Setup
        Server
        Storage Area Network
        Virtual Platform
        Virtual Machines
        Iometer Workload
        Test Bed
    Results
        Scaling I/O Operations per Second with Virtual SCSI Controllers
Conclusion
Appendix A
    Selecting the Right PCIe Slots for the HBAs
    Building a Scalable Test Infrastructure
    Assigning Virtual Disks to a VM
    Estimation of CPU Cost of an I/O Operation
References
About the Author
Acknowledgements
Introduction

One of the essential requirements for a platform supporting enterprise datacenters is the capability to support the extreme I/O demands of applications running in those datacenters. A previous study [1] has shown that vSphere can easily handle demands for high I/O operations per second. The experiments discussed in this paper strengthen that assertion by demonstrating that a vSphere 5.0 virtual platform can satisfy an extremely high level of I/O demand originating from the hosted applications, as long as the hardware infrastructure meets the need.

Executive Summary

The results obtained from the experiments show that:
• A single vSphere 5.0 host is capable of supporting a million+ I/O operations per second.
• 300,000 I/O operations per second can be achieved from a single virtual machine (VM).
• I/O throughput (bandwidth consumption) scales almost linearly as the request size of an I/O operation increases.
• I/O operations on vSphere 5.0 systems with Paravirtual SCSI (PVSCSI) controllers use fewer CPU cycles than those with LSI Logic SAS virtual SCSI controllers.

Software and Hardware

Because of its capabilities and rich feature set, VMware vSphere has become one of the industry's leading choices as a platform on which to build private and public clouds. Significant improvements have been made to vSphere's storage stack in successive releases. These improvements enable vSphere to easily satisfy the ever-increasing demand for I/O throughput from almost all enterprise applications.

The Symmetrix VMAX storage system is a high-end, highly scalable storage system from EMC [2]. It allows the scaling of storage system resources through common building blocks called Symmetrix VMAX engines, from one VMAX engine with one storage bay to eight VMAX engines with a maximum of ten storage bays. Each VMAX engine contains four quad-core processors, up to 128GB of memory, and up to 16 front-end ports for host connectivity.

The Emulex LightPulse LPe12002 is an 8Gbps Fibre Channel PCI Express dual-channel host bus adapter [3]. The LPe12002 delivers some of the industry's highest performance, CPU efficiency, and reliability, making it a convenient choice for mission-critical and I/O-intensive applications in cloud environments.

Test Workload

Iometer was used to generate the I/O load in all the experiments discussed in this paper [4]. Iometer was configured to generate 16 or 32 outstanding I/Os (OIOs) with 100% random, 100% read requests. The size of the I/O requests varied among 512 bytes, 1KB, 2KB, 4KB, and 8KB, depending on the experiment. These I/O sizes represent the I/O characteristics of a wide gamut of transaction-oriented applications such as databases and enterprise mail servers.
Multi-VM Tests

Experimental Setup

Server
• 4 Intel Xeon E7-4870 processors, 2.40GHz, 10 cores each
• 256GB memory
• 6 dual-port Emulex LPe12002 HBAs (8Gbps)

Storage Area Network
• VMAX with 8 engines
• 4 quad-core processors and 128GB of memory per engine
• 64 front-end 8Gbps Fibre Channel (FC) ports
• 64 back-end 4Gbps FC ports
• 960 × 15K RPM, 450GB FC drives
• 1 FC switch

Virtual Platform
• vSphere 5.0

Virtual Machines
• Windows Server 2008 R2 EE x64
• 4 vCPUs
• 8GB memory
• 3 virtual SCSI controllers

Iometer Workload
• 1 worker for each pair of virtual disks (five workers per VM)
• 100% random, 100% read
• An access region of 6.4GB in each virtual disk (a total of 384GB on 60 virtual disks)
• 16 or 32 OIOs depending on the test case
• Request size of 512 bytes, 1KB, 2KB, 4KB, or 8KB depending on the test case
Test Bed

A single physical server running vSphere 5.0 was used for the experiments. Six identical VMs were created on this host. Six dual-port 8Gbps FC HBAs were installed in the vSphere host, and all 12 FC ports of these HBAs were connected to an FC switch. The VMAX was connected to the same FC switch via 60 8Gbps FC links, with each link connected to a separate front-end FC port on the array. Refer to "Building a Scalable Test Infrastructure" in the appendix for more details.

A total of 480 RAID-1 groups were created on top of the 960 FC drives in the VMAX array, and a single LUN was created in each RAID group. A 250GB metaLUN was created on each set of 8 LUNs, resulting in a total of 60 metaLUNs. All 60 metaLUNs were exposed to the vSphere host, and a separate VMFS datastore was created on each of them. To ensure high concurrency and effective use of all the available I/O processing capacity, each VMFS datastore was configured to use a fixed path via a dedicated FC port on the array.

A single thick* virtual disk was created in each VMFS datastore. The virtual disks were assigned to the VMs as follows:
• Each VM was assigned a total of 10 virtual disks for Iometer testing.†
• All 10 virtual disks in a VM were distributed across three virtual SCSI controllers. Refer to "Assigning Virtual Disks to a VM" for more details on the distribution of virtual disks.
• The virtual SCSI controller type was varied between LSI Logic SAS and PVSCSI for the comparison tests.

* Also known as eagerzeroed thick.
† These virtual disks were separate from the virtual disk that contained the operating system and Iometer application files.

A total of 384GB (6.4GB in each virtual disk) of disk space was used by Iometer to generate the I/O load in all VMs. The VMAX, with its 1TB of aggregate memory, was able to cache most of the disk blocks belonging to this 384GB of disk space; a sanity check of this layout arithmetic is sketched below.
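As a quick consistency check on the numbers above, the following short Python sketch (not part of the original study; all constants are taken from the text) verifies the storage layout and cache-coverage arithmetic:

```python
# Sanity check of the multi-VM test bed arithmetic described above.
# All constants come from the text of this paper.

fc_drives = 960                        # 15K RPM FC drives in the VMAX
raid1_groups = fc_drives // 2          # RAID-1 pairs -> 480 groups/LUNs
luns_per_metalun = 8
metaluns = raid1_groups // luns_per_metalun   # -> 60 metaLUNs/datastores

vms = 6
vdisks_per_vm = 10
total_vdisks = vms * vdisks_per_vm     # -> 60, one per datastore

access_region_gb = 6.4                 # Iometer working set per virtual disk
working_set_gb = total_vdisks * access_region_gb   # -> 384GB

vmax_engines = 8
memory_per_engine_gb = 128
array_cache_gb = vmax_engines * memory_per_engine_gb  # -> 1024GB (1TB)

assert metaluns == total_vdisks == 60
assert working_set_gb < array_cache_gb  # working set fits in the array cache

print(f"{metaluns} metaLUNs, {working_set_gb:.0f}GB working set, "
      f"{array_cache_gb}GB array cache")
```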
Figure 1. Configuration for Multi-VM Tests
Results

For each experiment, metrics such as I/O operations per second (IOPS), megabytes per second (MBps), and the latency of a single I/O operation measured in milliseconds (ms) were collected to analyze the I/O performance under the different test scenarios.

Scaling I/O Operations per Second with Multiple VMs

This test illustrates the scalability of I/O operations per second on a single host as the number of VMs generating the I/O load increases. Iometer in each VM was configured to generate a similar I/O load using 32 OIOs and an 8KB request size. The number of VMs generating the I/O load was increased from one to six. In each case, the total I/O operations per second and the average latency of each I/O operation were measured in each VM. This test was done with PVSCSI controllers only.

Figure 2. Scaling I/O Operations per Second with Multiple VMs (aggregate IOPS and I/O latency in ms versus the number of virtual machines)

Figure 2 shows the aggregate number of I/O operations per second achieved as the number of VMs was increased. Aggregate I/O operations per second scaled from 200,000 to slightly above 1 million as the number of VMs was increased from one to six. The latency of I/O operations remained under 2 milliseconds throughout the test, increasing by only 10% as the I/O load on the host increased. These numbers can be cross-checked with Little's law, as the sketch below shows.
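Little's law states that the number of outstanding I/Os equals IOPS multiplied by latency. Assuming the 32-OIO Iometer setting applies per virtual disk (an assumption about the workload configuration, not stated explicitly above), the reported IOPS and latency are consistent:

```python
# Little's law cross-check for the six-VM data point in Figure 2.
# Assumption: the 32-OIO setting applies to each of the 60 virtual disks.

vms, disks_per_vm, oio_per_disk = 6, 10, 32
outstanding = vms * disks_per_vm * oio_per_disk   # 1,920 I/Os in flight

iops = 1_000_000          # aggregate IOPS reported for six VMs

# Little's law: latency = outstanding I/Os / throughput
latency_ms = outstanding / iops * 1000
print(f"predicted latency ~ {latency_ms:.2f} ms")  # ~1.92 ms, under 2 ms
```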
Scaling I/O Throughput as I/O Request Size Increases

This test demonstrates the scalability of the I/O throughput achieved on the host as the size of the I/O operations increases from 512 bytes to 8KB. Iometer in each VM was configured to generate a similar I/O load using 16 OIOs and a varying request size. The number of VMs generating the I/O load was fixed at six. The total I/O operations per second, I/O throughput (megabytes per second), and the average latency of each I/O operation were measured in each VM. The test was run with both LSI Logic SAS and PVSCSI virtual controllers in order to compare their impact on overall I/O performance.

Figure 3. I/O Operations per Second with Different Request Sizes (aggregate IOPS and I/O latency in ms for LSI Logic SAS and PVSCSI at request sizes from 512 bytes to 8KB)

Figure 3 shows the aggregate I/O performance of all six VMs generating a similar I/O load, each time with a different request size. For each request size, experiments were conducted with both LSI Logic SAS and PVSCSI controllers. As seen in Figure 3, the aggregate I/O operations achieved from the six VMs remained well over 1 million for all I/O sizes except 8KB. The average latency of a single I/O operation remained well under a millisecond with each virtual SCSI controller type, again except at the 8KB request size. The increase in I/O latency as measured by Iometer was due to a corresponding increase in the I/O latency at the storage array.

Note that the aggregate IOPS observed here with the 8KB request size and six VMs was slightly lower than in the test described in "Scaling I/O Operations per Second with Multiple VMs" with the same request size and number of VMs; this was primarily because this test used half as many OIOs. For the same reason, the I/O latency in this test was lower than in that earlier test.
Figure 4. Scaling I/O Throughput with the Request Size (aggregate throughput in MBps for LSI Logic SAS and PVSCSI versus I/O size)

The corresponding aggregate I/O throughput observed on the host is shown in Figure 4. As seen in the figure, throughput scales almost linearly as the I/O request size doubles in each iteration. vSphere utilized the available I/O bandwidth to scale almost linearly from 592MBps to almost 8GBps as the I/O request size increased from 512 bytes to 8KB; the arithmetic behind this relationship is illustrated in the sketch below. The near-linear scaling clearly indicates that the vSphere software stack did not present any bottleneck to the I/O workload originating from the VMs. The slight departure from linear scaling at the 8KB request size was due to a corresponding increase in the I/O latency observed in the storage array.

Figures 3 and 4 also compare the aggregate I/O performance of the host when using the two different virtual SCSI controllers, LSI Logic SAS and PVSCSI. The PVSCSI adapter provided 7% to 10% better throughput than LSI Logic SAS in all of the cases.
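Throughput is simply IOPS multiplied by request size, so near-linear throughput scaling follows from IOPS staying roughly constant as the request size doubles. A minimal sketch using the two endpoints reported above:

```python
# Throughput = IOPS x request size, checked at the two reported endpoints.
KB = 1024
MB = 1024 * KB

# 512-byte requests at ~592MBps aggregate throughput:
iops_512 = 592 * MB / 512            # ~1.21 million IOPS

# 8KB requests at ~8GBps aggregate throughput:
iops_8k = 8 * 1024 * MB / (8 * KB)   # ~1.05 million IOPS

# IOPS stays near a million while the request size grows 16x,
# so throughput grows almost linearly with request size.
print(f"~{iops_512 / 1e6:.2f}M IOPS at 512B, ~{iops_8k / 1e6:.2f}M IOPS at 8KB")
```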
CPU Cost of an I/O Operation with LSI Logic SAS and PVSCSI Virtual Controllers

Another important metric for comparing the performance of the LSI Logic SAS and PVSCSI adapters is the CPU cost of an I/O operation. A detailed explanation of the estimation methodology is given in "Estimation of CPU Cost of an I/O Operation" in the appendix. Figure 5 compares the CPU cost of an I/O operation with an LSI Logic SAS virtual SCSI adapter to that with a PVSCSI adapter for an I/O request size of 8KB.

Figure 5. CPU Cost of an I/O Operation with LSI Logic SAS and PVSCSI Adapters (IOPS and normalized cycles per I/O for the two virtual SCSI controller types)

As seen in Figure 5, the PVSCSI adapter provides 8% better throughput at a 10% lower CPU cost. These results clearly show that PVSCSI adapters are capable of providing better throughput at a lower CPU cost than LSI Logic SAS adapters under extreme load conditions.
Performance of a Single VM

The next study focuses on the performance of a single VM.

Experimental Setup

Server
• 4 Intel Xeon E7-4870 processors, 2.40GHz, 10 cores each
• 256GB memory
• 4 dual-port Emulex LPe12002 HBAs (8Gbps)

Storage Area Network
• VMAX with 5 engines
• 4 quad-core processors and 128GB of memory per engine
• 40 front-end 8Gbps FC ports
• 40 back-end 4Gbps FC ports
• 960 × 15K RPM, 450GB FC drives
• 1 FC switch

Virtual Platform
• vSphere 5.0

Virtual Machines
• Windows Server 2008 R2 EE x64
• 16 vCPUs
• 16GB memory
• 1 to 4 virtual SCSI controllers*

Iometer Workload
• Two workers for every 10 virtual disks
• 100% random, 100% read
• An access region of 6.4GB in each virtual disk
• 16 OIOs
• Request size of 8KB

* vSphere 5.0 supports a total of 4 virtual SCSI controllers per VM.

Test Bed

A single VM running on the vSphere 5.0 host was used for this test. The number of vCPUs was increased from 4 to 16, and the amount of memory assigned to the VM was increased from 8GB to 16GB. The storage layout created for the multi-VM tests was reused for this study. Iometer was configured to use a total of 40 virtual disks. The virtual disks were assigned to the single test VM in increments of 10, with each set of 10 virtual disks attached to a new virtual SCSI controller. Iometer was configured to generate the same I/O load on every virtual disk, which ensured a constant load on each virtual SCSI controller.
Each set of 10 virtual disks was accessed through one dual-port HBA in the host but through separate FC ports on the array. A total of four dual-port HBAs in the host and 40 front-end FC ports in the array were used to access all 40 virtual disks. A quick check of the per-port bandwidth implied by this layout follows Figure 6.

Figure 6. Test Configuration for Single-VM Test
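As a rough consistency check (my arithmetic from the numbers in this paper, assuming ~800MBps of usable line rate per 8Gbps FC port): even at the 300,000 IOPS the single VM ultimately achieved with 8KB requests, each of the eight host-side FC ports carries well under its line rate:

```python
# Per-port bandwidth check for the single-VM configuration.
# Assumes ~800MBps usable line rate per 8Gbps FC port (8b/10b encoding).

iops = 300_000                     # single-VM result at 8KB request size
request_kb = 8
aggregate_mbps = iops * request_kb / 1024     # ~2,344 MBps aggregate

hbas, ports_per_hba = 4, 2
ports = hbas * ports_per_hba                  # 8 host-side FC ports

per_port_mbps = aggregate_mbps / ports        # ~293 MBps per port
print(f"~{per_port_mbps:.0f} MBps per port versus ~800 MBps line rate")
```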
Results

As in the multi-VM test case, metrics such as I/O operations per second and the latency of a single I/O operation measured in milliseconds were used to study the I/O performance.

Scaling I/O Operations per Second with Virtual SCSI Controllers

This test case studies the scalability of I/O operations per second as the number of virtual SCSI controllers in the VM is increased.

Figure 7. Scaling I/O Operations per Second with 8KB Request Size in a Single VM (aggregate IOPS and I/O latency in ms versus the number of virtual SCSI controllers)

Figure 7 shows the aggregate I/O operations per second and the average latency of an I/O operation achieved from a single VM as the number of virtual SCSI controllers was increased. As shown in the figure, aggregate I/O operations per second increased linearly with the number of controllers, while the latency of I/O operations remained nearly flat after a small increase from one to two virtual SCSI controllers.
Conclusion

vSphere offers a virtualization layer that, in conjunction with some of the industry's biggest storage platforms, can be used to create private and public clouds capable of supporting any level of I/O demand from the applications running in those clouds. The results of the experiments conducted at EMC labs illustrate that:
• vSphere can easily support a million+ I/O operations per second from a single host out of the box.
• A single VM running on vSphere 5.0 is capable of supporting 300,000 I/O operations per second at an 8KB request size.
• vSphere can scale almost linearly in terms of I/O throughput (bandwidth consumption) as the request size of an I/O operation increases.
• vSphere's Paravirtual SCSI controller offers a lower CPU cost per I/O operation than the LSI Logic SAS virtual SCSI controller.

The results presented in this paper are by no means the upper limit of the I/O operations achievable through any of the components used for the tests. The intent is to show that a vSphere virtual infrastructure, such as the one used for this study, can easily handle even the most extreme I/O demands that exist in datacenters today. Customers can virtualize their most I/O-intensive applications with confidence on a platform powered by vSphere 5 and EMC VMAX.
Appendix A

Selecting the Right PCIe Slots for the HBAs

Achieving a million+ I/O operations per second with an 8KB request size through the smallest number of HBAs required the HBAs to collectively provide 8GBps of throughput. Although five dual-port 8Gbps HBAs could theoretically meet the 8GBps throughput requirement, at that throughput each HBA would be operating near its saturation point. To ensure sufficient concurrency and still have enough headroom to push a million+ IOPS, six HBAs were used for the exercise. This required each HBA to support at least 1.3GBps. To sustain that kind of throughput, the HBAs had to be placed in PCIe slots capable of supporting the required throughput for each HBA. The HBAs were placed in the slots shown in Table 1, which also lists the maximum theoretical throughput of each slot; the arithmetic is summarized in the sketch after the table.

Figure 8. PCIe Slot Configuration of the Server Used for the Tests

SLOT NUMBER    PCIe VERSION    NUMBER OF LANES    MAXIMUM THROUGHPUT**
1              Gen 2           4                  2GBps
2, 3, 4, 6     Gen 2           8                  4GBps
7              Gen …           …                  …GBps

Table 1. Details of the PCIe Slots

** Each PCIe Gen 2 lane provides a theoretical maximum of 500MBps.
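A minimal sketch of the slot-selection arithmetic above, using the paper's stated 500MBps theoretical maximum per Gen 2 lane:

```python
# PCIe slot throughput arithmetic from Appendix A.
GEN2_LANE_MBPS = 500               # theoretical max per PCIe Gen 2 lane

def slot_throughput_gbps(lanes: int) -> float:
    """Maximum theoretical throughput of a Gen 2 slot, in GBps."""
    return lanes * GEN2_LANE_MBPS / 1000

target_gbps = 8.0                  # 1M+ IOPS at 8KB needs ~8GBps aggregate
hbas = 6
per_hba_gbps = target_gbps / hbas  # ~1.33GBps required per HBA

# Both the x4 (2GBps) and x8 (4GBps) Gen 2 slots exceed the requirement:
print(f"x4 slot: {slot_throughput_gbps(4)}GBps, "
      f"x8 slot: {slot_throughput_gbps(8)}GBps, "
      f"needed per HBA: {per_hba_gbps:.2f}GBps")
```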
Building a Scalable Test Infrastructure

To build a scalable virtual infrastructure that could support a million+ IOPS, a few sizing tests were conducted first. A single VM with 2 vCPUs and 8GB of memory was chosen as the basic building block. This VM was assigned 4 thick virtual disks, each created on a separate VMFS datastore. Each datastore was created on a separate metaLUN in the VMAX array and was accessed through a single 8Gbps FC port in the host, but through dedicated FC ports on the array. Iometer was configured to issue 100% random, 100% read requests to 6.4GB of storage space on each disk. With this Iometer profile, the VM produced 120K IOPS with a 512-byte request size and 100K IOPS with an 8KB request size. If the vCPU and storage configuration of the VM were doubled, it was theoretically possible to achieve 240K and 200K IOPS with 512-byte and 8KB request sizes, respectively, through a dual-port 8Gbps FC HBA (see the sketch after Table 3). Thus, a single VM with the following configuration was chosen as the building block for the tests: 4 vCPUs and 8GB of memory, issuing I/O requests to 10 virtual disks (all on separate VMFS datastores) through the two ports of a dual-port adapter. The number of VMs was increased to six, each with an identical configuration. The final test setup is shown in Figure 1.

Assigning Virtual Disks to a VM

To achieve enough parallelism while pushing a high number of I/O operations per second, the virtual disks in each VM were spread across multiple virtual SCSI controllers as follows:

NUMBER OF VDISKS    VIRTUAL SCSI CONTROLLER ID
…                   0
…                   1
…                   2

Table 2. Virtual Disk Assignment for Multi-VM Tests

NUMBER OF VDISKS    VIRTUAL SCSI CONTROLLER ID
10                  0
10                  1
10                  2
10                  3

Table 3. Virtual Disk Assignment for Single-VM Test
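The building-block sizing described at the start of this section can be checked with a few lines (a sketch; the ~800MBps usable rate per 8Gbps FC port is my assumption, not a figure from the paper):

```python
# Building-block scaling estimate from the initial sizing tests.
# Measured: a 2-vCPU VM with 4 virtual disks produced 120K IOPS at 512B
# and 100K IOPS at 8KB through a single 8Gbps FC port.

doubled_iops_8k = 2 * 100_000     # expected per 4-vCPU, 10-disk VM: ~200K

# At 8KB, ~200K IOPS is ~1.5GBps, roughly the usable bandwidth of one
# dual-port 8Gbps HBA (~2 x 800MBps), so each VM was paired with one HBA.
vm_mbps = doubled_iops_8k * 8 / 1024          # ~1,563 MBps per VM

vms = 6
aggregate_iops = vms * doubled_iops_8k        # ~1.2M IOPS potential at 8KB
print(f"~{vm_mbps:.0f} MBps per VM, ~{aggregate_iops:,} IOPS aggregate")
```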
Estimation of CPU Cost of an I/O Operation

The %processor time of the vSphere host was used to estimate the CPU cost of an I/O operation. While all six VMs were actively issuing 8KB I/O requests to their storage, the total %processor time of the vSphere host was recorded using esxtop [5]. The CPU cost of an I/O operation was then estimated using the following equation:

    CPU cost per I/O (cycles) = (average %processor time of the host × rated CPU clock frequency × number of logical threads) / (100 × total I/O operations per second)

The rated clock frequency of the processors in the server was 2.4GHz. Hyperthreading was enabled on the vSphere host by default, so the number of logical threads was 80 (4 sockets × 10 cores × 2 threads).
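A minimal sketch of this estimation, using the host parameters stated above; the %processor time value in the example is a hypothetical illustration, not a measurement from the paper:

```python
# CPU cost of an I/O operation, per the equation above.
# Host parameters from the text; the example input is hypothetical.

RATED_HZ = 2.4e9        # rated clock frequency: 2.4GHz
LOGICAL_THREADS = 80    # 4 sockets * 10 cores * 2 (hyperthreading)

def cycles_per_io(pct_processor_time: float, iops: float) -> float:
    """Estimate CPU cycles consumed per I/O from esxtop %processor time."""
    return (pct_processor_time * RATED_HZ * LOGICAL_THREADS) / (100 * iops)

# Hypothetical example: 50% average processor time at 1M IOPS.
print(f"{cycles_per_io(50.0, 1_000_000):,.0f} cycles per I/O")  # 96,000
```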
References

1. "VMware vSphere 4 Performance with Extreme I/O Workloads." VMware, Inc.
2. EMC Symmetrix VMAX storage system. EMC Corporation.
3. Emulex LightPulse LPe12002. Emulex Corporation.
4. Iometer.
5. "Interpreting esxtop Statistics." VMware, Inc.

About the Author

Chethan Kumar is a senior member of Performance Engineering at VMware, where his work focuses on performance-related topics concerning databases and storage. He has presented his findings in white papers, blog articles, and technical papers at academic conferences and at VMworld.

Acknowledgements

The author would like to thank VMware's partners Intel, Emulex, and EMC for providing the necessary gear to build the infrastructure used for the experiments discussed in this paper. He would also like to thank the Symmetrix Performance Engineering team of Thomas Rogers, John Aurin, John Adams, and Dan Aharoni for installing and configuring the hardware setup and for running all the experiments. Finally, the author would like to thank Chad Sakac, VP of VMware Alliance, EMC, for supporting this effort and ensuring the availability of the hardware components to complete the exercise on time.

VMware, Inc., Palo Alto, CA, USA

Copyright 2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.