Analysis of VDI Storage Performance During Bootstorm

Introduction

Virtual desktops are gaining popularity as a more cost-effective and more easily serviceable alternative to physical desktops. For many virtual desktop pools, the most resource-intensive event is the bootstorm, when many desktops boot at the same time. Enterprises deploying a virtual desktop infrastructure (VDI) environment have a variety of choices when deciding what sort of storage to deploy for their VDIs. Solid state storage is known to dramatically decrease boot times compared to hard disk drives (HDDs), and may be a good storage choice for a pool of virtual machines (VMs).

Demartek evaluated the performance of internal HDD, Fibre Channel (FC) HDD, and Fibre Channel solid state drive (SSD) storage under bootstorm conditions and compared VM boot times, server load, and disk throughput. In addition, Demartek compared results for different cache configurations and host bus adapter (HBA) queue depths. This evaluation was conducted in the Demartek lab in Colorado.

Executive Summary and Key Findings

Demartek evaluated the effect of SSD versus HDD storage on bootstorm performance, and analyzed the influence of queue depth and cache configuration on that performance.

For Fibre Channel SSDs we found that:
- Average VM boot time was 2.3x to 4.2x faster than any HDD storage configuration tested.
- CPU usage was 16% to 105% higher than any HDD storage configuration tested.
- Lower drive latency during the bootstorm enabled VM boots to be initiated 1.6x to 6.9x faster than any HDD storage configuration tested.
For internal HDDs we found that:
- Doubling the RAID controller cache size from 512 MB to 1024 MB decreased average boot time by 18%.

For Fibre Channel host bus adapters (HBAs) we found that:
- Increasing the HBA queue depth from 32 to 254 decreased HDD boot times by 33% and SSD boot times by 18%.
- Increasing the HBA queue depth from 32 to 254 increased HDD CPU usage by 54% and SSD CPU usage by 6%.

We recommend that enterprises consider SSDs for VDI storage. If HDDs must be used, a larger controller cache and an increased HBA queue depth will greatly improve performance.
Test Configuration

90 Windows 7 VMs were set up in VMware vSphere 5.1 on the internal HDDs. Each VM had a startup script that recorded the time in a unique file and then transferred the file to a server. These VMs were updated as of 5/14/13, and then cloned to storage on the FC SSD and FC HDD drives. Scripts accessing the ESXi console recorded the time and then sent commands to power on each VM. ESXTOP was used to record server load and disk I/O data.

Script Delays

When the ESXi software was sent commands to start all 90 VMs at the same time, it returned errors when attempting to power on some of the VMs; most of the errors were due to loading shared libraries. As a result, not all VMs would be issued the power-on command. Adding a slight delay between VM power-on commands allowed all VMs to boot properly. Multiple delay values were tried for each configuration until the minimum required delay was found; this time varied for each storage configuration. This led to the conclusion that the ESXi software libraries wait for a response from storage while powering on VMs: lower-latency storage configurations allowed the ESXi software to handle more boot requests at the same time without denying requests.

Caching

Two different internal HDD configurations were tested: one with a 512MB cache RAID controller, and one with a 1024MB cache RAID controller replacing the original RAID controller. The Fibre Channel HDD configuration included a 1744MB data cache; during the first test it was configured for read-only caching, and during the second test write-back caching was enabled as well. Write-back caching is not recommended unless the cache is battery-backed or the data can afford to be lost.

Queue Depth

The initial tests using Fibre Channel storage used the FC HBA's default queue depth of 32. During the second round of tests, the queue depth was increased to 254.

Data Collection

Server load and disk I/O data were collected at 15-second intervals using ESXTOP. Boot-related timestamps gathered by the scripts were rounded to the same 15-second intervals for graph production.
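The timestamp alignment described above can be sketched as follows. This is a minimal illustration; the actual Demartek collection scripts are not published, so the function name and structure here are assumptions.

```python
# Align boot-event timestamps to the same 15-second buckets that
# ESXTOP uses as its sampling interval (hypothetical helper).

INTERVAL = 15  # seconds, matching the ESXTOP sampling interval

def round_to_interval(seconds_since_start, interval=INTERVAL):
    """Round a timestamp (seconds from test start) to the nearest
    interval boundary so boot events line up with ESXTOP samples."""
    return interval * round(seconds_since_start / interval)

# Boot-completion events at 52 s and 68 s fall into the 45 s and
# 75 s buckets respectively.
events = [52, 68, 7, 30]
buckets = [round_to_interval(t) for t in events]
```

Aligning both data sources to one grid is what allows server load, disk I/O, and boot counts to be plotted on a shared time axis.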
Internal HDD

We found that the internal HDDs had a difficult time keeping up with demand. While doubling the read/write cache size from 512MB to 1024MB offered a noticeable improvement, some users would still have had to wait over 8 minutes before their desktop booted. The server CPU was not over-taxed during the bootstorm, topping out at 41% utilization with the 1024MB cache.

  Configuration   Time to Issue Start    VM Boot Time (mm:ss)     Max CPU   Max IOPS
                  Commands (mm:ss)       Min     Max     Avg
  512MB Cache     1:59                   00:53   09:40   04:30    37%       5,
  1024MB Cache    1:47                   00:34   08:20   03:42    41%       5,842

The graphs below show the results averaged over three tests for each configuration. The processor peaked slightly higher for the larger cache, while IOPS and memory usage varied only slightly.
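The 18% average boot-time improvement reported for the larger cache can be checked from the table values. This is a quick sanity check over the reported figures, not part of the original test harness.

```python
# Verify the ~18% average boot-time improvement for the 1024MB
# cache, using the average boot times from the table above.

def mmss_to_seconds(t):
    """Convert an mm:ss string such as '04:30' to seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + int(seconds)

avg_512 = mmss_to_seconds("04:30")   # 270 s with the 512MB cache
avg_1024 = mmss_to_seconds("03:42")  # 222 s with the 1024MB cache

improvement = (avg_512 - avg_1024) / avg_512
print(f"Average boot time improved by {improvement:.0%}")
```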
However, we found that bootstorms are a cache-friendly workload. Increasing the cache size to 1024MB enabled the ESXi software to handle VM start requests faster, which lowered the number of VMs in queue and allowed the queue to be drained sooner. Boot times were reduced for all VMs by an average of 18%.

[Graph: VMs in Queue — 512 MB Cache vs. 1024 MB Cache]
Increasing the cache size to 1024MB also enabled the first VMs to complete boot sooner, giving this configuration a head start. In addition, boots peaked at 8 VMs completing per 15-second interval with the 1024MB cache, compared to a typical peak of 6 with the 512MB cache. Boots completed faster with the larger cache, cutting boot time for all VMs.

[Graph: VMs Booted — 512 MB Cache vs. 1024 MB Cache]
Fibre Channel HDD

The initial Fibre Channel HDD test, with queue depth 32 and write-through caching, did not come close to the performance of the internal HDD configuration. However, enabling write-back caching and increasing the queue depth to 254 for the second set of tests improved performance dramatically, resulting in average boot times 33% faster than the queue depth 32 configuration. This brought performance to a level on par with the internal HDDs; average boot times for the final tests were 9% faster than for the internal HDDs.

  Configuration                              Time to Issue Start    VM Boot Time (mm:ss)     Max CPU   Max IOPS
                                             Commands (mm:ss)       Min     Max     Avg
  Queue Depth 32, 1744MB Read Cache          04:36                  00:53   11:20   05:02    25%       3,278
  Queue Depth 254, 1744MB Read/Write Cache   01:38                  00:41   06:42   03:22    38%       5,669

An analysis of read and write operations helps identify the source of the bulk of the performance improvement. Writes per second increased from 354 to 497 when write caching was enabled and the queue depth was increased to 254, a 40% improvement. Reads per second increased from an average of 2,246 to 3,010, a 34% improvement. Because read caching was enabled for both tests, the improvement in reads/sec is entirely due to the increased queue depth. Writes/sec improved by only 6 percentage points more than reads/sec, so the majority of the improvement in writes/sec was also due to the increased queue depth, with the additional gain attributable to the write cache. Since most of the performance improvement appears to be due to the increase in queue depth, the initial bottleneck appears to have been the HBA settings.
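The read and write improvement percentages quoted above can be reproduced directly from the reported per-second averages. This snippet only rechecks the report's arithmetic.

```python
# Check the read/write throughput improvements between the two
# FC HDD configurations, using the averages reported in the text.

writes_qd32, writes_qd254 = 354, 497    # writes/sec
reads_qd32, reads_qd254 = 2246, 3010    # reads/sec

write_gain = (writes_qd254 - writes_qd32) / writes_qd32
read_gain = (reads_qd254 - reads_qd32) / reads_qd32

print(f"writes/sec improved {write_gain:.0%}, reads/sec improved {read_gain:.0%}")
```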
The graphs below show results averaged over three tests for each configuration.

[Graph: IOPS for Queue Depth 254 and 32 — reads/sec and writes/sec for each configuration]

The graphs also indicate that the faster storage drove the processor 54% harder and resulted in VMs completing boot sooner.
[Graph: IOPS — Queue Depth 32 Read Cache vs. Queue Depth 254 Read/Write Cache]

As mentioned previously, the ESXi shared libraries wait for a response from storage while powering on VMs, requiring power-on requests to be delayed to avoid ESXi errors. The reduced latency with queue depth 254 enabled the VM queue to be populated 64% faster, which allowed more VMs to boot at the same time and all VMs to boot sooner.
[Graph: VMs in Queue — Queue Depth 32 Read Cache vs. Queue Depth 254 Read/Write Cache]
The rate at which VMs completed boot was also higher for queue depth 254 than for queue depth 32, as evidenced by the slope of these lines. Again, this shows that the bottleneck was mostly the HBA settings.

[Graph: VMs Booted — Queue Depth 32 Read Cache vs. Queue Depth 254 Read/Write Cache]
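On ESXi 5.x, the HBA queue depth is typically raised through the HBA driver's module parameter. The exact module and parameter names depend on the HBA vendor and driver version; the QLogic example below is illustrative only and is not the Brocade HBA used in this evaluation.

```shell
# Illustrative only: raise the LUN queue depth on a QLogic FC HBA
# under ESXi 5.x. Module and parameter names differ for Emulex and
# Brocade drivers; consult the vendor documentation for your HBA.
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=254

# A host reboot is required before the new queue depth takes effect.
# Verify the setting afterwards:
esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth
```

Note that a deeper HBA queue only helps if the storage array can absorb the extra outstanding I/O; the array-side per-port queue limits should be checked as well.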
Fibre Channel SSD

Fibre Channel SSDs were tested with queue depths of 32 and 254. Increasing the queue depth enhanced performance, resulting in an 18% improvement in average boot time. The SSDs were the only drives to produce boot times across the board that would be acceptable to an end user, with the maximum boot time topping out at 2 minutes 17 seconds for the slower queue depth 32 configuration, and under two minutes for the faster queue depth 254 configuration.

  Configuration     Time to Issue Start    VM Boot Time (mm:ss)     Max CPU   Max IOPS
                    Commands (mm:ss)       Min     Max     Avg
  Queue Depth 32    1:01                   00:35   02:17   01:28    48%       12,981
  Queue Depth 254   0:40                   00:16   01:57   01:12    50%       12,489

The graphs below show the results averaged over three tests for each configuration. IOPS and CPU vary only slightly between the two tests: CPU was driven only 6% harder on average, and IOPS actually decreased by 3% when the queue depth was increased to 254. A possible explanation for the decrease in IOPS is the HBA absorbing some of the load that would otherwise reach the SSDs.
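The executive summary's claim that SSD average boot times were 2.3x to 4.2x faster than any HDD configuration can be checked against the averages in the tables in this report. This is a sanity check over the reported figures only.

```python
# Compare average VM boot times: each SSD configuration against
# each HDD configuration, using the table values in this report.

def mmss_to_seconds(t):
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + int(seconds)

ssd_avgs = [mmss_to_seconds(t) for t in ("01:28", "01:12")]   # QD 32, QD 254
hdd_avgs = [mmss_to_seconds(t) for t in
            ("04:30", "03:42",    # internal HDD, 512MB / 1024MB cache
             "05:02", "03:22")]   # FC HDD, QD 32 / QD 254

ratios = [hdd / ssd for ssd in ssd_avgs for hdd in hdd_avgs]
print(f"SSDs booted {min(ratios):.1f}x to {max(ratios):.1f}x faster")
```

The minimum ratio (slowest SSD versus fastest HDD configuration) and the maximum ratio (fastest SSD versus slowest HDD configuration) bracket the range quoted in the key findings.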
[Graph: IOPS — Queue Depth 32 vs. Queue Depth 254]

Most of the performance increase can be attributed to the VM queue populating slightly faster, with the time taken to put all VMs into the queue dropping by 24%. We can see the result in the graph below: more VMs are in queue at the beginning of the test, giving the queue depth 254 configuration a head start.
[Graph: VMs in Queue — Queue Depth 32 vs. Queue Depth 254]

A graph of VMs completing boot again shows the queue depth 254 configuration's head start. The time before the first VM completes boot is shorter with queue depth 254, but the slopes of the two lines are the same once boots are under way, suggesting that the advantage came from the first boots completing sooner rather than from a higher sustained boot rate.
Summary and Conclusion

The data in the graphs below was taken from the queue depth 254 Fibre Channel SSDs, the queue depth 254 Fibre Channel HDDs with write-back caching, and the 1024MB cache internal HDDs, as these were the optimal configurations for each storage type. The SSDs completed boot much faster than their HDD counterparts.

[Graph: VMs in Queue — Fibre Channel SSD vs. Fibre Channel HDD vs. Internal HDD]
[Graph: % Memory — Fibre Channel SSD vs. Fibre Channel HDD vs. Internal HDD]

SSDs dramatically outperformed the HDDs in a bootstorm, regardless of configuration. This appears to be due to their low latency: latency for the HDDs increased as more VMs were added to the queue, while latency for the SSDs stayed low the entire time. Because the ESXi libraries wait for responses from storage, the lower-latency SSDs increased ESXi library availability, enabling more VMs to start their boot sequences.
In addition, the solid state drives had the throughput necessary to handle storage requests from the starting VMs efficiently.

[Graph: IOPS — Fibre Channel SSD vs. Fibre Channel HDD vs. Internal HDD]

However, SSDs drive the CPU harder, and may not be an option if processor availability is limited.
[Graph: % Processor — Fibre Channel SSD vs. Fibre Channel HDD vs. Internal HDD]

HDDs can improve performance through caching and an increased queue depth, but they do not compete with solid state drive performance. SSDs are recommended for all bootstorm configurations. Should SSDs not be cost-effective, HDDs do deliver acceptable boot times for the first VMs started, before the VM queue builds up: the data indicates boot times stayed below two minutes for the first VMs booted in the optimal HDD configurations. HDDs, whether internal or Fibre Channel, may therefore be acceptable for environments where bootstorms consist of 20 VMs or fewer. It is also possible that more HDD VMs could have booted in under two minutes had the server not been accepting new boot requests while processing storage requests from VMs already booting, which may have delayed those requests. Special attention must be paid to cache configuration and queue depth when deploying HDDs to achieve optimal performance.
Appendix

Evaluation Environment

The tests were conducted in the Demartek lab in Colorado.

Storage
- Internal HDD: 15x 15K GB HDD (1.98TB RAID 0 LUN used).
- External HDD (via Fibre Channel): 12x 15K 300GB HDD, 2x 8Gb interfaces (2.5TB RAID 0 LUN used).
- External SSD (via Fibre Channel): 24x 100GB SSD, 2x 8Gb interfaces (2.04TB RAID 0 LUN used).

Server
- Dell PowerEdge R820: 4x Intel Xeon E5-4650, 2.70 GHz, 256 GB RAM, ESXi
- 1 Dell PERC H710P adapter, 1024MB cache
- 1 Dell PERC H710 adapter, 512MB cache
- 1 Brocade Gb Fibre Channel HBA (1 interface used)

Virtual Machines
- 1 CPU, 2 GB memory, 20 GB OS drive (thin provisioned)
- Windows 7 Ultimate SP1, with current updates as of 5/14/13.

The most current version of this report is available on the Demartek website. Demartek is a trademark of Demartek, LLC. All other trademarks are the property of their respective owners.