
EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX, VMWARE vSPHERE 4.1, VMWARE VIEW 4.5, VMWARE VIEW COMPOSER 2.5, AND CISCO UNIFIED COMPUTING SYSTEM

Proven Solution Guide

EMC GLOBAL SOLUTIONS

Abstract

This Proven Solution Guide provides a detailed summary of the tests performed to validate an EMC infrastructure for virtual desktops enabled by VMware View 4.5, with an EMC VNX5700 unified storage platform. This paper focuses on sizing and scalability, and highlights new features introduced in EMC VNX, VMware vSphere, and VMware View. EMC unified storage uses advanced technologies like EMC FAST VP and EMC FAST Cache to optimize performance of the virtual desktop environment, helping to support service-level agreements.

May 2011

Copyright 2011 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware, VMware vCenter, VMware View, and VMware vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. Iomega and IomegaWare are registered trademarks or trademarks of Iomega Corporation. All other trademarks used herein are the property of their respective owners.

Part Number H8197

Table of contents

1 Executive Summary: Introduction to the VNX family of unified storage platforms (Software suites available; Software packs available); Business case; Solution overview; Key results and recommendations

2 Introduction: Introduction to the EMC VNX series; Document overview (Purpose; Scope; Audience; Terminology); Technology overview (Component list; EMC VNX platform; EMC Unisphere; EMC FAST VP; EMC FAST Cache; Block data compression; Cisco Unified Computing System (UCS)); Solution diagram; Configuration (Hardware resources; Software resources)

3 Solution Infrastructure: VMware View infrastructure (Introduction; VMware View components: Hypervisor, VMware View Connection Server, VMware vSphere vCenter/View Composer, View Security Server, VMware View Transfer Server, Database server, VMware View Agent, VMware View Client, VMware View Admin Console, VMware View PowerCLI, VMware ThinApp); VMware View virtual desktop infrastructure (Introduction; Baseline; Processor; Memory; Network; Storage); vSphere 4.1 infrastructure (vCenter Server cluster); Cisco Technology Overview (Overview; Cisco Unified Computing System (UCS) B-Series Blade Servers; Cisco UCS 6100 Series Fabric Interconnects; Cisco Nexus 7000 Series Switches; Cisco Nexus 1000V Series Switches and Cisco VN-Link technology; Cisco MDS 9500 Series Multilayer Directors); Windows infrastructure (Introduction; Microsoft Active Directory; Microsoft SQL Server; DNS Server; DHCP Server)

4 Network Design: Considerations (Physical design considerations; Logical design considerations; Link aggregation); VNX for file network configuration (Data Mover ports); ESX network configuration (ESX NIC teaming; Port groups); Enterprise switch configuration (Cabling; Server uplinks; Data Movers); Fibre Channel network configuration (Introduction; Zone configuration)

5 Installation and Configuration: Installation overview; Installing VMware components (VMware View installation overview; VMware View setup; VMware View desktop pool configuration; PowerPath Virtual Edition); Installing storage components (Create storage pools; Enable FAST Cache; Configure FAST VP; Configure VNX Home Directory)

6 Testing and Validation: Use case descriptions (Use Case 1: FAST Cache with no dedicated replica LUN; Use Case 2: FAST Cache with dedicated replica LUNs; Use Case 3: A dedicated replica LUN with no FAST Cache); Boot storm scenario (overview and results for each use case); Antivirus scan scenario (overview, per-desktop-count scan results, and summary for each use case); Login VSI test scenario (overview and results for each use case, including the Auto Tiering and Performance Tiering options)

7 Conclusion: Summary; Findings

8 References: White papers; Cisco documentation; Other documentation

List of Tables

Table 1. VNX features
Table 2. Terminology
Table 3. Solution hardware
Table 4. Solution software
Table 5. Windows 7 configuration
Table 6. Observed workload
Table 7. Virtual machines per core
Table 8. Required memory per host
Table 9. IOPS requirement and disks needed (multiple RAID scenarios)
Table 10. Disks needed for RAID 5, RAID 1/0, and RAID 6
Table 11. Disks in storage tiering
Table 12. Spindles used in this solution
Table 13. Port groups

7 List of Figures List of Figures Figure 1. Unisphere Summary page...2 Figure 2. Solution architecture...22 Figure 3. VMware components...26 Figure 4. Linked clone...27 Figure 5. Linked clone virtual machine...27 Figure 6. Cluster configuration from vcenter Server...34 Figure 7. Virtual machines hosted on View-Cluster Figure 8. Infrastructure cluster...35 Figure 9. Cisco Unified Computing System...36 Figure 1. SQL server databases...39 Figure 11. LACP configuration of the Data Mover ports...41 Figure 12. VNX57 Data Mover configuration...42 Figure 13. Virtual interface devices...42 Figure 14. Interface properties...43 Figure 15. vswitch configuration in vcenter Server...44 Figure 16. Data Mover port switch configuration...45 Figure 17. Zoning configuration...46 Figure 18. Zone configuration for the SAN B fabric...47 Figure 19. Persistent automated desktop pools...49 Figure 2. Select Automated Pool...5 Figure 21. User assignment...5 Figure 22. Select View Composer linked clones...51 Figure 23. Pool identification...52 Figure 24. Pool settings...52 Figure 25. Select Do not redirect Windows profile...53 Figure 26. Provisioning settings...53 Figure 27. vcenter settings...54 Figure 28. Select the datastores for linked clone images...55 Figure 29. Guest customization...55 Figure 3. Verify your settings...56 Figure 31. PowerPath as the owner for managing the path of block devices...56 Figure 32. Thin LUNs created...57 Figure 33. Auto-Tiering...58 Figure 34. Enabling FAST Cache...59 Figure 35. FAST Cache configuration...6 Figure 36. Configuring FAST VP...61 Figure 37. Configure the VNX Home Directory feature...62 Figure 38. Sample Home Directory configuration...63 Figure 39. FAST Cache with no dedicated replica LUN...64 Figure 4. FAST Cache with dedicated replica LUNs...65 Figure 41. Dedicated replica LUN with no FAST Cache...66 Figure 42. LUN and response times for FAST Cache with no dedicated replica LUN...67 Figure 43. Physical disk and response times...68 Figure 44. 
FAST Cache read and write operations...68 Figure 45. SP utilization during boot storm...69 Figure 46. Example boot-time SP utilization...69 Figure 47. ESX memory activity...7 7

Figure 48. ESX physical disk and guest latency...7 Figure 49. ESX VAAI statistics...71 Figure 50. LUN and response time...71 Figure 51. Replica LUN and response times...72 Figure 52. Physical disk and response time...72 Figure 53. FAST Cache hit ratio...73 Figure 54. SP utilization during the boot storm...73 Figure 55. Example ESX host physical CPU utilization...74 Figure 56. ESX server memory...74 Figure 57. ESX linked clone LUN and average guest latency...75 Figure 58. Linked clone ESX disk VAAI statistics...75 Figure 59. ESX replica disk and average guest latency...76 Figure 60. ESX replica LUN VAAI statistics...76 Figure 61. Linked clone LUN and response times...77 Figure 62. Replica LUN and response times...78 Figure 63. Physical disk and response times...78 Figure 64. Service processor (SP) utilization during boot storm...79 Figure 65. ESX server physical CPU utilization...79 Figure 66. ESX memory during boot storm...8 Figure 67. Linked clone ESX disk and average guest latency...8 Figure 68. Linked clone ESX disk VAAI...81 Figure 69. ESX replica disk and average guest latency...81 Figure 70. ESX replica LUN VAAI statistics...82 Figure 71. Antivirus scan summary with FAST Cache and no dedicated replica LUN...83 Figure 72. Scan 1 LUN and response times...84 Figure 73. Physical disk and response times...84 Figure 74. FAST Cache hit ratio...85 Figure 75. desktop antivirus scan SP utilization...85 Figure 76. ESX processor utilization...86 Figure 77. ESX server memory utilization...86 Figure 78. desktop antivirus scan ESX LUN and average guest latency...87 Figure 79. Scan 1 - ESX VAAI statistics...87 Figure 80. Scan 1 - virtual machine disk and response times...88 Figure 81. desktop antivirus scan LUN and response times...88 Figure 82. desktop antivirus scan physical disk and response times...89 Figure 83. desktop antivirus scan FAST Cache hit ratio...89 Figure 84. desktop antivirus scan SP utilization...9 Figure 85.
ESX server processor utilization...9 Figure desktop antivirus scan ESX memory utilization...91 Figure desktop antivirus scan ESX LUN and average guest latency...91 Figure 88. ESX VAAI statistics...92 Figure desktop antivirus scan virtual machine disk and latency...92 Figure 9. 3-desktop antivirus scan LUN and response times...93 Figure desktop antivirus scan physical disk and response time...93 Figure desktop antivirus scan FAST Cache hit ratio...94 Figure desktop antivirus scan SP utilization...94 Figure desktop antivirus scan ESX CPU utilization...95 Figure desktop antivirus scan ESX memory utilization...95 Figure desktop antivirus scan ESX LUN and average guest latency

9 List of Figures Figure desktop antivirus scan...96 Figure desktop antivirus scan virtual machine disk and latency...97 Figure desktop antivirus scan LUN and response times...97 Figure 1. 5-desktop antivirus scan physical disk and response times...98 Figure desktop antivirus scan FAST Cache hit ratio...98 Figure desktop antivirus scan service processor utilization...99 Figure desktop antivirus scan ESX server CPU utilization...99 Figure desktop antivirus scan ESX server memory utilization...1 Figure desktop antivirus scan ESX disk and average guest latency...1 Figure desktop antivirus scan ESX VAAI statistics...11 Figure desktop antivirus scan virtual machine disk and latency...11 Figure 18. Antivirus scan summary with FAST Cache and a dedicated replica LUN...12 Figure desktop antivirus scan linked clone LUN and response times...13 Figure desktop antivirus scan replica LUN and response times...13 Figure desktop antivirus scan physical disk and response times...14 Figure desktop antivirus scan FAST Cache hit ratio...14 Figure desktop antivirus scan SP utilization...15 Figure desktop antivirus scan ESX server CPU utilization...15 Figure desktop antivirus scan ESX memory utilization...16 Figure desktop antivirus scan ESX linked clone LUN and average guest latency...16 Figure desktop antivirus scan ESX VAAI statistics...17 Figure desktop antivirus scan ESX replica LUN and average guest latency...17 Figure desktop antivirus scan ESX replica LUN VAAI statistics...18 Figure desktop antivirus scan linked clone LUN and response times...18 Figure desktop antivirus scan replica LUN and response times...19 Figure desktop antivirus scan physical disk and response times...19 Figure desktop antivirus scan FAST Cache hit ratio...11 Figure desktop antivirus scan service processor utilization...11 Figure desktop antivirus scan ESX CPU utilization Figure desktop antivirus scan ESX memory utilization Figure desktop antivirus scan ESX LUN and average guest latency Figure desktop 
antivirus scan ESX linked clone LUN VAAI statistics Figure desktop antivirus scan ESX replica LUN and average guest latency Figure desktop antivirus scan ESX replica LUN VAAI statistics Figure desktop antivirus scan linked clone LUN and response times Figure desktop antivirus scan replica LUN and response times Figure desktop antivirus scan physical disk and response times Figure desktop antivirus scan FAST Cache hit ratio Figure desktop antivirus scan service processor utilization Figure desktop antivirus scan ESX CPU utilization Figure desktop antivirus scan ESX memory utilization Figure desktop antivirus scan ESX linked clone LUN and average guest latency Figure desktop antivirus scan ESX linked clone LUN VAAI statistics Figure desktop antivirus scan ESX replica LUN and average guest latency Figure desktop antivirus scan VAAI statistics for the replica LUN Figure desktop antivirus scan LUN and response times...12 Figure desktop antivirus scan replica LUN and response times...12 Figure desktop antivirus scan physical disk and response times Figure desktop antivirus scan FAST Cache hit ratio

Figure desktop antivirus scan service processor utilization Figure desktop antivirus scan ESX server CPU utilization Figure desktop antivirus scan ESX memory utilization Figure desktop antivirus scan ESX linked clone LUN and average guest latency Figure desktop antivirus scan ESX linked clone LUN VAAI statistics Figure desktop antivirus scan ESX replica LUN and average guest latency Figure desktop antivirus scan ESX replica LUN VAAI statistics Figure 153. Antivirus scan summary without FAST Cache but with a dedicated replica LUN Figure desktop antivirus scan LUN and response times Figure desktop antivirus scan replica LUN and response times Figure desktop antivirus scan physical disk and response times Figure desktop antivirus scan service processor utilization Figure desktop antivirus scan ESX CPU utilization Figure desktop antivirus scan ESX memory utilization Figure desktop antivirus scan ESX linked clone LUN and average guest latency...13 Figure desktop antivirus scan ESX linked clone LUN VAAI statistics...13 Figure desktop antivirus scan ESX replica LUN and average guest latency Figure desktop antivirus scan ESX replica LUN VAAI statistics Figure desktop antivirus scan LUN and response time Figure desktop antivirus scan replica LUN and response times Figure desktop antivirus scan physical disk and response time Figure desktop antivirus scan service processor utilization Figure desktop antivirus scan ESX server CPU utilization Figure desktop antivirus scan ESX memory utilization Figure desktop antivirus scan ESX linked clone LUN and average guest latency Figure desktop antivirus scan ESX linked clone LUN VAAI statistics Figure desktop antivirus scan ESX replica LUN and average guest latency Figure desktop antivirus scan ESX replica LUN VAAI statistics Figure desktop antivirus scan linked clone LUN and response times Figure desktop antivirus scan replica LUN and response times Figure desktop antivirus scan physical disk and response times Figure desktop antivirus scan service processor utilization Figure desktop antivirus scan ESX CPU utilization...14 Figure desktop antivirus scan ESX memory utilization...14 Figure desktop antivirus scan ESX linked clone LUN and average guest latency Figure desktop antivirus scan ESX linked clone LUN VAAI statistics Figure desktop antivirus scan replica LUN and average guest latency Figure desktop antivirus scan replica LUN VAAI statistics Figure desktop antivirus scan linked clone LUN and response times Figure desktop antivirus scan replica LUN and response times Figure desktop antivirus scan physical disk and response times Figure desktop antivirus scan service processor utilization Figure desktop antivirus scan ESX CPU utilization Figure desktop antivirus scan ESX memory utilization Figure desktop antivirus scan ESX linked clone LUN and average guest latency Figure desktop antivirus scan ESX linked clone LUN VAAI statistics Figure desktop antivirus scan ESX replica LUN and average guest latency Figure desktop antivirus scan ESX replica LUN VAAI statistics Figure 194. Average time to scan a single desktop

Figure 195. Auto Tiering Login VSI results Figure 196. Performance Tiering Login VSI results...15 Figure 197. LUN and response times...15 Figure 198. Physical disk and response time Figure 199. FAST Cache read hit ratio Figure 200. FAST Cache write hit ratio Figure 201. FAST Cache hit ratio Figure 202. Service processor utilization Figure 203. ESX CPU utilization Figure 204. ESX memory utilization Figure 205. ESX disk and average guest latency Figure 206. ESX disk VAAI statistics Figure 207. Virtual machine disk and latency Figure 208. Login VSI test results Figure 209. LUN and response times Figure 210. Replica LUN and response times Figure 211. FAST Cache read hit ratio Figure 212. FAST Cache write hit ratio Figure 213. FAST Cache hit ratio Figure 214. Service processor utilization Figure 215. ESX server CPU utilization Figure 216. ESX server memory utilization...16 Figure 217. ESX linked clone LUN and average guest latency...16 Figure 218. ESX linked clone LUN VAAI statistics Figure 219. ESX replica LUN and average guest latency Figure 220. ESX replica LUN VAAI statistics Figure 221. Login VSI test results Figure 222. Linked clone LUN and response times Figure 223. Replica LUN and response times Figure 224. Physical disk and response times Figure 225. Service processor utilization

1 Executive Summary

This chapter summarizes the proven solution described in this document and includes the following sections:
- Introduction to the VNX family of unified storage platforms
- Business case
- Solution overview
- Key results and recommendations

Introduction to the VNX family of unified storage platforms

The EMC VNX family delivers industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises. All of this is available in a choice of systems ranging from affordable entry-level solutions to high-performance, petabyte-capacity configurations servicing the most demanding application requirements. The VNXe series is purpose-built for the IT manager in entry-level environments, and the VNX series is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises.

The VNX family includes two platform series:
- The VNX series, delivering leadership performance, efficiency, and simplicity for demanding virtual application environments. It includes the VNX7500, VNX5700, VNX5500, VNX5300, and VNX5100.
- The VNXe (entry) series, with breakthrough simplicity for small and medium businesses. It includes the VNXe3300 and VNXe3100.

Customers can benefit from the new VNX features described in Table 1:

Table 1. VNX features
- Next-generation unified storage, optimized for virtualized applications
- Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies
- High availability, designed to deliver five 9s availability
- Automated tiering with Fully Automated Storage Tiering for Virtual Pools (FAST VP) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
- Multiprotocol support for file and block protocols
- Object access through Atmos Virtual Edition (Atmos VE)
- Simplified management with EMC Unisphere for a single management framework for all NAS, SAN, and replication needs
- Up to three times improvement in performance with the latest Intel multicore CPUs, optimized for Flash

Note: VNXe does not support block compression.

EMC provides a single, unified storage plug-in to view, provision, and manage storage resources from VMware vSphere across EMC Symmetrix, VNX family, CLARiiON, and Celerra storage systems, helping users to simplify and speed up VMware storage management tasks.

The VNX family includes five new software suites and three new software packs, making it easier and simpler to attain the maximum overall benefits.

Software suites available
- FAST Suite: Automatically optimizes for the highest system performance and the lowest storage cost simultaneously (not available for the VNXe series or the VNX5100).
- Local Protection Suite: Practices safe data protection and repurposing (not applicable to the VNXe3100, as this functionality is provided at no additional cost as part of the base software).
- Remote Protection Suite: Protects data against localized failures, outages, and disasters.
- Application Protection Suite: Automates application copies and proves compliance.
- Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.

Software packs available
- Total Efficiency Pack: Includes all five software suites (not available for the VNX5100 and VNXe series).
- Total Protection Pack: Includes the local, remote, and application protection suites (not applicable to the VNXe3100).

- Total Value Pack: Includes all three protection software suites and the Security and Compliance Suite (the VNX5100 and VNXe3100 exclusively support this package).

Business case

Customers require a scalable, tiered, and highly available infrastructure on which to deploy their virtual desktop environment. There are several new technologies available to assist them in architecting a virtual desktop solution, but they need to know how to best use these technologies to maximize their investment, support service-level agreements, and reduce their desktop total cost of ownership (TCO).

Solution overview

The purpose of this solution is to build a replica of a common customer virtual desktop infrastructure (VDI) environment and to validate the environment for performance, scalability, and functionality. Customers will realize:
- Increased control and security of their global, mobile desktop environment, typically their most at-risk environment
- Better end-user productivity with a more consistent environment
- Simplified management with the environment contained in the data center
- Better support of service-level agreements and compliance initiatives
- Lower operational and maintenance costs

This solution provides a detailed summary and characterization of the tests performed to validate an EMC infrastructure for virtual desktops enabled by VMware View 4.5 on an EMC VNX series platform. It involves building a 2,000-seat VMware View 4.5 environment on the EMC unified storage platform and integrates the new features of each of these systems to provide a compelling, cost-effective VDI platform.

This solution incorporates the following components, in addition to the EMC VNX5700 platform:
- 2,000 Microsoft Windows 7 virtual desktops
- VMware View Composer 2.5-based linked clones
- Storage tiering (SAS and NL-SAS)
- EMC FAST Cache
- EMC FAST VP
- Sizing and layout of the 2,000-seat VMware View 4.5 environment
- Multipathing and load balancing by EMC PowerPath/VE
- User data on the CIFS share
- Redundant View Connection Manager
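Sizing a desktop pool of this scale is largely IOPS arithmetic: front-end desktop I/O is inflated by the RAID write penalty, and FAST Cache absorbs a share of it before it reaches spinning disks. The sketch below is a generic illustration of that reasoning, not the sizing method validated in this guide; every workload number in it (IOPS per desktop, write ratio, cache hit rate, per-disk IOPS) is an assumption.

```python
import math

# Generic VDI back-end sizing sketch. All workload numbers below are
# assumptions for illustration, not figures from this guide.

def spindles_needed(desktops, iops_per_desktop, write_ratio,
                    raid_write_penalty, disk_iops, cache_hit_rate=0.0):
    """Estimate spinning disks needed for a steady-state VDI workload."""
    front_end = desktops * iops_per_desktop
    reads = front_end * (1 - write_ratio)
    writes = front_end * write_ratio
    back_end = reads + writes * raid_write_penalty   # RAID amplifies writes
    disk_io = back_end * (1 - cache_hit_rate)        # FAST Cache absorbs a share
    return math.ceil(disk_io / disk_iops)

# 2,000 desktops at an assumed 8 IOPS each, 80% writes, RAID 5 (write
# penalty 4), and an assumed 180 IOPS per 15k rpm SAS drive:
print(spindles_needed(2000, 8, 0.8, 4, 180))                      # → 303
print(spindles_needed(2000, 8, 0.8, 4, 180, cache_hit_rate=0.7))  # → 91
```

With these invented inputs, a 70 percent FAST Cache hit rate cuts the spindle count by more than a factor of three, which is the qualitative effect the solution relies on; real sizing must use measured workload data.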

Key results and recommendations

VMware View 4.5 virtualization technology meets user and IT needs, providing compelling advantages compared to traditional physical desktops and terminal services. EMC VNX5700 brings flexibility to multiprotocol environments. With EMC unified storage, you can connect to multiple storage networks using NAS, iSCSI, and Fibre Channel SAN. EMC unified storage uses advanced technologies like EMC FAST VP and EMC FAST Cache to optimize performance for the virtual desktop environment.

EMC unified storage supports vStorage APIs for Array Integration (VAAI), which were introduced in VMware vSphere 4.1. VAAI enables hosts to support more virtual machines per LUN and allows quicker virtual desktop provisioning. The zero-page recognition and transparent page sharing features of vSphere 4.1 help you save memory and therefore host more virtual desktops per host.

Our team found the following key results during the testing of this solution:
- By using FAST Cache and VAAI, the time to concurrently boot all 2,000 desktops to a usable state was reduced by 25 percent.
- By using a VAAI-enabled storage platform, we were able to store up to 512 virtual machines per LUN, compared to 64 virtual machines per LUN without VAAI.
- With VMware transparent page sharing, we observed memory savings of up to 92 GB on a host with 96 GB of RAM, with less than 2 percent of it swapping to a FAST Cache-enabled LUN.
- Using Flash as FAST Cache for read and write I/O operations reduced the number of spindles needed to support the required IOPS.
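The transparent page sharing savings reported above can be pictured with a toy model: the hypervisor collapses memory pages with identical contents into a single physical copy, which is why hundreds of desktops cloned from one master image share most of their OS pages. This is a conceptual sketch only; the page counts and contents are invented, and a real hypervisor matches pages by hashing their contents rather than comparing strings.

```python
# Toy model of transparent page sharing: identical pages across desktops
# collapse to one physical copy. Page contents and counts are invented.

PAGE_SIZE = 4096  # bytes

def tps_savings(pages):
    """Bytes saved when duplicate pages are stored only once."""
    unique = set(pages)   # a real hypervisor deduplicates by content hash
    return (len(pages) - len(unique)) * PAGE_SIZE

# 100 linked-clone desktops booted from the same master image: each maps
# the same 50 OS pages, plus one private page of its own.
os_pages = [f"os-page-{i}" for i in range(50)]
pages = os_pages * 100 + [f"private-{d}" for d in range(100)]
print(tps_savings(pages) // (1024 * 1024))  # → 19 (MB saved in this toy case)
```

The saving grows with the number of identical guests per host, which is why the effect is so pronounced in a linked-clone VDI deployment.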

2 Introduction

This chapter introduces the solution and its components, and includes the following sections:
- Introduction to the EMC VNX series
- Document overview
- Technology overview
- Solution diagram
- Configuration

Introduction to the EMC VNX series

The EMC VNX series delivers uncompromising scalability and flexibility for midtier storage users while providing market-leading simplicity and efficiency to minimize total cost of ownership. Customers can benefit from new VNX features such as:
- Next-generation unified storage, optimized for virtualized applications.
- Extended cache using Flash drives, with FAST VP that can be optimized for the highest system performance and lowest storage cost simultaneously on both block and file.
- Multiprotocol support for file, block, and object, with object access through Atmos Virtual Edition (Atmos VE).
- Simplified management with EMC Unisphere for a single management framework for all NAS, SAN, and replication needs.
- Up to three times improvement in performance with the latest Intel multicore CPUs, optimized for Flash.
- A 6 Gb/s SAS back end with the latest drive technologies supported: 3.5-inch 100 GB and 200 GB Flash drives; 3.5-inch 300 GB and 600 GB 15k or 10k rpm SAS drives; 3.5-inch 2 TB 7.2k rpm NL-SAS drives; and 2.5-inch 300 GB and 600 GB 10k rpm SAS drives.
- Expanded EMC UltraFlex I/O connectivity: Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), Network File System (NFS) including parallel NFS (pNFS), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet.

Document overview

This document provides a detailed summary of the tests performed to validate an EMC infrastructure for virtual desktops enabled by VMware View 4.5, with an EMC VNX5700 unified storage platform. It focuses on sizing and scalability using features introduced in EMC's VNX series, VMware vSphere 4.1, and VMware View 4.5. EMC unified storage uses advanced technologies like EMC FAST VP and EMC FAST Cache to optimize the performance of a virtual desktop environment, helping to support service-level agreements. By integrating EMC VNX unified storage and the new features available in EMC's VNX series and VMware View 4.5, desktop administrators are able to reduce costs by simplifying storage management and increasing capacity utilization.

Purpose

The purpose of this use case is to provide a virtualized solution for virtual desktops that is powered by VMware View 4.5, View Composer 2.5, VMware vSphere 4.1, the EMC VNX series, EMC VNX FAST VP, VNX FAST Cache, and storage pools. This solution includes all the attributes required to run this environment, such as the hardware and software and the required VMware View configuration.

Information in this document can be used as the basis for a solution build, white paper, best practices document, or training. It can also be used by other EMC organizations (for example, the technical services or sales organizations) as the basis for producing documentation for a technical services or sales kit.

Scope

This paper contains the results of testing the EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series, VMware vSphere 4.1, VMware View 4.5, and VMware View Composer 2.5 solution. Throughout this paper, we assume that you have some familiarity with the concepts and operations related to virtualization technologies and their use in information infrastructure. This paper discusses multiple EMC products as well as those from other vendors. Some general configuration and operational procedures are outlined; however, for detailed product installation information, refer to the user documentation for those products.

Audience

The intended audience of this paper includes:
- Customers
- EMC partners
- Internal EMC personnel

Terminology

Table 2 provides terms frequently used in this paper.

Table 2. Terminology

Block data compression: EMC unified storage introduces block data compression, which allows customers to save and reclaim space anywhere in their production environment with no restrictions. This capability makes storage even more efficient by compressing data and reclaiming valuable storage capacity. Data compression works as a background task to minimize performance overhead. Block data compression also supports thin LUNs, and automatically migrates thick LUNs to thin during compression, freeing valuable storage capacity.

EMC FAST Cache: This feature, introduced with FLARE release 30, allows customers to use Flash drives as an expanded cache layer for the array. FAST Cache is an array-wide feature that you can enable for any LUN or storage pool. FAST Cache provides read and write caching for the array.

EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP): EMC has enhanced its FAST technology to work at the sub-LUN level on both file and block data. This feature works at the storage pool level, below the LUN abstraction. It supports scheduled migration of data to different storage tiers based on the performance requirements of individual 1 GB slices in a storage pool.

VMware Transparent Page Sharing: A method by which redundant copies of memory pages are eliminated.

Linked clone: A virtual desktop created by VMware View Composer from a writeable snapshot paired with a read-only replica of a master image.

Login VSI: A third-party benchmarking tool developed by Login Consultants that simulates a real-world VDI workload by using an AutoIT script and determines the maximum system capacity based on the response time of the users.

Replica: A read-only copy of a master image used to deploy linked clones.

Unisphere: The centralized management interface of the unified storage platforms. Unisphere includes integration with data protection services, provides built-in online access to key support tools, and is fully integrated with VMware.

VDI platform: Virtual desktop infrastructure; the server computing model enabling desktop virtualization, encompassing the hardware and software systems required to support the virtualized environment.

Virtual desktop: Desktop virtualization (sometimes called client virtualization) separates a personal computer desktop environment from a physical machine using a client-server model of computing. The model stores the resulting "virtualized" desktop on a remote central server instead of on the local storage of a remote client; therefore, when users work from their remote desktop client, all of the programs, applications, processes, and data used are kept and run centrally. This scenario allows users to access their desktops on any capable device, such as a traditional personal computer, notebook computer, smartphone, or thin client.

Technology overview

Component list

This section identifies and briefly describes the major components of the validated solution environment. The components are:
- EMC VNX platform
- EMC Unisphere
- EMC FAST Cache
- EMC FAST VP
- Block data compression

EMC VNX platform

The EMC VNX platform brings flexibility to multiprotocol environments. With EMC unified storage, you can connect to multiple storage networks using NAS, iSCSI, and Fibre Channel SAN. EMC unified storage leverages advanced technologies like EMC FAST VP and EMC FAST Cache on VNX OE for Block to optimize performance for the virtual desktop environment, helping support service-level agreements. EMC unified storage supports vStorage APIs for Array Integration (VAAI), which were introduced in VMware vSphere 4.1. VAAI enables quicker virtual desktop provisioning and start-up.

EMC Unisphere

EMC Unisphere provides a flexible, integrated experience for managing CLARiiON, Celerra, and VNX platforms in a single pane of glass. This new approach to midtier storage management fosters simplicity, flexibility, and automation. Unisphere's unprecedented ease of use is reflected in intuitive task-based controls, customizable dashboards, and single-click access to real-time support tools and online customer communities.
Unisphere features include:

Task-based navigation and controls that offer an intuitive, context-based approach to configuring storage, creating replicas, monitoring the environment, managing host connections, and accessing the Unisphere support ecosystem.

A self-service Unisphere support ecosystem, accessible with one click from Unisphere, that gives users quick access to real-time support tools, including live chat support, software downloads, product documentation, best practices, FAQs, online communities, ordering spares, and submitting service requests.

Customizable dashboard views and reporting capabilities that enable at-a-glance management by automatically presenting users with the information that matters for managing their storage. For example, customers can develop custom reports up to 18 times faster with EMC Unisphere.

Common management that provides single sign-on and an integrated experience for managing both block and file features.

Figure 1 shows an example of the Unisphere Summary page, which gives administrators detailed information on connected storage systems, from LUN pool and tiering summaries to physical capacity and RAID group information.

Figure 1. Unisphere Summary page

EMC FAST VP

With EMC FAST VP, EMC has enhanced its FAST technology to be more automated, with sub-LUN tiering and support for file as well as block. This feature works at the storage pool level, below the LUN abstraction. Where earlier versions of FAST operated at the LUN level, FAST VP analyzes data patterns at a far more granular level. For example, rather than move an entire LUN to enterprise Flash drives, FAST VP identifies and monitors the storage pool in 1 GB chunks. If data becomes active, FAST VP automatically moves only these hot chunks to a higher tier such as Flash. As data cools, FAST VP also identifies which chunks to migrate to lower tiers and proactively moves them. With such granular tiering, it is possible to reduce storage acquisition costs while improving performance and response time. In addition, because FAST VP is fully automated and policy-driven, no manual intervention is required, which saves operating costs as well.
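As an illustration only (this is a toy model, not EMC's actual relocation engine), sub-LUN tiering can be thought of as ranking 1 GB slices by observed activity and keeping the hottest slices on the Flash tier:

```python
def plan_relocations(slice_io, flash_capacity_slices):
    """Toy FAST VP-style tiering decision.

    slice_io: dict mapping a 1 GB slice id to its observed I/O count
    flash_capacity_slices: how many 1 GB slices fit on the Flash tier
    Returns the set of slice ids that should live on Flash.
    """
    # Rank slices by activity; only the hottest earn a spot on Flash.
    ranked = sorted(slice_io, key=slice_io.get, reverse=True)
    return set(ranked[:flash_capacity_slices])

# Example: a six-slice pool with room for two slices on Flash.
pool = {"s0": 900, "s1": 15, "s2": 420, "s3": 3, "s4": 880, "s5": 60}
print(plan_relocations(pool, 2))  # the two hottest slices: {'s0', 's4'}
```

Cooling works the same way in reverse: a slice that drops out of the top ranking is relocated back to a lower tier on the next pass.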

EMC FAST Cache

VNX FAST Cache, part of the VNX FAST Suite, enables Flash drives to be used as an expanded cache layer for the array. FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to FAST Cache, and subsequent accesses to that data chunk are serviced by FAST Cache. This allows immediate promotion of very active data to the Flash drives, dramatically improving response time for very active data and reducing the data hot spots that can occur within a LUN. FAST Cache is an extended read/write cache that can absorb read-heavy activities such as boot storms and antivirus scans, and write-heavy workloads such as operating system patches and application updates.

Block data compression

EMC unified storage introduces block data compression, which allows customers to save and reclaim space anywhere in their production environment with no restrictions. This capability makes storage even more efficient by compressing data and reclaiming valuable storage capacity. Compression runs as a background task to minimize performance overhead. Block data compression also supports thin LUNs, and automatically migrates thick LUNs to thin during compression, freeing valuable storage capacity.

Cisco Unified Computing System (UCS)

Cisco UCS provides a computing platform purpose-built for virtualization, delivering a cohesive system that unites computing, networking, and storage access. Cisco UCS integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers that scale to the demands of virtualized desktop workloads without sacrificing performance or application responsiveness.
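The promotion behavior can be illustrated with a toy model (the three-access threshold reflects the promotion behavior described later in this guide; the real FAST Cache policy engine is more sophisticated):

```python
class ToyFastCache:
    """Illustrative promote-on-repeated-access cache, not the real algorithm.

    A 64 KB chunk is promoted to Flash after PROMOTE_THRESHOLD accesses;
    once promoted, further accesses to that chunk are served from cache.
    """
    PROMOTE_THRESHOLD = 3
    CHUNK = 64 * 1024

    def __init__(self):
        self.counts = {}   # access count per 64 KB chunk
        self.cached = set()

    def access(self, offset):
        chunk = offset // self.CHUNK          # map the I/O to its 64 KB chunk
        if chunk in self.cached:
            return "flash"                    # hit: served from FAST Cache
        self.counts[chunk] = self.counts.get(chunk, 0) + 1
        if self.counts[chunk] >= self.PROMOTE_THRESHOLD:
            self.cached.add(chunk)            # hot chunk copied to Flash
        return "disk"

cache = ToyFastCache()
results = [cache.access(0) for _ in range(4)]
print(results)  # ['disk', 'disk', 'disk', 'flash']
```

This is why repetitive, concentrated workloads such as boot storms and antivirus scans benefit the most: the same chunks are touched again and again, so they quickly cross the promotion threshold.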
Cisco UCS Manager enables a stateless computing model employing Service Profile Templates, which can provision large pools of computing resources from bare metal in a fraction of the time required by traditional server solutions.

Solution diagram

Figure 2 depicts the logical architecture of this solution.

Figure 2. Solution architecture

Configuration

Hardware resources

Table 3 lists the hardware used for this solution.

Table 3. Solution hardware

EMC VNX5700 (quantity 1): DAEs configured with 300 GB 15k rpm SAS disks, 35 x 1 TB 7.2k rpm near-line SAS disks, and 15 x 200 GB Flash drives. VNX shared storage providing block, file, FAST VP, and FAST Cache.

Cisco UCS B200 server blade (quantity 16): Two quad-core Intel Xeon 5500-series processors, 48 GB RAM, converged network adapter. Two UCS chassis, each hosting 8 blades; 8 servers per vSphere cluster; two clusters, each hosting 500 Windows 7 virtual machines.

Cisco UCS B200 server blade (quantity 8): Two quad-core Intel Xeon 5500-series processors, 96 GB RAM (B200-M2 blades). One UCS chassis of 8 blades; one ESX cluster hosting 1,000 Windows 7 virtual machines.

Intel server (quantity 2): Two quad-core Intel Xeon 5400-series processors, 32 GB RAM, Gigabit quad-port Intel VT adapter. Infrastructure virtual machines (VMware vCenter, DNS, DHCP, Active Directory, Microsoft SQL Server, View Connection Server, and Replica Servers).

Cisco Nexus 7000 (quantity 1): Infrastructure Ethernet switch.

Cisco MDS: For the dual FC fabric.

Cisco UCS chassis (quantity 3): For 24 server blades; 6 UCS 2104XP IOMs.

Windows 7 virtual desktops: Each with 1 vCPU, 1.5 GB RAM, a 20 GB VMDK, and one network interface card (NIC). The virtual desktops created for this solution.

Software resources

Table 4 lists the software used in this solution.

Table 4. Solution software

EMC VNX5700, VNX OE for block: Release 31. Operating environment for block.
EMC VNX5700, VNX OE for file: Release 7.0. Operating environment for file.
VMware vSphere: ESX 4.1. Server hypervisor.
EMC PowerPath Virtual Edition: 5.4 SP2. Multipathing and load balancing for block access.
VMware vCenter Server: 4.1. vSphere management server.
VMware View Manager: 4.5. Software hosting the virtual desktops.
VMware View Composer: 2.5. View component that uses linked clone technology to reduce storage consumption.
Microsoft SQL Server: 2005. Database that hosts the tables for VMware vCenter, View Composer, and View Events.
Microsoft Windows: 2008 R2. Operating system for the server environment.
EMC Unisphere: 1.0. Management tool for the EMC VNX series.
Microsoft Windows 7: 64-bit RTM. Operating system for the virtual desktops.
VMware Tools: Guest enhancement tools for the virtual machines.
Microsoft Office: Office 2007 SP2. Used on the virtual desktops.
Cisco UCS: 1.2. Firmware and management software.

Chapter 3: Solution Infrastructure

This chapter details the infrastructure of each component and includes the following sections:
VMware View infrastructure
VMware View virtual desktop infrastructure
vSphere 4.1 infrastructure
Windows infrastructure

VMware View infrastructure

Introduction

VMware View delivers rich, personalized virtual desktops as a managed service from a virtualization platform built to deliver the entire desktop, including the operating system, applications, and user data. VMware View 4.5 provides centralized, automated management of these components with increased control and cost savings. VMware View 4.5 improves business agility while providing a flexible, high-performance desktop experience for end users across a variety of network conditions.

VMware View components

To provide a virtual desktop experience, VMware View uses various components, each with its own purpose. The components that make up the View environment are:
Hypervisor
VMware View Connection Server
VMware vSphere vCenter Server/View Composer
VMware View Security Server
VMware View Transfer Server
A supported database server, such as Microsoft SQL Server
VMware View Agent
VMware View Client
VMware View Admin Console
View PowerCLI
ThinApp

Figure 3 shows the VMware components described in the following sections.

Figure 3. VMware components

Hypervisor

The hypervisor hosts the virtual desktops. To get the most features, we recommend that you use VMware vSphere 4.1. vSphere features such as the vStorage APIs for Array Integration (VAAI), memory compression, and ballooning help host more virtual desktops per server.

VMware View Connection Server

The VMware View Connection Server hosts the LDAP directory and keeps the configuration information for the VMware View desktop pools, their associated virtual desktops, and VMware View itself. This information can be replicated to other View Connection Replica Servers. The Connection Server also acts as a connection broker that maintains desktop assignments. It supports secure sockets layer (SSL) connections to the desktop using Remote Desktop Protocol (RDP) or PC-over-IP (PCoIP), and it supports RSA SecurID two-factor authentication and smart card authentication.

VMware vSphere vCenter/View Composer

The VMware vCenter Server manages the virtual machines and vSphere ESX hosts and provides high-availability (HA) and Distributed Resource Scheduler (DRS) clusters. The vCenter Server hosts the customization specification that permits cloned virtual machines to join the Active Directory (AD) domain. The View Composer service, installed on the vCenter Server, provides storage savings by using linked clone technology to share the hard disk of parent virtual machines, as shown in Figure 4.

Figure 4. Linked clone

The operating system reads from the common read-only replica image and writes to the linked clone. Any unique data created by the virtual desktop is also stored in the linked clone. A logical representation of this relationship is shown in Figure 5.

Figure 5. Linked clone virtual machine

View Security Server

The View Security Server is a special type of View Connection Server. It supports two network interfaces: one to a private enterprise network and the other to the public network. It is typically deployed in a DMZ and enables users outside the organization to connect securely to their virtual desktops.

VMware View Transfer Server

The VMware View Transfer Server is another type of View Connection Server, required when you use the local mode feature. The Transfer Server can use a CIFS share on VNX file to store the published image. Local mode allows users to work on a virtual desktop while disconnected from the network and later synchronize the changes with the View environment.

Database server

A VMware View-supported database server hosts the tables used by View Composer and can optionally store the VMware View events.

VMware View Agent

VMware View Agent is installed on the virtual desktop template and is deployed to all virtual desktops. It provides communication with the View Connection Server and enables options for USB redirection, virtual printing, the PCoIP server, and smart card over PCoIP.

VMware View Client

VMware View Client software is used to connect to the virtual desktops through the connection broker. View Client allows users to print locally from their virtual desktops and, with the proper configuration, to access USB devices locally.

VMware View Admin Console

VMware View Admin Console is a browser-based administration tool for VMware View, hosted on the View Connection Server.

VMware View PowerCLI

VMware View PowerCLI provides basic management of VMware View using Windows PowerShell. It allows administrators to script basic VMware View operations and can be used along with other PowerShell scripts.

VMware ThinApp

VMware ThinApp is an application virtualization product for enterprise desktop administrators and application owners. It enables rapid deployment of applications to physical and virtual desktops. ThinApp links the application, the ThinApp runtime, the virtual file system, and the virtual registry into a single package. A CIFS share on EMC VNX file can be used as a repository from which to deploy ThinApp packages to the virtual desktops.

VMware View virtual desktop infrastructure

Introduction

This section describes how we designed our solution for hosting 2,000 users in a VMware View environment on the EMC VNX series.

Baseline

A Windows 7 desktop is loaded with the required applications and fine-tuned for the virtual machine load. This includes removing unnecessary scheduled tasks, configurations, and services. For further details, refer to EN.pdf.

The configuration of the Windows 7 virtual machine is defined in Table 5.

Table 5. Windows 7 configuration
Processor: 1 vCPU
Memory: 1.5 GB
Hard disk: 20 GB. Replica on Flash, delta on SAS. No FAST Cache. No disposable disk. 64 K allocation unit.
Network interface card: 1 vNIC

Login VSI Test - Medium Workload

We ran a medium workload on a single virtual machine using Login VSI and sampled the workload at a two-second interval during the test, as described in Table 6.

Table 6. Observed workload: read, write, and total IOPS for the OS disk, user data disk, and virtual machine; active RAM; percent processor run time; and total network MB/s, each reported as an average, 95th percentile, and maximum

Processor

The server used in this solution has two quad-core Intel Xeon 5500-series processors. The average CPU load during the test is 9 percent, so we can run approximately 10 virtual machines per core, and one host can run 2 x 4 x 10 = 80 virtual machines. The Intel Nehalem architecture is very efficient with hyper-threading and allows 50 to 80 percent more clients, which means a host can run 1.5 x 80 = 120 to 1.8 x 80 = 144 virtual machines. While using linked clones, up to eight hosts are allowed in a cluster. Leaving one node as failover capacity, seven hosts can run 144 x 7 = 1,008 virtual machines, so one cluster can host 1,000 virtual desktops. Without considering the Intel Nehalem features, the cluster can host 80 x 7 = 560 virtual desktops. To host 2,000 virtual desktops, we therefore need two to four clusters, or about 128 to 256 processor cores in total. In a non-VDI environment, deploying 2,000 desktops would require 2,000 processors.

With hyper-threading we are able to host 1,000 VMs per cluster, and without hyper-threading only 500 VMs per cluster; hyper-threading therefore doubles the number of virtual machines per cluster. In our solution, we use hyper-threading with three clusters: one with 1,000 users and two with 500 users each. The 500-user clusters have extra headroom for processor-intensive workloads.

Table 7 provides a summary of virtual machines per core.

Table 7. Virtual machines per core
1,000-user cluster: 16 virtual machines per core (complete cluster); 18 virtual machines per core (cluster with one node down)
500-user cluster: 8 virtual machines per core (complete cluster); 9 virtual machines per core (cluster with one node down)

Memory

One Windows 7 virtual machine is assigned 1.5 GB of memory. Without using the VMware vSphere 4.1 memory features, this would require at least 120 x 1.5 = 180 GB to 144 x 1.5 = 216 GB per host.
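The processor and cluster arithmetic above can be reproduced in a few lines (the per-core density, hyper-threading uplift, and failover reservation are the values stated in the text):

```python
SOCKETS, CORES_PER_SOCKET = 2, 4
VMS_PER_CORE = 10                 # from ~9% average CPU load per desktop
ACTIVE_HOSTS = 7                  # 8-host cluster with 1 node reserved for failover

vms_per_host = SOCKETS * CORES_PER_SOCKET * VMS_PER_CORE   # 80 without hyper-threading
ht_vms_per_host = int(1.8 * vms_per_host)                  # 144 with hyper-threading

print(ht_vms_per_host * ACTIVE_HOSTS)   # 1008 -> sized as a 1,000-desktop cluster
print(vms_per_host * ACTIVE_HOSTS)      # 560 without hyper-threading
```

The same numbers drive Table 7: divide the desktop count by the active cores (7 hosts x 8 cores = 56) to get 18 VMs per core for the 1,000-user cluster with one node down.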
VMware vSphere 4.1 provides features such as transparent page sharing, ballooning, recognition of zeroed pages, and memory compression that allow us to overcommit memory and obtain a better consolidation ratio. During the baseline workload, we observed about 540 MB of active memory per virtual machine. The virtual machine memory overhead was 179 MB; the hypervisor used 578 MB on the 48 GB hosts and 990 MB on the 96 GB hosts, and the service console memory was 561 MB. Based on this workload, we require 52 GB to 103 GB per host, determined by the following calculation:

(9 x 8 x (540 + 179) + 578 + 561) / 1024 = 52 GB to (18 x 8 x (540 + 179) + 990 + 561) / 1024 = 103 GB

where 9 and 18 are the virtual machines per core with one node down (from Table 7) and 8 is the number of cores per host.

VMware vSphere uses the above-mentioned features before it resorts to swapping. The FAST Cache on the EMC VNX series storage platform provides better response time than swapping to SAS disks. Another option is to place a solid-state drive (SSD) in each host to hold the vswap; however, this may impact vMotion and adds complexity to the environment. It is, therefore, advantageous to have swap served by the FAST Cache on the EMC array.
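The per-host memory requirement can be written as a small calculator (all inputs are the observed values from the baseline workload):

```python
ACTIVE_MB, OVERHEAD_MB = 540, 179   # per-VM active memory and per-VM overhead
SERVICE_CONSOLE_MB = 561

def host_ram_gb(vms_per_host, hypervisor_mb):
    """Minimum host RAM in GB for a given VM count and hypervisor footprint."""
    total_mb = (vms_per_host * (ACTIVE_MB + OVERHEAD_MB)
                + hypervisor_mb + SERVICE_CONSOLE_MB)
    return round(total_mb / 1024)

# 9 VMs/core x 8 cores on the 48 GB hosts; 18 VMs/core x 8 cores on the 96 GB hosts
print(host_ram_gb(9 * 8, 578))    # 52  (500-user cluster, one node down)
print(host_ram_gb(18 * 8, 990))   # 103 (1,000-user cluster, one node down)
```

Comparing these minimums with the installed 48 GB and 96 GB per host shows how much the solution relies on vSphere memory overcommitment plus FAST Cache-backed swap to cover the gap.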

Table 8 provides a summary of the memory required per host.

Table 8. Required memory per host
1,000-user cluster: minimum 91 GB per host (complete cluster); 103 GB per host (cluster with one node down); 96 GB per host used in this solution
500-user cluster: minimum 46 GB per host (complete cluster); 52 GB per host (cluster with one node down); 48 GB per host used in this solution

We used 1,536 GB in total for hosting 2,000 virtual desktops. A typical physical desktop would not ship with 1.5 GB per desktop; 2 GB would be used instead, requiring 4,000 GB in total. Even so, virtual desktops can provide better boot-up times than traditional personal computers.

Network

Based on the workload, we found that one virtual machine requires approximately 18 Mb/s. A 100 Mb/s NIC can therefore support five to six virtual machines, a 1 Gb/s NIC can support 50 to 60 virtual machines, and a 10 Gb/s NIC can support 500 to 600 virtual machines. A Converged Network Adapter (CNA) running at 50 percent bandwidth can support 250 to 300 virtual machines per CNA.

Note: This is a rough estimate; always monitor the network load and the percentage of packet drops. If the drop rate is high, check the network configuration and consider adding another NIC.

In this solution, we used two CNAs per host to provide fault tolerance. For 2,000 virtual desktops, we used 2 x 8 x 3 = 48 CNAs (two per blade, eight blades per chassis, three chassis). In a traditional desktop scenario, 2,000 desktops would require 2,000 NICs.

Storage

The number of spindles required for hosting 2,000 user desktops is calculated from both the IOPS requirement and the capacity needed. Based on the workload, we observed 8.3 IOPS per virtual desktop on average. The maximum and 95th percentile depend on the sampling interval of the data. Sizing on the average can yield good performance for virtual desktops operating in a steady state; however, it leaves insufficient headroom in the array to absorb high I/O peaks. To absorb I/O storms, size for two to three times the average load.
Table 9 details the IOPS requirement, and Table 10 describes the disks needed at various RAID levels to meet it.

Table 9. IOPS requirement
Number of Windows 7 desktops: 2,000
IOPS per Windows 7 virtual machine: 9
Total host IOPS (HI): 18,000
% Read: 65
% Write: 35

Table 10. Disks needed for RAID 5, RAID 1, and RAID 6
Total disk IOPS for RAID 5 (R5IO = HI x %R + HI x 4 x %W): 36,900
Number of SAS drives alone (R5IO/180): 205
Number of NL-SAS drives alone (R5IO/80): 462
Total disk IOPS for RAID 1 (R1IO = HI x %R + HI x 2 x %W): 24,300
Number of SAS drives alone (R1IO/180): 135
Number of Flash drives alone (R1IO/2500): 10
Number of NL-SAS drives alone (R1IO/80): 304
Total disk IOPS for RAID 6 (R6IO = HI x %R + HI x 6 x %W): 49,500
Number of SAS drives alone (R6IO/180): 275
Number of NL-SAS drives alone (R6IO/80): 619

Keeping the IOPS the same while increasing performance or capacity, four SAS drives can be replaced with 9 NL-SAS drives, 125 SAS drives can be replaced with 9 Flash drives, and 125 NL-SAS drives can be replaced with 4 Flash drives. For a mix of 68 percent SAS, 1 percent NL-SAS, and 31 percent Flash, the disks needed for the various RAID options are shown in Table 11.

Table 11. Disks in storage tiering: SAS, NL-SAS, and Flash drive counts for 36,900 IOPS (RAID 5), 49,500 IOPS (RAID 6), and 24,300 IOPS (RAID 1)

When considering the storage capacity for the virtual desktops, VMware View Composer reduces the space required by using linked clone technology. Linked clones are dependent virtual machines linked to a replica virtual machine, and the replica is a thin-provisioned copy of the master virtual machine. We deployed a 20 GB hard disk for the operating system on the master virtual machine. The files occupy 13 GB; therefore, the replica virtual machine disk size is 13 GB. In the desktop pool, we use a file share on the VNX array to host the user profiles and data. A disposable disk that contains the temporary files and the Windows paging file is used to minimize the expansion of the delta disks, which reduces how often the virtual machines must be refreshed.
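The RAID write-penalty math behind Tables 9 and 10 can be reproduced as follows; the per-drive IOPS ratings (180 for SAS, 80 for NL-SAS, 2,500 for Flash) are the divisors implied by the tables:

```python
import math

DESKTOPS, IOPS_PER_VM = 2000, 9
READ, WRITE = 0.65, 0.35
HI = DESKTOPS * IOPS_PER_VM                          # 18,000 total host IOPS

PENALTY = {"RAID 5": 4, "RAID 1": 2, "RAID 6": 6}    # back-end I/Os per host write
RATING = {"SAS": 180, "NL-SAS": 80, "Flash": 2500}   # IOPS per drive

results = {}
for raid, penalty in PENALTY.items():
    # Reads pass through 1:1; each host write costs `penalty` back-end I/Os.
    backend = round(HI * READ + HI * penalty * WRITE)
    results[raid] = (backend, {d: math.ceil(backend / r) for d, r in RATING.items()})
    print(raid, results[raid])
```

Running this reproduces the table: 36,900 back-end IOPS for RAID 5 (205 SAS or 462 NL-SAS drives), 24,300 for RAID 1, and 49,500 for RAID 6.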

The size of a virtual desktop is the size of the delta disk, plus two times the memory size of the virtual machine, plus 20 MB for the internal disk, plus the disposable disk and log size. Allowing 1 GB for the delta disk, one linked clone requires approximately 6 GB. VMware View 4.5 supports 512 linked clones from a single replica. To host 500 virtual desktops, we need 3 TB. With the current VMFS version, the maximum supported datastore size is 2 TB minus 512 bytes, which means we must split each pool across two datastores. To host 2,000 virtual desktops, we use eight datastores of 2 TB to allow additional space for growth, or 16 TB in total for the linked clones. If linked clones are not used, each virtual machine requires 25 GB in thick format or 18 GB using thin disks.

This solution uses 200 GB Flash, 300 GB SAS, and 2 TB NL-SAS disks, with usable capacities of 180 GB, 268 GB, and 1.8 TB respectively. With four Flash drives in RAID 1, 360 GB is dedicated to the replicas. A RAID 5 mix of SAS and NL-SAS gives 37 TB, while a RAID 1 mix of SAS and NL-SAS gives 16 TB. With RAID 1 we use fewer spindles, and the linked clone data does not grow much compared to user data. With a dedicated datastore for the replicas, the space required on the replica LUN is approximately 39 GB for three virtual desktop pools. Any data accessed three times in a given period normally resides in FAST Cache; to maximize the use of Flash, we elected to use it as FAST Cache. Table 12 describes the drives used in this solution.

Table 12. Spindles used in this solution: Flash, NL-SAS, and SAS drive counts by role (linked clones on RAID 1, user data on RAID 6, hot spares, and FAST Cache)
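The per-clone sizing above can be written as a small calculator. The disposable-disk-plus-log allowance below is our assumption, chosen so the total lands near the text's roughly 6 GB per clone:

```python
DELTA_GB = 1.0                   # delta disk growth allowance from the text
VM_MEMORY_GB = 1.5               # the text sizes two times the VM memory
INTERNAL_DISK_GB = 0.02          # ~20 MB View Composer internal disk
DISPOSABLE_AND_LOGS_GB = 2.0     # assumed allowance for disposable disk and logs

per_clone_gb = (DELTA_GB + 2 * VM_MEMORY_GB
                + INTERNAL_DISK_GB + DISPOSABLE_AND_LOGS_GB)
pool_tb = 500 * per_clone_gb / 1024            # one 500-desktop pool
print(round(per_clone_gb, 2), round(pool_tb, 2))  # 6.02 2.94
```

At roughly 6 GB per clone, a 500-desktop pool needs about 3 TB, which is why each pool must be split across two datastores under the 2 TB VMFS limit.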

vSphere 4.1 infrastructure

VMware vSphere 4.1 is the market-leading virtualization hypervisor used across thousands of IT environments around the world. VMware vSphere 4.1 virtualizes computer hardware resources, including CPU, RAM, hard disk, and network controller, to create fully functional virtual machines that run their own operating systems and applications just like physical computers. The high-availability features of VMware vSphere 4.1, together with VMware Distributed Resource Scheduler (DRS) and Storage vMotion, enable seamless migration of virtual desktops from one ESX server to another with minimal or no impact on users.

vCenter Server cluster

Figure 6 shows the cluster configuration from vCenter Server. The clusters View-Cluster-1 and View-Cluster-2 each host 500 virtual desktops, while View-Cluster-5 hosts 1,000 virtual desktops.

Figure 6. Cluster configuration from vCenter Server

Figure 7 shows the virtual machines hosted on View-Cluster-2, its failover capacity, and its memory utilization.

Figure 7. Virtual machines hosted on View-Cluster-2

The infrastructure cluster hosts the SQL Server, vCenter Server, domain controller, and View Connection Servers, as shown in Figure 8.

Figure 8. Infrastructure cluster

Cisco Technology Overview

Overview

Figure 9 shows the Cisco UCS components described in this section.

Figure 9. Cisco Unified Computing System

Cisco UCS B-Series Blade Servers

Cisco UCS B-Series Blade Servers are designed for compatibility, performance, energy efficiency, large memory footprints, manageability, and unified I/O connectivity:

Compatibility: Cisco UCS B-Series Blade Servers are designed around multicore Intel Xeon 5500, 5600, 6500, and 7500 Series processors, DDR3 memory, and an I/O bridge. Each blade server's front panel provides direct access for video, two USB ports, and console connections.

Performance: Cisco's blade servers use Intel Xeon next-generation server processors, which deliver intelligent performance, automated energy efficiency, and flexible virtualization. Intel Turbo Boost Technology automatically boosts processing power through increased frequency and use of hyper-threading to deliver high performance when workloads demand it and thermal conditions permit. Intel Virtualization Technology provides best-in-class support for virtualized environments, including hardware support for direct connections between virtual machines and physical I/O devices.

Energy efficiency: Most workloads vary over time. Some are bursty on a moment-by-moment basis, while others have predictable daily, weekly, or monthly cycles. Intel Intelligent Power Technology monitors CPU utilization and automatically reduces energy consumption by putting processor cores into a low-power state based on real-time workload characteristics.

Large-memory-footprint support: As each processor generation delivers more power to applications, the demand for memory capacity to balance CPU performance increases as well. The widespread use of virtualization increases memory demands even further due to the need to run multiple OS instances on the same server. Cisco blade servers with Cisco Extended Memory Technology can support up to 384 GB per blade.

Manageability: The Cisco Unified Computing System is managed as a cohesive system. Blade servers are designed to be configured and managed by Cisco UCS Manager, which can access and update blade firmware, BIOS settings, and RAID controller settings from the parent Cisco UCS 6100 Series Fabric Interconnect. Environmental parameters are also monitored by Cisco UCS Manager, reducing the number of points of management.

Unified I/O: Cisco UCS B-Series Blade Servers are designed to support up to two network adapters. This design can reduce the number of adapters, cables, and access-layer switches by as much as half because it eliminates the need for multiple parallel infrastructures for both LAN and SAN at the server, chassis, and rack levels, resulting in reduced capital and operating expenses through lower administrative overhead and lower power and cooling requirements.

Cisco UCS 6100 Series Fabric Interconnects

A core part of the Cisco Unified Computing System, the Cisco UCS 6100 Series Fabric Interconnects provide both network connectivity and management capabilities to all attached blades and chassis. The Cisco UCS 6100 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet and FCoE functions. The interconnects provide the management and communication backbone for the Cisco UCS B-Series Blades and UCS 5100 Series Blade Server Chassis.
Cisco Nexus 7000 Series Switches

The Cisco Nexus 7000 Series offers an end-to-end solution for data center core, aggregation, and high-density end-of-row and top-of-rack server connectivity in a single platform. The Cisco Nexus 7000 Series runs Cisco NX-OS software and is specifically designed for the most mission-critical place in the network: the data center.

Cisco Nexus 1000V Series Switches and Cisco VN-Link technology

Cisco Nexus 1000V Series Switches are virtual machine access switches: an intelligent software switch implementation based on the IEEE 802.1Q standard for VMware vSphere environments running the Cisco NX-OS operating system. Operating inside the VMware ESX hypervisor, the Cisco Nexus 1000V Series supports Cisco VN-Link server virtualization technology to provide:
Policy-based virtual machine connectivity
Mobile virtual machine security and network policy
A nondisruptive operational model for server virtualization and networking teams

Cisco MDS 9500 Series Multilayer Directors

The Cisco MDS 9500 Series Multilayer Director layers a broad set of intelligent features onto a high-performance, open-protocol switch fabric. Addressing the stringent requirements of large data center storage environments, it provides high availability, security, scalability, ease of management, and transparent integration of new technologies.

Windows infrastructure

Introduction

Microsoft Windows provides the infrastructure used to support the virtual desktops and includes the following components:
Microsoft Active Directory
Microsoft SQL Server
DNS Server
DHCP Server

Microsoft Active Directory

The Windows domain controller runs the Active Directory service, which provides the framework to manage and support the virtual desktop environment. Active Directory provides several functions:
Manage the identities of users and their information
Apply Group Policy objects
Deploy software and updates

Microsoft SQL Server

Microsoft SQL Server is a relational database management system (RDBMS). SQL Server 2008 is used to provide the required databases to vCenter Server, View Composer, and View Events, as shown in Figure 10.

Figure 10. SQL Server databases

DNS Server

DNS is the backbone of Active Directory and provides the primary name resolution mechanism for Windows servers and clients. In this solution, the DNS role is enabled on the domain controller.

DHCP Server

The DHCP Server provides the IP address, DNS server name, gateway address, and other information to the virtual desktops. In this solution, the DHCP role is enabled on the domain controller.

Chapter 4: Network Design

This chapter describes the network design used in this solution and contains the following sections:
Considerations
VNX for file network configuration
Enterprise switch configuration
Fibre Channel network configuration

Considerations

Physical design considerations: EMC recommends that switches support gigabit Ethernet (GbE) connections and Link Aggregation Control Protocol (LACP), and that the switch ports support copper-based media.

Logical design considerations: This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security. The IP scheme for the virtual desktop network must provide enough IP addresses in one or more subnets for the DHCP Server to assign one to each virtual desktop.

Link aggregation

VNX platforms provide network high availability and redundancy through link aggregation, one method of addressing link or switch failure. Link aggregation is a high-availability feature that enables multiple active Ethernet connections to appear as a single link with a single MAC address and potentially multiple IP addresses. In this solution, LACP is configured on the VNX to combine eight GbE ports into a single virtual device. If a link is lost on an Ethernet port, the traffic fails over to another port, and all network traffic is distributed across the active links. Figure 11 shows the LACP configuration of the Data Mover ports on the Ethernet switch.

Figure 11. LACP configuration of the Data Mover ports
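Returning to the logical design considerations above, the DHCP subnet sizing for 2,000 desktops can be sanity-checked with a short script (the 10.0.8.0/21 network is a hypothetical example, not the address plan used in this solution):

```python
import ipaddress

DESKTOPS = 2000

# A /21 yields 2,046 usable host addresses: enough for one desktop subnet,
# with a little headroom for infrastructure addresses and DHCP lease churn.
subnet = ipaddress.ip_network("10.0.8.0/21")   # hypothetical desktop VLAN
usable = subnet.num_addresses - 2              # minus network and broadcast
print(usable, usable >= DESKTOPS)              # 2046 True
```

A /22 (1,022 usable addresses) would force the desktops across multiple subnets, which is also acceptable as long as each DHCP scope is sized for its VLAN.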

VNX for file network configuration

Data Mover ports

The VNX5700 has two Data Movers, which can be configured in an active/active or active/passive arrangement. In this solution, the Data Movers operate in active/passive mode, in which the passive Data Mover serves as a failover device for the active Data Mover. The VNX5700 Data Mover is configured with two UltraFlex I/O modules, each providing four 1 Gb interfaces, and is configured to use LACP across all Data Mover ports, as shown in Figure 12.

Figure 12. VNX5700 Data Mover configuration

The LACP device was used to support virtual machine traffic, home folder access, and external access for roaming profiles. Virtual interface devices were created on the same LACP device for each VLAN that requires access to the Data Mover interfaces, as shown in Figure 13.

Figure 13. Virtual interface devices

Figure 14 shows the properties of a single interface, where the VLAN ID and Maximum Transmission Unit (MTU) are set.

Figure 14. Interface properties

ESX network configuration

ESX NIC teaming

All network interfaces in this solution use 1 GbE connections. The server Ethernet ports on the switch are configured as trunk ports, and VLAN tagging at the port group level separates the network traffic between the port groups. Figure 15 shows the vSwitch configuration in vCenter Server.

Figure 15. vSwitch configuration in vCenter Server

Port groups

Table 13 lists the configured port groups.

Table 13. Port groups
Virtual machine network: provides external access for administrative virtual machines
Service Console: manages public network administration traffic
Desktop-Network: provides a network connection for virtual desktops and LAN traffic

Enterprise switch configuration

Cabling

In this solution, the ESX server and VNX Data Mover cabling is spread evenly across two line cards to provide redundancy and load balancing of the network traffic.

Server uplinks

The server uplinks to the switch are configured in a port channel group to increase the utilization of server network resources and to provide redundancy. The vSwitches are configured to load balance the network traffic based on the originating port ID. The following configuration was used for one of the server ports in this solution:

switchport
switchport trunk encapsulation dot1q
switchport mode trunk
no ip address
spanning-tree portfast trunk

Data Movers

The network ports for each VNX5700 Data Mover are connected to the Ethernet switch. The ports are configured with LACP, which provides redundancy in case of a NIC or port failure. Figure 16 shows an example of the switch configuration for one of the Data Mover ports.

Figure 16. Data Mover port switch configuration
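The vSwitch policy named above, route based on the originating virtual port ID, statically maps each VM's virtual port to one uplink in the port channel. The sketch below illustrates the idea; the modulo mapping is an assumption for illustration, not ESX's exact internal table:

```python
def uplink_for(vswitch_port_id, uplinks):
    """'Route based on the originating virtual port ID': each VM's
    vSwitch port maps statically to one uplink, so traffic spreads
    across the team without per-packet rebalancing."""
    return uplinks[vswitch_port_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
placements = [uplink_for(port, uplinks) for port in range(8)]
# Eight virtual ports split evenly across the two uplinks:
assert placements.count("vmnic0") == placements.count("vmnic1") == 4
# The mapping is stable, so a VM's traffic stays on one uplink:
assert uplink_for(5, uplinks) == uplink_for(5, uplinks)
```

The stable per-port mapping is what makes this policy cheap: no flow tables or hashing are needed, yet many desktops naturally spread across the uplinks.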

Fibre Channel network configuration

Introduction

Enterprise-class FC switches provide the storage network for this solution. The switches are configured in a SAN A/SAN B configuration to provide fully redundant fabrics. Each server has a single connection to each fabric to provide load-balancing and failover capabilities. Each storage processor has two links to the SAN fabrics for a total of four available front-end ports. The zoning is configured so that each server has four available paths to the storage array. Figure 17 shows this information in the vCenter interface.

Figure 17. Zoning configuration

Zone configuration

This solution uses single-initiator, multiple-target zoning. Each server initiator is zoned to two storage targets on the array. Figure 18 shows the zone configuration for the SAN B fabric.

Figure 18. Zone configuration for the SAN B fabric
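The four paths per server follow directly from the fabric layout described above: paths per host equal the number of redundant fabrics times the initiator ports per fabric times the targets zoned to each initiator. A quick arithmetic check:

```python
def available_paths(hba_ports_per_fabric, targets_per_initiator, fabrics=2):
    """Paths seen by a host = initiator ports per fabric x targets
    zoned to each initiator, summed over the redundant fabrics."""
    return fabrics * hba_ports_per_fabric * targets_per_initiator

# One HBA port per fabric, each zoned to two SP front-end ports:
assert available_paths(1, 2) == 4
```

Losing an entire fabric still leaves two paths, which is the point of the SAN A/SAN B design.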

Chapter 5: Installation and Configuration

Installation overview

This chapter describes how to install and configure this solution and includes the following sections: Installation overview, Installing VMware components, and Installing storage components.

This chapter describes how to configure both the VMware and storage components in this solution, including: desktop pools, storage pools, FAST Cache, auto-tiering (FAST VP), VNX Home Directory, and PowerPath/VE.

The installation and configuration steps for the following components are available on the VMware website: VMware View Connection Server, VMware View Composer 2.5, VMware ESX 4.1, and VMware vSphere 4.1.

The installation and configuration of the following components are not covered: Microsoft System Center Configuration Manager (SCCM); Microsoft Active Directory, DNS, and DHCP; vSphere and its components; and Microsoft SQL Server 2008 R2.

Installing VMware components

VMware View installation overview

The VMware View Installation Guide, available on the VMware website, has detailed procedures to install View Connection Server and View Composer 2.5. No special configuration instructions are required for this solution.

The ESX Installable and vCenter Server Setup Guide, available on the VMware website, has detailed procedures to install vCenter Server and ESX, and is not covered in further detail in this paper. No special configuration instructions are required for this solution.

VMware View setup

VMware View desktop pool configuration

Before deploying the desktop pools, ensure that the following steps from the VMware View Installation Guide have been completed: prepare Active Directory; install View Composer 2.5 on vCenter Server; install View Connection Server (standard and replica); add a vCenter Server instance to View Manager.

One desktop pool is created for each vSphere cluster. Two pools host 500 desktops each and the third hosts 1,000 desktops. In this solution, persistent automated desktop pools are used, as shown in Figure 19.

Figure 19. Persistent automated desktop pools

To create a persistent automated desktop pool as configured for this solution, complete the following steps:

1. Log in to the VMware View Administration page, where server is the IP address or DNS name of the View Manager server.

2. Click the Pools link in the left pane.

3. Click Add under the Pools banner.

4. On the Type page, select Automated Pool as shown in Figure 20, and click Next.

Figure 20. Select Automated Pool

5. On the User assignment page, select Dedicated, select the Enable automatic assignment checkbox as shown in Figure 21, and click Next.

Figure 21. User assignment

6. On the vCenter Server page, select View Composer linked clones and select a vCenter Server that supports View Composer, as shown in Figure 22. Click Next.

Figure 22. Select View Composer linked clones

7. On the Pool Identification page, enter the required information as shown in Figure 23, and click Next. The pool ID is used by View administrators, and the display name is what users see in the View Client.

Figure 23. Pool identification

8. On the Pool Settings page, make any required changes as shown in Figure 24, and click Next.

Figure 24. Pool settings

9. On the View Composer Disks page, select Do not redirect Windows profile, and click Next.

Figure 25. Select Do not redirect Windows profile

10. On the Provisioning Settings page, select a name for the desktop pool and enter the number of desktops to provision, as shown in Figure 26. Click Next. The {n:fixed=4} token increments the desktop number, padded to four digits. We appended the pool ID to the pattern so that each desktop name is easily associated with its pool.

Figure 26. Provisioning settings
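The {n:fixed=4} token described above can be illustrated with a small expansion routine. This is a sketch of the naming behavior, not View's actual implementation, and the pattern used here is a hypothetical example:

```python
import re

def expand_pattern(pattern, n):
    """Expand a View-style naming pattern such as 'VD{n:fixed=4}P1'
    for desktop number n: {n:fixed=W} becomes n zero-padded to W digits."""
    def repl(match):
        width = int(match.group(1))
        return str(n).zfill(width)
    return re.sub(r"\{n:fixed=(\d+)\}", repl, pattern)

# Desktop 7 of a hypothetical pool "P1":
assert expand_pattern("VD{n:fixed=4}P1", 7) == "VD0007P1"
assert expand_pattern("VD{n:fixed=4}P1", 152) == "VD0152P1"
```

Fixed-width numbering keeps desktop names sorting correctly in vCenter and View Manager views.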

11. On the vCenter Settings page, browse to select a default image, a folder for the virtual machines, the cluster hosting the virtual desktops, the resource pool to hold the desktops, and the datastores used to deploy the desktops, as shown in Figure 27, and then click Next.

Figure 27. vCenter settings

12. On the Select Datastores page, select the datastores for linked clone images, and then click OK. We used Aggressive as the Storage Overcommit option to allow more desktops per thin-provisioned datastore, as shown in Figure 28.

Figure 28. Select the datastores for linked clone images

13. On the Guest Customization page, select the domain and AD container, and then select Use a customization specification (Sysprep). Click Next.

Figure 29. Guest customization

14. On the Ready to Complete page (shown in Figure 30), verify the settings for the pool, and then click Finish to start the deployment of the virtual desktops.

Figure 30. Verify your settings

PowerPath Virtual Edition

PowerPath/VE supports ESX 4.1. The EMC PowerPath/VE for VMware vSphere Installation and Administration Guide, available on Powerlink, provides the procedure to install and configure PowerPath/VE. No special configuration instructions are required for this solution. The PowerPath/VE binaries and support documentation are also available on Powerlink. Figure 31 shows that PowerPath is managing the block devices on the ESX host.

Figure 31. PowerPath as the owner for managing the path of block devices

Installing storage components

Create storage pools

Storage pools in the EMC VNX OE support heterogeneous drive pools. In this solution, we configured a 96-disk RAID 1 storage pool from 92 SAS disks and four near-line SAS drives. From this storage pool, we created eight thin LUNs, each 2,047 GB in size, as shown in Figure 32.

Figure 32. Thin LUNs created
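Because the LUNs are thin, the capacity presented to the hosts can exceed what the pool physically holds. A back-of-the-envelope check, assuming eight 2,047 GB LUNs; the usable pool capacity figure below is purely illustrative, since the real value depends on drive size, RAID overhead, and system reserve:

```python
def provisioned_gb(lun_count, lun_gb):
    """Total capacity presented to the ESX hosts by the thin LUNs."""
    return lun_count * lun_gb

def oversubscribed(lun_count, lun_gb, pool_usable_gb):
    """Thin provisioning lets presented capacity exceed what the pool
    can physically store, so consumption must be monitored."""
    return provisioned_gb(lun_count, lun_gb) > pool_usable_gb

# Eight 2,047 GB thin LUNs present about 16 TB to the hosts:
assert provisioned_gb(8, 2047) == 16376
# Whether that oversubscribes the pool depends on its usable capacity
# (both capacity figures below are illustrative assumptions):
assert oversubscribed(8, 2047, 15000)
assert not oversubscribed(8, 2047, 20000)
```

The practical consequence: set capacity alerts on the pool, because the datastores will happily accept more linked clones than the pool can back.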

For each LUN in the storage pool, the tiering policy is set to Auto-Tiering, as shown in Figure 33. As data ages and is accessed infrequently, it is moved to the near-line SAS drives in the pool.

Figure 33. Auto-Tiering

Enable FAST Cache

FAST Cache is enabled as an array-wide feature in the system properties of the array in Unisphere, as shown in Figure 34.

Figure 34. Enabling FAST Cache

From the Storage System Properties dialog box, click the FAST Cache tab, click Create, and then select the eligible Flash drives to create the FAST Cache, as shown in Figure 35. There are no user-configurable parameters for the FAST Cache.

Figure 35. FAST Cache configuration

FAST Cache is enabled for all LUNs in this solution. The replica images are provisioned on all datastores allocated to that pool, and because their data is accessed frequently, it is promoted into FAST Cache.

Configure FAST VP

To configure the FAST VP feature for a pool LUN, go to the properties of the pool LUN in Unisphere, click the Tiering tab, and set the tiering policy for the LUN, as shown in Figure 36.

Figure 36. Configuring FAST VP

Configure VNX Home Directory

The VNX Home Directory installer is available on the NAS Tools website and on the application CD for each VNX OE for file release. You can also download the software from Powerlink. With this feature, you can create a unique share called HOME, redirect data to this path based on specific criteria, and give each user exclusive rights to the folder. After installing the VNX Home Directory feature, use the Microsoft Management Console (MMC) snap-in to configure the feature. Figure 37 shows a sample configuration.

Figure 37. Configure the VNX Home Directory feature

The sample configuration in Figure 38 automatically creates a user home directory for any user in domain view45 in the Homedirs folder on the View45 file system:

\View45\Homedirs\<user>

For example, when user1 logs in, \\VNXFILE\HOME points to \View45\Homedirs\User1 on the Data Mover. For user2, \\VNXFILE\HOME points to \View45\Homedirs\User2.
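The Home Directory resolution described above amounts to matching the user's domain against a rule and substituting the user name into the rule's path template. A minimal sketch of that lookup, using the view45 rule from the example (the rule-list shape is an assumption for illustration):

```python
def home_path(domain, user, rules):
    """Resolve the HOME share target the way the Home Directory
    feature is described above: match the domain, then substitute
    the user name into the matched path template."""
    for rule_domain, template in rules:
        if rule_domain.lower() == domain.lower():
            return template.replace("<user>", user)
    raise LookupError(f"no home-directory rule for domain {domain}")

rules = [("view45", r"\View45\Homedirs\<user>")]
assert home_path("view45", "User1", rules) == r"\View45\Homedirs\User1"
assert home_path("VIEW45", "User2", rules) == r"\View45\Homedirs\User2"
```

Every user maps to a distinct folder behind one share name, which is what lets 2,000 desktops all mount \\VNXFILE\HOME without colliding.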

Figure 38. Sample Home Directory configuration

Chapter 6: Testing and Validation

Use case descriptions

This chapter compares how the following use cases performed in the boot storm, Login VSI, and antivirus scan test scenarios: FAST Cache with no dedicated replica LUN; FAST Cache with a dedicated replica LUN; no FAST Cache with a dedicated replica LUN.

Use Case 1: FAST Cache with no dedicated replica LUN

In Use Case 1, we created the linked clone desktop pools without a dedicated replica LUN and used 14 Flash drives for the FAST Cache configuration. A replica virtual machine is created on every LUN that hosts linked clones, as shown in Figure 39.

Figure 39. FAST Cache with no dedicated replica LUN

Use Case 2: FAST Cache with dedicated replica LUNs

In Use Case 2, we created one replica virtual machine for every linked clone desktop pool and stored that replica on a different LUN than the linked clones. Figure 40 shows this configuration. We used four Flash drives to host the replica virtual machines and configured FAST Cache to use 10 Flash drives.

Figure 40. FAST Cache with dedicated replica LUNs

Use Case 3: A dedicated replica LUN with no FAST Cache

In Use Case 3, we did not use FAST Cache and retained the dedicated replica LUN configuration, hosting 1,000 users in this environment, as shown in Figure 41.

Figure 41. Dedicated replica LUN with no FAST Cache

Boot storm scenario

Overview

This section describes the boot storm results for each of the three use cases when powering up the desktop pools.

Use Case 1: FAST Cache with no dedicated replica LUN

For Use Case 1, the virtual desktops took an average of 1.5 seconds to boot. Figure 42 shows the LUN throughput and response times. The LUN response time stayed below 2 ms.

Figure 42. LUN throughput and response times for FAST Cache with no dedicated replica LUN

Figure 43 shows the throughput and response times of one of the underlying disks in the block storage pool.

Figure 43. Physical disk throughput and response times

Almost 90 percent of the read and write operations are served by the FAST Cache, as shown in Figure 44.

Figure 44. FAST Cache read and write operations

Figure 45 shows that service processor (SP) utilization during the boot storm is approximately 3 percent.

Figure 45. SP utilization during boot storm

The ESX server's CPU utilization remained below 3 percent during the boot process, as displayed in Figure 46.

Figure 46. Example boot-time CPU utilization

Figure 47 shows the memory activity on one of the ESX servers. As the virtual machines boot, they consume the free available memory. The amount of swap memory used is very low compared to the memory gain achieved by Transparent Page Sharing.

Figure 47. ESX memory activity

Figure 48 shows the ESX physical disk throughput and guest latency for one of the LUNs.

Figure 48. ESX physical disk and guest latency

Figure 49 shows the number of SCSI reservations avoided by using the Atomic Test and Set (ATS) feature of VAAI, and the number of zeroing requests sent to the array.

Figure 49. ESX VAAI statistics

The graphic shows approximately 1,670 ATS requests during the boot and about 125 zeroing requests on this LUN.

Use Case 2: FAST Cache with dedicated replica LUNs

Figure 50 shows the LUN throughput and response times during the virtual desktop boot process. The response times stayed below 4 ms most of the time.

Figure 50. LUN throughput and response times

Figure 51 shows the replica LUN throughput and response times. As expected, most of the I/O from the replica LUN consists of reads.

Figure 51. Replica LUN throughput and response times

Figure 52 shows the throughput and response times of one of the underlying physical disks in the block storage pool.

Figure 52. Physical disk throughput and response time

Figure 53 shows the FAST Cache hit ratio of the block storage pool during the virtual machine boot process.

Figure 53. FAST Cache hit ratio

As shown in Figure 54, service processor utilization is approximately 5 percent during the boot storm.

Figure 54. SP utilization during the boot storm

Figure 55 shows the CPU utilization of one of the ESX hosts during the boot process.

Figure 55. Example ESX host physical CPU utilization

Figure 56 shows the ESX server's memory consumption and the memory savings achieved by Transparent Page Sharing.

Figure 56. ESX server memory

Figure 57 shows the ESX physical disk throughput and the average guest latency.

Figure 57. ESX linked clone LUN and average guest latency

Figure 58 shows that the ESX server used 1,546 ATS operations and 84 zeroing requests on the linked clone disk during the boot operation.

Figure 58. Linked clone ESX disk VAAI statistics

Figure 59 shows the replica disk throughput and average guest latency during the boot operation.

Figure 59. ESX replica disk and average guest latency

Figure 60 shows that on the replica LUN, ESX used 58 ATS operations and no zeroing requests during the virtual machine boot process.

Figure 60. ESX replica LUN VAAI statistics

Use Case 3: A dedicated replica LUN with no FAST Cache

Without FAST Cache, more spindles are needed to host 2,000 users. The existing spindles can support up to 1,000 users, so we performed our testing with 1,000 users for this use case. Figure 61 shows the linked clone LUN throughput and response times during the boot process.

Figure 61. Linked clone LUN throughput and response times

Figure 62 shows the replica LUN throughput and response times during the boot process for 1,000 users.

Figure 62. Replica LUN throughput and response times

Figure 63 shows the physical disk throughput and response times during the boot process for 1,000 users.

Figure 63. Physical disk throughput and response times

Figure 64 shows the service processor utilization during the boot process.

Figure 64. Service processor (SP) utilization during boot storm

Figure 65 shows the ESX server physical CPU utilization during the boot process.

Figure 65. ESX server physical CPU utilization

Figure 66 shows the ESX memory utilization during the boot process.

Figure 66. ESX memory during boot storm

Figure 67 shows the linked clone ESX disk throughput and average guest latency during the boot process.

Figure 67. Linked clone ESX disk and average guest latency

Figure 68 shows that the ESX server used 1,480 ATS operations and 2,781 zeroing requests on the linked clone disk during the boot operation.

Figure 68. Linked clone ESX disk VAAI

Figure 69 shows the ESX replica disk throughput and average guest latency.

Figure 69. ESX replica disk and average guest latency

Figure 70 shows that ESX used 125 ATS operations and no zeroing requests on the replica LUN during the boot operation.

Figure 70. ESX replica LUN VAAI statistics
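The ATS counters reported throughout this chapter reflect VMFS metadata locking: instead of reserving the whole LUN, a VAAI-capable array lets ESX atomically update a single on-disk lock word, so other hosts keep doing I/O to the datastore. The toy model below illustrates the compare-and-write idea only; it is not the SCSI command format or the actual VMFS on-disk layout:

```python
class VmfsLockWord:
    """Toy model of the VAAI ATS (atomic test-and-set) primitive:
    the lock is granted only to the host whose expected value still
    matches the on-disk value, without locking the whole LUN."""
    def __init__(self):
        self.owner = None  # per-resource lock word on disk

    def ats_compare_and_write(self, expected, new):
        # Atomically: write `new` only if the stored value matches
        # `expected`; otherwise fail and let the caller retry.
        if self.owner == expected:
            self.owner = new
            return True
        return False

lock = VmfsLockWord()
assert lock.ats_compare_and_write(None, "host-a")      # host A takes the lock
assert not lock.ats_compare_and_write(None, "host-b")  # host B must retry
assert lock.ats_compare_and_write("host-a", None)      # host A releases it
```

With many linked clones sharing a few large datastores, avoiding whole-LUN reservations is exactly why the boot storms above complete without lock contention dominating the latency.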

Antivirus scan scenario

Overview

We installed the McAfee VirusScan Enterprise command-line utility on all of the virtual desktops in our test environment and executed the scan script remotely from a central machine.

Note: Although this is not the preferred way to implement an antivirus scanner in a VDI environment, the purpose of this test is to simulate a traditional customer implementation.

This section describes the antivirus scan results when powering up the desktop pools for each of the three use cases. It includes a results summary graph and graphs showing individual results from scanning 100, 200, 300, 500, and 1,000 desktops in each of the three scenarios.

Use Case 1: FAST Cache with no dedicated replica LUN

Summary

Figure 71 shows the summary results from the antivirus scans of 500, 300, 200, and 100 desktops for the scenario with FAST Cache and no dedicated replica LUN.

Figure 71. Antivirus scan summary with FAST Cache and no dedicated replica LUN

The graphs in this section show the antivirus scan response times for each of the above desktop configurations.

100-desktop antivirus scan

Figure 72 shows that the 100-desktop scan took 11 minutes.

Figure 72. 100-desktop antivirus scan LUN and response times

Figure 73 shows the physical disk throughput and response times for the 100-desktop antivirus scan.

Figure 73. Physical disk throughput and response times

Figure 74 shows the FAST Cache hit ratio for the 100-desktop antivirus scan.

Figure 74. FAST Cache hit ratio

Figure 75 shows the service processor utilization for the 100-desktop antivirus scan.

Figure 75. 100-desktop antivirus scan SP utilization

Figure 76 shows the ESX processor utilization for the 100-desktop antivirus scan.

Figure 76. ESX processor utilization

Figure 77 shows the ESX server memory utilization during the 100-desktop antivirus scan.

Figure 77. ESX server memory utilization

Figure 78 shows the ESX LUN throughput and average guest latency for the 100-desktop antivirus scan.

Figure 78. 100-desktop antivirus scan ESX LUN and average guest latency

Figure 79 shows the ESX VAAI statistics for the 100-desktop antivirus scan.

Figure 79. 100-desktop antivirus scan ESX VAAI statistics

Figure 80 shows the virtual machine disk throughput and response times for the 100-desktop antivirus scan.

Figure 80. 100-desktop antivirus scan virtual machine disk and response times

200-desktop antivirus scan

Figure 81 shows that it took 27 minutes and 12 seconds to scan 200 desktops.

Figure 81. 200-desktop antivirus scan LUN and response times

Figure 82 shows the physical disk throughput and response times for the 200-desktop antivirus scan.

Figure 82. 200-desktop antivirus scan physical disk and response times

Figure 83 shows the FAST Cache hit ratio for the 200-desktop antivirus scan.

Figure 83. 200-desktop antivirus scan FAST Cache hit ratio

Figure 84 shows the service processor utilization for the 200-desktop antivirus scan.

Figure 84. 200-desktop antivirus scan SP utilization

Figure 85 shows the ESX server's processor utilization during the 200-desktop antivirus scan.

Figure 85. ESX server processor utilization

Figure 86 shows the ESX server's memory utilization during the 200-desktop antivirus scan.

Figure 86. 200-desktop antivirus scan ESX memory utilization

Figure 87 shows the ESX LUN throughput and average guest latency for the 200-desktop antivirus scan.

Figure 87. 200-desktop antivirus scan ESX LUN and average guest latency

Figure 88 shows the ESX VAAI statistics for the 200-desktop antivirus scan.

Figure 88. ESX VAAI statistics

Figure 89 shows the virtual machine disk throughput and latency for the 200-desktop antivirus scan.

Figure 89. 200-desktop antivirus scan virtual machine disk and latency

300-desktop antivirus scan

Figure 90 shows that it took 40 minutes and 47 seconds to scan 300 desktops.

Figure 90. 300-desktop antivirus scan LUN and response times

Figure 91 shows the physical disk throughput and response times for the 300-desktop antivirus scan.

Figure 91. 300-desktop antivirus scan physical disk and response time

Figure 92 shows the FAST Cache hit ratio for the 300-desktop antivirus scan.

Figure 92. 300-desktop antivirus scan FAST Cache hit ratio

Figure 93 shows the service processor utilization for the 300-desktop antivirus scan.

Figure 93. 300-desktop antivirus scan SP utilization

Figure 94 shows the ESX server CPU utilization for the 300-desktop antivirus scan.

Figure 94. 300-desktop antivirus scan ESX CPU utilization

Figure 95 shows the ESX memory utilization for the 300-desktop antivirus scan.

Figure 95. 300-desktop antivirus scan ESX memory utilization

Figure 96 shows the ESX LUN throughput and average guest latency for the 300-desktop antivirus scan.

Figure 96. 300-desktop antivirus scan ESX LUN and average guest latency

Figure 97 shows the ESX VAAI statistics for the 300-desktop antivirus scan.

Figure 97. 300-desktop antivirus scan ESX VAAI statistics

Figure 98 shows the virtual machine disk throughput and latency for the 300-desktop antivirus scan.

Figure 98. 300-desktop antivirus scan virtual machine disk and latency

500-desktop antivirus scan

Figure 99 shows that it took one hour and 17 minutes to scan 500 desktops.

Figure 99. 500-desktop antivirus scan LUN and response times

Figure 100 shows the physical disk throughput and response times for the 500-desktop antivirus scan.

Figure 100. 500-desktop antivirus scan physical disk and response times

Figure 101 shows the FAST Cache hit ratio for the 500-desktop antivirus scan.

Figure 101. 500-desktop antivirus scan FAST Cache hit ratio

Figure 102 shows the service processor utilization for the 500-desktop antivirus scan.

Figure 102. 500-desktop antivirus scan service processor utilization

Figure 103 shows the ESX server CPU utilization during the 500-desktop antivirus scan.

Figure 103. 500-desktop antivirus scan ESX server CPU utilization

Figure 104 shows the ESX memory utilization during the 500-desktop antivirus scan.

Figure 104. 500-desktop antivirus scan ESX server memory utilization

Figure 105 shows the ESX disk and average guest latency for the 500-desktop antivirus scan.

Figure 105. 500-desktop antivirus scan ESX disk and average guest latency

Figure 106 shows the ESX VAAI statistics for the 500-desktop antivirus scan.

Figure 106. 500-desktop antivirus scan ESX VAAI statistics

Figure 107 shows the virtual machine disk and latency for the 500-desktop antivirus scan.

Figure 107. 500-desktop antivirus scan virtual machine disk and latency

Use Case 2: With FAST Cache and a dedicated replica LUN

Summary

Figure 108 shows the summary results from an antivirus scan of 500, 300, 200, and 100 desktops for a scenario with FAST Cache and a dedicated replica LUN.

Figure 108. Antivirus scan summary with FAST Cache and a dedicated replica LUN

The graphs in this section show the antivirus scan response times for each of the above desktop configurations.
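The scan durations reported in this section can be reduced to an average wall-clock time per desktop, which makes the scaling trend explicit. A minimal sketch using the durations reported for this use case (the function name is illustrative):

```python
from datetime import timedelta

# Scan durations for Use Case 2 (FAST Cache plus a dedicated replica LUN),
# as reported in this section of the guide.
durations = {
    100: timedelta(minutes=19, seconds=55),
    200: timedelta(minutes=44, seconds=27),
    300: timedelta(hours=1, minutes=7, seconds=39),
    500: timedelta(hours=2, minutes=1, seconds=1),
}

def per_desktop_seconds(desktops: int) -> float:
    """Average wall-clock seconds of scan time per desktop."""
    return durations[desktops].total_seconds() / desktops

for n in sorted(durations):
    print(f"{n} desktops: {per_desktop_seconds(n):.2f} s/desktop")
```

The per-desktop average grows with the desktop count, consistent with the rising response-time curves in the figures that follow.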

100-desktop antivirus scan

Figure 109 shows that it took 19 minutes and 55 seconds to scan 100 desktops.

Figure 109. 100-desktop antivirus scan linked clone LUN and response times

Figure 110 shows the replica LUN and response times for the 100-desktop antivirus scan.

Figure 110. 100-desktop antivirus scan replica LUN and response times

Figure 111 shows the physical disk and response times for the 100-desktop antivirus scan.

Figure 111. 100-desktop antivirus scan physical disk and response times

Figure 112 shows the FAST Cache hit ratio for the 100-desktop antivirus scan.

Figure 112. 100-desktop antivirus scan FAST Cache hit ratio

Figure 113 shows the service processor utilization for the 100-desktop antivirus scan. The replica LUN is owned by service processor A.

Figure 113. 100-desktop antivirus scan SP utilization

Figure 114 shows the ESX server CPU utilization during the 100-desktop antivirus scan.

Figure 114. 100-desktop antivirus scan ESX server CPU utilization

Figure 115 shows the ESX memory utilization during the 100-desktop antivirus scan.

Figure 115. 100-desktop antivirus scan ESX memory utilization

Figure 116 shows the ESX linked clone LUN and average guest latency for the 100-desktop antivirus scan.

Figure 116. 100-desktop antivirus scan ESX linked clone LUN and average guest latency

Figure 117 shows the ESX VAAI statistics for the 100-desktop antivirus scan.

Figure 117. 100-desktop antivirus scan ESX VAAI statistics

Figure 118 shows the ESX replica LUN and average guest latency for the 100-desktop antivirus scan.

Figure 118. 100-desktop antivirus scan ESX replica LUN and average guest latency

Figure 119 shows the ESX replica LUN VAAI statistics for the 100-desktop antivirus scan.

Figure 119. 100-desktop antivirus scan ESX replica LUN VAAI statistics

200-desktop antivirus scan

Figure 120 shows that it took 44 minutes and 27 seconds to scan 200 desktops.

Figure 120. 200-desktop antivirus scan linked clone LUN and response times

Figure 121 shows the replica LUN and response times for the 200-desktop antivirus scan.

Figure 121. 200-desktop antivirus scan replica LUN and response times

Figure 122 shows the physical disk and response times for the 200-desktop antivirus scan.

Figure 122. 200-desktop antivirus scan physical disk and response times

Figure 123 shows the FAST Cache hit ratio for the 200-desktop antivirus scan.

Figure 123. 200-desktop antivirus scan FAST Cache hit ratio

Figure 124 shows the service processor utilization during the 200-desktop antivirus scan.

Figure 124. 200-desktop antivirus scan service processor utilization

Figure 125 shows the ESX CPU utilization during the 200-desktop antivirus scan.

Figure 125. 200-desktop antivirus scan ESX CPU utilization

Figure 126 shows the ESX memory utilization during the 200-desktop antivirus scan.

Figure 126. 200-desktop antivirus scan ESX memory utilization

Figure 127 shows the ESX server LUN and average guest latency for the 200-desktop antivirus scan.

Figure 127. 200-desktop antivirus scan ESX LUN and average guest latency

Figure 128 shows the ESX linked clone LUN VAAI statistics for the 200-desktop antivirus scan.

Figure 128. 200-desktop antivirus scan ESX linked clone LUN VAAI statistics

Figure 129 shows the ESX replica LUN and average guest latency for the 200-desktop antivirus scan.

Figure 129. 200-desktop antivirus scan ESX replica LUN and average guest latency

Figure 130 shows the ESX replica LUN VAAI statistics for the 200-desktop antivirus scan.

Figure 130. 200-desktop antivirus scan ESX replica LUN VAAI statistics

300-desktop antivirus scan

Figure 131 shows that it took 1 hour, 7 minutes, and 39 seconds to scan 300 desktops.

Figure 131. 300-desktop antivirus scan linked clone LUN and response times

Figure 132 shows the replica LUN and response times for the 300-desktop antivirus scan.

Figure 132. 300-desktop antivirus scan replica LUN and response times

Figure 133 shows the physical disk and response times for the 300-desktop antivirus scan.

Figure 133. 300-desktop antivirus scan physical disk and response times

Figure 134 shows the FAST Cache hit ratio for the 300-desktop antivirus scan.

Figure 134. 300-desktop antivirus scan FAST Cache hit ratio

Figure 135 shows the service processor utilization during the 300-desktop antivirus scan.

Figure 135. 300-desktop antivirus scan service processor utilization

Figure 136 shows the ESX CPU utilization during the 300-desktop antivirus scan.

Figure 136. 300-desktop antivirus scan ESX CPU utilization

Figure 137 shows the ESX server's memory utilization during the 300-desktop antivirus scan.

Figure 137. 300-desktop antivirus scan ESX memory utilization

Figure 138 shows the ESX linked clone LUN and average guest latency during the 300-desktop antivirus scan.

Figure 138. 300-desktop antivirus scan ESX linked clone LUN and average guest latency

Figure 139 shows the ESX linked clone LUN VAAI statistics for the 300-desktop antivirus scan.

Figure 139. 300-desktop antivirus scan ESX linked clone LUN VAAI statistics

Figure 140 shows the ESX replica LUN and average guest latency for the 300-desktop antivirus scan.

Figure 140. 300-desktop antivirus scan ESX replica LUN and average guest latency

Figure 141 shows the ESX replica LUN VAAI statistics for the 300-desktop antivirus scan.

Figure 141. 300-desktop antivirus scan VAAI statistics for the replica LUN

500-desktop antivirus scan

Figure 142 shows that it took 2 hours, 1 minute, and 1 second to scan 500 virtual desktops.

Figure 142. 500-desktop antivirus scan LUN and response times

Figure 143 shows the replica LUN and response times for the 500-desktop antivirus scan.

Figure 143. 500-desktop antivirus scan replica LUN and response times

Figure 144 shows the physical disk and response times for the 500-desktop antivirus scan.

Figure 144. 500-desktop antivirus scan physical disk and response times

Figure 145 shows the FAST Cache hit ratio for the 500-desktop antivirus scan.

Figure 145. 500-desktop antivirus scan FAST Cache hit ratio

Figure 146 shows the service processor utilization during the 500-desktop antivirus scan.

Figure 146. 500-desktop antivirus scan service processor utilization

Figure 147 shows the ESX server CPU utilization during the 500-desktop antivirus scan.

Figure 147. 500-desktop antivirus scan ESX server CPU utilization

Figure 148 shows the ESX server's memory utilization during the 500-desktop antivirus scan.

Figure 148. 500-desktop antivirus scan ESX memory utilization

Figure 149 shows the ESX linked clone LUN and average guest latency during the 500-desktop antivirus scan.

Figure 149. 500-desktop antivirus scan ESX linked clone LUN and average guest latency

Figure 150 shows the ESX linked clone LUN VAAI statistics for the 500-desktop antivirus scan.

Figure 150. 500-desktop antivirus scan ESX linked clone LUN VAAI statistics

Figure 151 shows the ESX replica LUN and average guest latency for the 500-desktop antivirus scan.

Figure 151. 500-desktop antivirus scan ESX replica LUN and average guest latency

Figure 152 shows the ESX replica LUN VAAI statistics for the 500-desktop antivirus scan.

Figure 152. 500-desktop antivirus scan ESX replica LUN VAAI statistics

Use Case 3: A dedicated replica LUN with no FAST Cache

Summary

Figure 153 shows the summary results from an antivirus scan of 500, 300, 200, and 100 desktops for a scenario with a dedicated replica LUN but no FAST Cache configured.

Figure 153. Antivirus scan summary without FAST Cache but with a dedicated replica LUN

The graphs in this section show the antivirus scan response times for each of the above desktop configurations.

100-desktop antivirus scan

Figure 154 shows the LUN and response times for the 100-desktop antivirus scan.

Figure 154. 100-desktop antivirus scan LUN and response times

Figure 155 shows the replica LUN and response times for the 100-desktop antivirus scan.

Figure 155. 100-desktop antivirus scan replica LUN and response times

Figure 156 shows the physical disk and response times for the 100-desktop antivirus scan.

Figure 156. 100-desktop antivirus scan physical disk and response times

Figure 157 shows the service processor utilization during the 100-desktop antivirus scan.

Figure 157. 100-desktop antivirus scan service processor utilization

Figure 158 shows the ESX server CPU utilization during the 100-desktop antivirus scan.

Figure 158. 100-desktop antivirus scan ESX CPU utilization

Figure 159 shows the ESX server's memory utilization during the 100-desktop antivirus scan.

Figure 159. 100-desktop antivirus scan ESX memory utilization

Figure 160 shows the ESX linked clone LUN and average guest latency for the 100-desktop antivirus scan.

Figure 160. 100-desktop antivirus scan ESX linked clone LUN and average guest latency

Figure 161 shows the ESX linked clone LUN VAAI statistics for the 100-desktop antivirus scan.

Figure 161. 100-desktop antivirus scan ESX linked clone LUN VAAI statistics

Figure 162 shows the ESX replica LUN and average guest latency for the 100-desktop antivirus scan.

Figure 162. 100-desktop antivirus scan ESX replica LUN and average guest latency

Figure 163 shows the ESX replica LUN VAAI statistics for the 100-desktop antivirus scan.

Figure 163. 100-desktop antivirus scan ESX replica LUN VAAI statistics

200-desktop antivirus scan

Figure 164 shows the LUN and response times for the 200-desktop antivirus scan.

Figure 164. 200-desktop antivirus scan LUN and response times

Figure 165 shows the replica LUN and response times for the 200-desktop antivirus scan.

Figure 165. 200-desktop antivirus scan replica LUN and response times

Figure 166 shows the physical disk and response times for the 200-desktop antivirus scan.

Figure 166. 200-desktop antivirus scan physical disk and response times

Figure 167 shows the service processor utilization for the 200-desktop antivirus scan.

Figure 167. 200-desktop antivirus scan service processor utilization

Figure 168 shows the ESX server's CPU utilization during the 200-desktop antivirus scan.

Figure 168. 200-desktop antivirus scan ESX server CPU utilization

Figure 169 shows the ESX memory utilization during the 200-desktop antivirus scan.

Figure 169. 200-desktop antivirus scan ESX memory utilization

Figure 170 shows the ESX linked clone LUN and average guest latency for the 200-desktop antivirus scan.

Figure 170. 200-desktop antivirus scan ESX linked clone LUN and average guest latency

Figure 171 shows the ESX linked clone LUN VAAI statistics for the 200-desktop antivirus scan.

Figure 171. 200-desktop antivirus scan ESX linked clone LUN VAAI statistics

Figure 172 shows the ESX replica LUN and average guest latency for the 200-desktop antivirus scan.

Figure 172. 200-desktop antivirus scan ESX replica LUN and average guest latency

Figure 173 shows the ESX replica LUN VAAI statistics for the 200-desktop antivirus scan.

Figure 173. 200-desktop antivirus scan ESX replica LUN VAAI statistics

300-desktop antivirus scan

Figure 174 shows that it took 58 minutes and 14 seconds to scan 300 virtual desktops.

Figure 174. 300-desktop antivirus scan linked clone LUN and response times

Figure 175 shows the replica LUN and response times for the 300-desktop antivirus scan.

Figure 175. 300-desktop antivirus scan replica LUN and response times

Figure 176 shows the physical disk and response times for the 300-desktop antivirus scan.

Figure 176. 300-desktop antivirus scan physical disk and response times

Figure 177 shows the service processor utilization during the 300-desktop antivirus scan.

Figure 177. 300-desktop antivirus scan service processor utilization

Figure 178 shows the ESX server's CPU utilization during the 300-desktop antivirus scan.

Figure 178. 300-desktop antivirus scan ESX CPU utilization

Figure 179 shows the ESX server's memory utilization during the 300-desktop antivirus scan.

Figure 179. 300-desktop antivirus scan ESX memory utilization

Figure 180 shows the ESX linked clone LUN and average guest latency for the 300-desktop antivirus scan.

Figure 180. 300-desktop antivirus scan ESX linked clone LUN and average guest latency

Figure 181 shows the ESX linked clone LUN VAAI statistics for the 300-desktop antivirus scan.

Figure 181. 300-desktop antivirus scan ESX linked clone LUN VAAI statistics

Figure 182 shows the replica LUN and average guest latency for the 300-desktop antivirus scan.

Figure 182. 300-desktop antivirus scan replica LUN and average guest latency

Figure 183 shows the replica LUN VAAI statistics for the 300-desktop antivirus scan.

Figure 183. 300-desktop antivirus scan replica LUN VAAI statistics

500-desktop antivirus scan

Figure 184 shows that it took approximately 1 hour and 47 minutes to scan 500 desktops.

Figure 184. 500-desktop antivirus scan linked clone LUN and response times

Figure 185 shows the replica LUN and response times for the 500-desktop antivirus scan.

Figure 185. 500-desktop antivirus scan replica LUN and response times

Figure 186 shows the physical disk and response times for the 500-desktop antivirus scan.

Figure 186. 500-desktop antivirus scan physical disk and response times

Figure 187 shows the service processor utilization during the 500-desktop antivirus scan.

Figure 187. 500-desktop antivirus scan service processor utilization

Figure 188 shows the ESX server's CPU utilization during the 500-desktop antivirus scan.

Figure 188. 500-desktop antivirus scan ESX CPU utilization

Figure 189 shows the ESX server's memory utilization during the 500-desktop antivirus scan.

Figure 189. 500-desktop antivirus scan ESX memory utilization

Figure 190 shows the ESX linked clone LUN and average guest latency for the 500-desktop antivirus scan.

Figure 190. 500-desktop antivirus scan ESX linked clone LUN and average guest latency

Figure 191 shows the ESX linked clone LUN VAAI statistics for the 500-desktop antivirus scan.

Figure 191. 500-desktop antivirus scan ESX linked clone LUN VAAI statistics

Figure 192 shows the ESX replica LUN and average guest latency for the 500-desktop antivirus scan.

Figure 192. 500-desktop antivirus scan ESX replica LUN and average guest latency

Figure 193 shows the ESX replica LUN VAAI statistics for the 500-desktop antivirus scan.

Figure 193. 500-desktop antivirus scan ESX replica LUN VAAI statistics

Antivirus scenario summary

The larger FAST Cache configuration benefited Use Case 1 (FAST Cache only) in comparison to Use Cases 2 and 3, as shown in Figure 194. The average time to scan a single desktop (h:mm:ss) for each configuration was:

                                    Scan 500   Scan 300   Scan 200   Scan 100
FAST Cache & Replica on EFD (P1)    :57:4      :41:1      :31:3      :2:36
No FAST Cache (P3/P4)               :46:33     :27:46     :14:47     :6:11
FAST Cache only (P1)                :25:1      :26:3      :18:3      :9:43

Figure 194. Average time to scan a single desktop

Note: We performed Use Case 3 (without FAST Cache) on a server that had twice the ESX memory of the other two use cases.

Login VSI test scenario

Overview

To simulate a real-world user workload scenario, we used the Login Virtual Session Index (VSI) tool, version 2.1. The Login VSI workload can be categorized as light, medium, heavy, or custom. A medium workload was selected for this testing; it has the following characteristics:

Simulates normal user behavior and speeds for a medium workload
Uses Microsoft Office applications, Internet Explorer, Adobe Acrobat Reader, and zip files
Tasks include launching applications, typing, minimizing and maximizing other applications, printing, reading PDFs, and browsing Flash-based websites

This section describes the Login VSI tests for each of the three use cases.

Use Case 1: With FAST Cache and no dedicated replica LUN

We tested this use case in the following conditions:

With the Auto Tiering option enabled
With the Performance Tiering option enabled

The results of both conditions are shown below.

With the Auto Tiering option enabled

Figure 195 shows the Login VSI test results using the Auto Tiering option.

Figure 195. Auto Tiering Login VSI results

With the Performance Tiering option enabled

Figure 196 shows the Login VSI results using the Performance Tiering option.

Figure 196. Performance Tiering Login VSI results

The following graphs show the results from the Performance Tiering run.

Figure 197 shows the LUN and response times during the Login VSI test with FAST Cache enabled and no dedicated replica LUN.

Figure 197. LUN and response times

Figure 198 shows the physical disk and response times during the Login VSI test with FAST Cache enabled and no dedicated replica LUN.

Figure 198. Physical disk and response time

Figure 199 shows the FAST Cache read hit ratio during the Login VSI test with FAST Cache enabled and no dedicated replica LUN.

Figure 199. FAST Cache read hit ratio

Figure 200 shows the FAST Cache write hit activity during the Login VSI test with FAST Cache enabled and no dedicated replica LUN. (Chart series: VDI - FAST Cache Write Hits/s and Write Misses/s.)

Figure 200. FAST Cache write hit ratio

Figure 201 shows the FAST Cache hit ratio for both read and write activity during the Login VSI test with FAST Cache enabled and no dedicated replica LUN. (Chart series: VDI - FAST Cache Read Hit Ratio and Write Hit Ratio.)

Figure 201. FAST Cache hit ratio
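The hit-ratio percentages plotted for FAST Cache are derived from the per-second hit and miss counters shown in the surrounding charts. A minimal sketch of that arithmetic (the function and parameter names are illustrative, not Unisphere Analyzer field names):

```python
def hit_ratio(hits_per_s, misses_per_s):
    """Percentage of I/Os served from FAST Cache.

    hit ratio = hits / (hits + misses) * 100
    """
    total = hits_per_s + misses_per_s
    if total == 0:
        return 0.0  # no I/O in this sample interval
    return 100.0 * hits_per_s / total

# e.g. 450 read hits/s and 50 read misses/s -> 90% read hit ratio
print(hit_ratio(450, 50))  # prints 90.0
```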

Figure 202 shows the service processor utilization during the Login VSI test with FAST Cache enabled and no dedicated replica LUN. (Chart series: SP A - Utilization (%) and SP B - Utilization (%).)

Figure 202. Service processor utilization

Figure 203 shows the ESX CPU utilization during the Login VSI test with FAST Cache enabled and no dedicated replica LUN. (Chart series: \\c1b1\Physical Cpu(_Total)\% Util Time.)

Figure 203. ESX CPU utilization

Figure 204 shows the ESX server's memory utilization during the Login VSI test with FAST Cache enabled and no dedicated replica LUN. (Chart series: \\c1b1\memory\free MBytes, memctl Current MBytes, pshare Shared MBytes, swap Used MBytes, and total Compressed MBytes.)

Figure 204. ESX memory utilization

Figure 205 shows the ESX disk activity and average guest latency for the Login VSI test with FAST Cache enabled and no dedicated replica LUN. (Chart series: \\c1b1\Physical Disk SCSI Device(naa.66167b293225bc67dc25e11)\Reads/sec, Writes/sec, and Average Guest MilliSec/Command.)

Figure 205. ESX disk and average guest latency
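Host-side counters like those in the two figures above come from esxtop batch output, a CSV file with one timestamped row per sample and one column per counter. A hedged Python sketch of averaging one such counter (the file contents, device name `naa.x`, and function name are hypothetical, assuming only the general one-column-per-counter CSV layout):

```python
import csv
import io

def average_counter(csv_text, column_substring):
    """Average an esxtop batch-mode counter whose header contains the
    given substring (batch output has one column per counter)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    # Find the first column whose name matches, e.g. a guest-latency counter.
    idx = next(i for i, name in enumerate(header) if column_substring in name)
    values = [float(row[idx]) for row in data]
    return sum(values) / len(values)

# Hypothetical two-sample capture of guest latency for one device.
sample = (
    '"Time","\\\\c1b1\\Physical Disk SCSI Device(naa.x)\\Average Guest MilliSec/Command"\n'
    '"10:00:05","4.0"\n'
    '"10:00:10","6.0"\n'
)
print(average_counter(sample, "Average Guest MilliSec"))  # prints 5.0
```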

Figure 206 shows the ESX disk VAAI statistics for the Login VSI test with FAST Cache enabled and no dedicated replica LUN. (Chart series: \\c1b1\Physical Disk SCSI Device(naa.66167b293225bc67dc25e11)\ATS and Zeros.)

Figure 206. ESX disk VAAI statistics

Figure 207 shows the virtual machine disk activity and latency for the Login VSI test with FAST Cache enabled and no dedicated replica LUN. (Chart series: \\c1b1\Virtual Disk(VD15P1)\Reads/sec, Writes/sec, Average MilliSec/Read, and Average MilliSec/Write.)

Figure 207. Virtual machine disk and latency

Use Case 2: with FAST Cache and a dedicated replica LUN

Figure 208 shows the Login VSI test results for Use Case 2.

Figure 208. Login VSI test results

Figure 209 shows the LUN throughput and response times for the Login VSI test with FAST Cache enabled and a dedicated replica LUN. (Chart series: ViewDS8 Read, ViewDS8 Write, and ViewDS8 Response Time.)

Figure 209. LUN and response times


More information

Cisco Solution for EMC VSPEX End-User Computing

Cisco Solution for EMC VSPEX End-User Computing Reference Architecture Cisco Solution for EMC VSPEX End-User Computing Citrix XenDesktop 5.6 with VMware vsphere 5 for 1000 Virtual Desktops Enabled by Cisco Unified Computing System, Cisco Nexus Switches,Citrix

More information

SQL Server Consolidation on VMware Using Cisco Unified Computing System

SQL Server Consolidation on VMware Using Cisco Unified Computing System White Paper SQL Server Consolidation on VMware Using Cisco Unified Computing System White Paper December 2011 Contents Executive Summary... 3 Introduction... 3 Audience and Scope... 4 Today s Challenges...

More information

What s New in VMware vsphere 4.1 Storage. VMware vsphere 4.1

What s New in VMware vsphere 4.1 Storage. VMware vsphere 4.1 What s New in VMware vsphere 4.1 Storage VMware vsphere 4.1 W H I T E P A P E R Introduction VMware vsphere 4.1 brings many new capabilities to further extend the benefits of vsphere 4.0. These new features

More information

Hitachi Unified Compute Platform (UCP) Pro for VMware vsphere

Hitachi Unified Compute Platform (UCP) Pro for VMware vsphere Test Validation Hitachi Unified Compute Platform (UCP) Pro for VMware vsphere Author:, Sr. Partner, Evaluator Group April 2013 Enabling you to make the best technology decisions 2013 Evaluator Group, Inc.

More information

EMC UNISPHERE FOR VNXe: NEXT-GENERATION STORAGE MANAGEMENT A Detailed Review

EMC UNISPHERE FOR VNXe: NEXT-GENERATION STORAGE MANAGEMENT A Detailed Review White Paper EMC UNISPHERE FOR VNXe: NEXT-GENERATION STORAGE MANAGEMENT A Detailed Review Abstract This white paper introduces EMC Unisphere for VNXe, a web-based management environment for creating storage

More information

Leveraging EMC Fully Automated Storage Tiering (FAST) and FAST Cache for SQL Server Enterprise Deployments

Leveraging EMC Fully Automated Storage Tiering (FAST) and FAST Cache for SQL Server Enterprise Deployments Leveraging EMC Fully Automated Storage Tiering (FAST) and FAST Cache for SQL Server Enterprise Deployments Applied Technology Abstract This white paper introduces EMC s latest groundbreaking technologies,

More information

IBM Storwize V5000. Designed to drive innovation and greater flexibility with a hybrid storage solution. Highlights. IBM Systems Data Sheet

IBM Storwize V5000. Designed to drive innovation and greater flexibility with a hybrid storage solution. Highlights. IBM Systems Data Sheet IBM Storwize V5000 Designed to drive innovation and greater flexibility with a hybrid storage solution Highlights Customize your storage system with flexible software and hardware options Boost performance

More information

Setup for Failover Clustering and Microsoft Cluster Service

Setup for Failover Clustering and Microsoft Cluster Service Setup for Failover Clustering and Microsoft Cluster Service Update 1 ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until

More information

Top 5 Reasons to choose Microsoft Windows Server 2008 R2 SP1 Hyper-V over VMware vsphere 5

Top 5 Reasons to choose Microsoft Windows Server 2008 R2 SP1 Hyper-V over VMware vsphere 5 Top 5 Reasons to choose Microsoft Windows Server 2008 R2 SP1 Hyper-V over VMware Published: April 2012 2012 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and

More information

Business Continuity for Microsoft Exchange 2010 Enabled by EMC Unified Storage, Cisco Unified Computing System, and Microsoft Hyper-V

Business Continuity for Microsoft Exchange 2010 Enabled by EMC Unified Storage, Cisco Unified Computing System, and Microsoft Hyper-V Chapte 1: Introduction Business Continuity for Microsoft Exchange 2010 Enabled by EMC Unified Storage, Cisco Unified Computing System, and Microsoft Hyper-V A Detailed Review EMC Information Infrastructure

More information

Virtual Desktop Infrastructure (VDI) Overview

Virtual Desktop Infrastructure (VDI) Overview Virtual Desktop Infrastructure (VDI) Overview October 2012 : EMC Global Services Gary Ciempa, Vinay Patel EMC Technical Assessment for Virtual Desktop Infrastructure COPYRIGHT 2012 EMC CORPORATION. ALL

More information

EMC MIGRATION OF AN ORACLE DATA WAREHOUSE

EMC MIGRATION OF AN ORACLE DATA WAREHOUSE EMC MIGRATION OF AN ORACLE DATA WAREHOUSE EMC Symmetrix VMAX, Virtual Improve storage space utilization Simplify storage management with Virtual Provisioning Designed for enterprise customers EMC Solutions

More information

MAXIMIZING AVAILABILITY OF MICROSOFT SQL SERVER 2012 ON VBLOCK SYSTEMS

MAXIMIZING AVAILABILITY OF MICROSOFT SQL SERVER 2012 ON VBLOCK SYSTEMS Maximizing Availability of Microsoft SQL Server 2012 on Vblock Systems Table of Contents www.vce.com MAXIMIZING AVAILABILITY OF MICROSOFT SQL SERVER 2012 ON VBLOCK SYSTEMS January 2013 1 Contents Introduction...4

More information