Reference Architecture: VDI for Federal Government Agencies


Technical Report

Delivering High-Performance, Cost-Effective Virtual Desktops Using Violin Memory Arrays, Cisco UCS, and VMware View

January 2013

Abstract: This technical report describes a reference architecture for a virtual desktop infrastructure (VDI) that utilizes flash-based array technology from Violin Memory for high-performance, cost-effective storage.

Contents

1 Introduction
  1.1 Purpose and Scope
  1.2 Reference Architecture Objectives
  1.3 Primary Use Case: Federal Agency Teleworking
2 Federal Agency Technology Challenges
3 Force 3 Virtual Desktop Infrastructure
  3.1 Using Flash Array to Optimize VDI Performance and Storage
4 Solution Components
  4.1 VMware View 5
  4.2 View PC-over-IP (PCoIP) Protocol
  4.3 VMware View Composer with Linked Clones
  4.4 Cisco Unified Computing System (UCS)
  4.5 Cisco UCS Manager
  4.6 Cisco Unified Fabric
  4.7 Violin Flash Memory Array Storage
5 Solution Architecture
6 Performance Validation
  6.1 Force 3 Customer Innovation Center
  6.2 Test Topology
  6.3 Test Hardware Resources
  6.4 Software Resources
  6.5 Test Procedure
  6.6 Performance Validation Results
  6.7 Performance Characteristics
7 Conclusion
About Violin Memory

1 Introduction

IT organizations in federal agencies are under pressure to increase systems efficiency and agility. Many factors contribute to this pressure, including the need to reduce costs, increase data security, streamline systems management, and ensure systems continuity during emergencies. An increasing number of federal agencies are looking to Virtual Desktop Infrastructure (VDI) as an effective means of addressing these needs. VDI gives IT organizations the means to centrally manage and deliver desktops as a service. This paper discusses the challenges faced by federal agencies and shows how VDI helps them:

- Provide federal workers with immediate and flexible access to desktops and applications.
- Comply with security requirements and continuity-of-operations standards.
- Reduce the cost and complexity of desktop management.

1.1 Purpose and Scope

This technical report presents a VDI reference architecture developed and tested by Force 3, a VMware Premier Partner that has served federal agencies for nearly 20 years. The reference architecture has been designed to meet federal desktop computing needs and tested with workloads that simulate real-world activities. The test results illustrate how this VDI reference architecture delivers excellent performance for the user and provides the capacity to scale the number of virtual desktops for agency-wide deployment. The architecture can serve as a guide for federal agencies as they architect a VMware View deployment.

1.2 Reference Architecture Objectives

This VDI reference architecture was designed to meet several goals, including the following:

1. Provide an excellent user experience around application access, data availability, and overall performance, regardless of worker role, location, or device.
2. Support rapid desktop provisioning and refreshes to create flexibility and speed in maintaining the desktop environment.
3. Provide a simple and cost-effective method for rapid piloting, with the ability to quickly scale up as needed.
4. Enable streamlined and comprehensive centralized desktop management, including infrastructure (for example, storage), desktop images, and desktop security.

1.3 Primary Use Case: Federal Agency Teleworking

Teleworking provides a multitude of benefits, including higher employee morale, improved employee recruitment and retention, and increased productivity. It is also a key component of continuity of operations (COOP) plans and an answer to the government's green data center initiative. But difficulties in managing remote workstations, increased security risks with laptops, and the IT funding required for implementation have proven to be major barriers to government participation in teleworking. VDI is an attractive technology platform for teleworking because it provides strong security (the end device contains no user data), anytime/anywhere access to the desktop via network or VPN, and lower management, operational, and support costs.

2 Federal Agency Technology Challenges

Federal agencies spend nearly 70 percent of their IT budgets just to keep existing systems up and running. When it comes to desktop management, the situation is becoming more complex:

- Federal workers and contractors are becoming increasingly mobile.
- Security and compliance requirements are proliferating.
- The variety and overall number of computing devices is expanding rapidly.

Managing traditional desktop environments was relatively straightforward when all personnel and work resources were confined to a controlled office space. But today's dynamic federal workforce, including the military, involves field-based, office-based, remote, forward-deployed, and contractor personnel. This mix of environments, devices, and worker roles presents many new challenges for desktop management:

Providing anytime, anywhere access. Wherever they are, federal workers need real-time, secure access to familiar computing resources. However, provisioning the correct operating system, security parameters, applications, and data can be difficult and expensive.

Meeting strict security and compliance requirements. Agencies must meet strict computing security requirements that often involve complicated policies and procedures. Ensuring that these policies are properly enforced in a traditional desktop environment requires significant time and IT resources.

Supporting continuity of operations planning. Since 9/11 and Hurricane Katrina, all federal agencies have committed to improving contingency planning and emergency preparedness through COOP planning. Maintaining continuity of operations requires a highly available and resilient IT infrastructure, including desktops, so that federal workers can do their jobs effectively regardless of circumstances.

Streamlining IT operations and management. Federal IT organizations are being pressured to reduce expenses by streamlining operations and management, yet supporting and patching traditional desktop computers is becoming increasingly complex. Meeting these challenges requires a new, more efficient way to deliver applications and manage desktops.

3 Force 3 Virtual Desktop Infrastructure

Force 3 designs innovative technology solutions for government and commercial customers worldwide. Virtualized telework and desktop solutions are a Force 3 core competency. The architecture is designed to optimize the user experience by employing an innovative storage system that accelerates boot times, provisioning, and steady-state operation. It has been tested with workloads that simulate real-world federal computing environments. Test results show excellent performance for users while providing the capacity to scale the number of simultaneous virtual desktops.

3.1 Using Flash Array to Optimize VDI Performance and Storage

In a typical VDI deployment, storage input/output (I/O) becomes a major performance bottleneck and therefore a major contributor to a negative end-user experience. By centralizing desktop workloads, VDI combines the I/O requirements of individual desktops into a centralized storage system. Although the average user I/O workload is 5-8 I/Os per second (IOPS), peak load can range from 20 to 100 IOPS per user on a centralized platform. During peak I/O periods, such as morning login and evening logoff, the storage system must be able to handle the peak I/O of all users at the same time with little effect on the end user. The Force 3 reference architecture solves the storage I/O issue by using flash arrays as the storage medium for virtualized desktop instances. Flash arrays offer very high IOPS and ultra-low latency (response times) compared to traditional hard disk drives.
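The aggregation math above is worth making concrete. A minimal sketch, using the per-desktop figures quoted in this section and the 500-desktop building block described later in this paper:

```python
# Back-of-the-envelope sizing for aggregated VDI storage I/O.
# Per-desktop figures (5-8 IOPS average, up to 100 IOPS peak) and the
# 500-desktop building block are taken from this paper.

def aggregate_iops(desktops, per_desktop_iops):
    """Total IOPS the shared array must absorb if every desktop
    hits the given per-desktop rate at the same time."""
    return desktops * per_desktop_iops

steady_state = aggregate_iops(500, 8)     # everyday load, high end of average
login_storm = aggregate_iops(500, 100)    # worst case: simultaneous morning login

print(f"steady state: {steady_state:,} IOPS")   # 4,000 IOPS
print(f"login storm:  {login_storm:,} IOPS")    # 50,000 IOPS
```

The order-of-magnitude jump between steady state and a simultaneous login storm is exactly the gap that makes spinning-disk arrays the bottleneck in centralized VDI.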
Even under heavy workloads, flash arrays provide consistent performance characteristics, compared to the exponential performance degradation often experienced by spinning disks. The Violin Flash Memory Arrays used in Force 3's reference architecture can provide up to one million IOPS and 32TB of SLC NAND storage in a 3U unit.

Flash arrays provide extraordinary performance but come with higher capital investment costs than traditional spinning disks. To minimize overall solution expenses, the Force 3 architecture employs several mechanisms to reduce flash array storage requirements. The key components of this storage optimization approach include:

- Utilization of VMware View Composer Linked Clones. Linked clones use writable snapshot technology to reduce the space required for a virtual machine (VM) deployment. Only changes to the master desktop image are stored on the linked clone storage.
- Placement of permanent user data on network storage via Folder Redirection and Persistent Disk. By placing user persistent disks and VM swap files on non-flash storage and redirecting user data via folder redirection, flash array storage requirements can be further reduced.
- Regular desktop refreshes. A high-performance flash array backend enables fast desktop refreshes, which clean out the linked clone storage.
- Disabling VM suspension. Disabling the virtual machine suspension feature removes the need to keep a suspend file for each VM on flash array storage.

4 Solution Components

This section describes the hardware and software components used by Force 3 in this reference architecture.

4.1 VMware View 5

VMware View 5 offers a comprehensive solution purpose-built for delivering desktops as a service. View reduces operational costs while increasing security and control for IT staff by simplifying the desktop provisioning and maintenance process.
By separating the desktop service from the physical hardware platform, end users gain freedom of access and an enhanced user experience, including mobility, centralized backup, and anywhere/anytime accessibility.

4.2 View PC-over-IP (PCoIP) Protocol

Built into VMware View 5 is the PCoIP remote display protocol. PCoIP is a highly efficient display technology built specifically for virtual desktop access. The Teradici PCoIP protocol dynamically optimizes end-user network connections, enabling the best desktop experience at any user location. Internal Force 3 testing showed that over a VPN connection, PCoIP provided an acceptable Flash video experience while RDP was subpar. In GUIMark2 Flash 10 testing, PCoIP provided 20x greater performance than RDP, and with a direct Internet connection it achieved performance similar to a physical desktop (80 percent of physical PC performance).

4.3 VMware View Composer with Linked Clones

One of the key technologies enabling the cost-effective flash array solution is VMware View Composer with linked clones. View Composer creates desktop images that share virtual disks with a base image, reducing storage capacity requirements by 50 to 90 percent. However, like any writable snapshot technology, a linked clone suffers from copy-on-first-write (COFW) penalties and generates additional I/O compared to a traditional thick-provisioned desktop. These I/O characteristics make linked clones more susceptible to storage I/O bottlenecks. By placing the base image and the space-optimized linked clones on high-performance flash array storage, the Force 3 architecture increases VDI performance at minimal cost compared to traditional disk-based storage technology. Force 3's architecture also places user data and the user persistent disk (containing user profile information) on separate file shares and non-flash array storage using Windows folder redirection and View persistent disk redirection. This separation of user data onto existing enterprise storage further reduces flash array space requirements for a VDI implementation.
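The COFW penalty mentioned above can be illustrated with a toy model. This is a sketch of copy-on-first-write overlays in general, not View Composer's actual on-disk format; the class and counters are invented for the example:

```python
# Toy copy-on-first-write (COFW) overlay. The first guest write to any
# block incurs an extra backend operation (populating the clone's private
# copy), which is why linked clones generate more I/O than thick desktops.

class LinkedClone:
    def __init__(self, base):
        self.base = base          # shared, read-only replica image
        self.delta = {}           # block index -> this clone's private copy
        self.backend_ops = 0      # backend operations issued so far

    def read(self, i):
        self.backend_ops += 1
        return self.delta.get(i, self.base[i])

    def write(self, i, data):
        if i not in self.delta:
            self.backend_ops += 1  # COFW: populate the private copy first
        self.delta[i] = data
        self.backend_ops += 1      # the guest write itself

base = [b"\x00" * 4096] * 4        # 4-block base image
clone = LinkedClone(base)
clone.write(0, b"x" * 4096)        # first write to block 0: 2 backend ops
clone.write(0, b"y" * 4096)        # rewrite of block 0: 1 backend op
print(clone.backend_ops)           # -> 3
```

Three backend operations for two guest writes, a 1.5x amplification in this toy run, is the kind of overhead the architecture absorbs by placing linked clones on flash.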

4.4 Cisco Unified Computing System (UCS)

Cisco UCS is a next-generation computing platform that unifies compute, network, and storage access. Optimized for virtual environments, UCS integrates a low-latency, lossless 10Gb unified network fabric that carries both network communication and storage data traffic.

4.5 Cisco UCS Manager

The UCS system is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain. Cisco UCS Manager can manage up to 320 servers with thousands of virtual machines in a single domain. This single-pane-of-glass management, combined with the simplicity of the unified fabric, accelerates the delivery of new services simply, reliably, and securely.

4.6 Cisco Unified Fabric

In addition to simplifying management, the Cisco Unified Fabric reduces cost for a VDI architecture that requires high-speed network communication and Fibre Channel (FC) access. Traditional blade servers require chassis-based 10G Ethernet and FC switches in every chassis. Because management and interfaces reside at the UCS Fabric Interconnect level, UCS does not require chassis-based switches, with corresponding cost reductions.

4.7 Violin Flash Memory Array Storage

Violin 6000 Series Flash Memory Arrays are all-silicon shared storage systems built from the ground up to harness the full power of flash memory, delivering industry-leading performance and ultra-low data access latencies. A single 3U array delivers up to one million IOPS with consistent, spike-free latencies measured in microseconds, a full order of magnitude lower than spinning disk-based storage solutions. The arrays scale to 32TB per unit, and all active components in a 6000 Series enclosure, from the memory gateways, power supplies, and array controllers down to the flash memory modules, have built-in, hardware-controlled redundancy.
The 6000 Series arrays connect natively to existing 8Gb/s Fibre Channel, 10GbE iSCSI, and 40Gb/s InfiniBand network infrastructures. They are managed by the Violin Memory Operating System (vMOS), which provides a simple GUI management interface for one or more enclosures. Provisioning storage for an application is simplified to the extreme: no RAID groups, storage silos, or storage tiers get in the way. New LUNs are automatically spread across the entire surface of the Flash Memory Fabric, Violin Memory's unique flash hardware ecosystem. The Flash Memory Fabric works with vMOS to enable reliable, highly available storage at the speed of memory, with several industry-leading benefits:

- Spike-free low latency. The Violin Flash Memory Fabric delivers spike-free, predictable latency that is 95 percent lower than hard disk drives and 70 percent lower than SSD (flash in an HDD form factor) and PCIe card solutions.
- High bandwidth. A single Violin Flash Memory Array supports over 4,000 flash devices and 500 independent flash interfaces. This provides the bandwidth needed for outstanding flash performance, with four times the bandwidth of other storage systems and 50 times that of most SSDs.
- Extreme reliability. All active components of the Flash Memory Fabric are hot-swappable for enterprise-grade reliability and serviceability.

5 Solution Architecture

The Force 3 reference architecture creates a building block of 500 Microsoft Windows 7 users. After the initial building block, additional users can be accommodated by expanding storage capacity and adding blades. Force 3's building block hosts 500 Windows 7 VMs on an 8-node server cluster. Extra server capacity is built in to handle peak processing and server failure situations. By sizing the system capacity and storage performance to handle peak workloads, a consistent user experience can be maintained even in a degraded situation such as a node failure.

For flash array storage calculations, each linked clone is allowed up to 3GB of growth space plus the Windows paging file space. For Windows 7 VMs with a 1GB page file, 4GB of space per linked clone is required. 2TB of flash array storage, which includes capacity for UCS boot-from-SAN, is allocated per 500 users. Flash array space utilization can be further optimized via thin provisioning, a VMware vSphere feature that allocates physical storage space only as it is actually used. An additional storage consideration is SCSI reservation locking; because the Violin array supports VAAI, SCSI lock contention is minimized.

6 Performance Validation

To characterize the behavior of the VDI environment under heavy I/O workloads, Force 3 performed storage stress testing on a VDI system built to this architecture. The stress testing was performed at the Force 3 Customer Innovation Center.

6.1 Force 3 Customer Innovation Center

The Force 3 Customer Innovation Center is a hands-on, multi-technology lab environment where a team of Force 3 engineers, collaborating with vendor and customer partners, develops and rigorously tests solutions designed to meet an organization's specific objectives and goals. The Customer Innovation Center was used in the testing phase of the VDI solution to verify the criteria and benchmarks set forth in the reference architecture.
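The flash-capacity arithmetic from Section 5 can be double-checked with a short script (decimal GB/TB units are assumed for simplicity; the input figures are the paper's own):

```python
# Flash capacity for one 500-user building block (Section 5 figures).

GROWTH_GB_PER_CLONE = 3    # linked-clone growth allowance per desktop
PAGEFILE_GB = 1            # Windows 7 page file per desktop
USERS = 500

per_clone_gb = GROWTH_GB_PER_CLONE + PAGEFILE_GB   # 4 GB per linked clone
clone_total_tb = per_clone_gb * USERS / 1000       # 2.0 TB for 500 users

print(f"{per_clone_gb} GB per clone -> {clone_total_tb} TB per {USERS} users")
```

Note that 4 GB x 500 users already equals the full 2TB allocation, which suggests the UCS boot-from-SAN LUNs are expected to fit inside the same allowance via thin provisioning.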

6.2 Test Topology

Due to limited hardware resources, the test environment had less server processing power and memory than specified by the reference architecture. The test results can therefore be viewed as worst-case performance for the reference architecture.

6.3 Test Hardware Resources

Table 1 lists the hardware resources used in the performance validation tests.

Table 1. Performance Validation Hardware Resources

Equipment | Quantity | Configuration
Desktop image | 1 | Windows 7 SP1 64-bit; 2 vCPU; 1GB RAM; 20GB HD; Symantec Endpoint Protection v11
Violin Memory 6616 Array | 1 | 16TB raw capacity; 10TB usable capacity
Cisco Unified Computing System | 1 | 1x Cisco UCS 5100 chassis with FC expansion; 2x Cisco UCS 6120XP Fabric Interconnects; 5x B200 M1 blades (2x X5570 CPUs @ 2.93GHz, 4 cores/CPU; 48GB RAM; M71KR-Q converged network adapter); 2x B200 M1 blades (same configuration); 1x B250 M2 blade (2x X5650 CPUs @ 2.67GHz, 6 cores/CPU; 192GB RAM; M71KR-Q CNA)
Fibre Channel switches | 2 | QLogic 5802 with 8x 8Gb/s ports
Enterprise network switch | 1 (2 recommended for redundancy) | 32-port Gigabit Ethernet switch; Nexus 5K 10Gb switch

6.4 Software Resources

Table 2 lists the software resources used for performance validation.

Table 2. Performance Validation Software Resources

Software | Version
Microsoft Windows | Windows 7 Enterprise SP1 64-bit
VMware vSphere | ESXi 5.0 Update 1, Build 260247
VMware View | 5.1.1, Build 799444

6.5 Test Procedure

This section describes the processes and procedures used to validate the performance of the Force 3 VDI reference architecture.

6.5.1 Boot Time for 500 Virtual Machines

The boot time test measured how long it took for a virtual machine to register as available in the VMware Connection Broker, meaning that the machine was ready to be used. The measurement started when the power-on command was sent to all virtual machines and ended when the last virtual machine was marked as available in the Connection Broker. The primary question was: how long does it take for all 500 virtual machines to boot? The test was also designed to answer several performance questions about the period when all 500 machines are booting at the same time (a boot storm):

- What does host CPU utilization look like during a boot storm?
- How much load is placed on the storage system during a boot storm?

6.5.2 Provisioning Time for 500 Virtual Machines

Provisioning time was measured from when the virtual machine pool was created to when the last virtual machine was marked as available in VMware View. The primary questions for this test were: How long does it take to provision 500 virtual machines using the design specified in the reference architecture? What does the storage load look like during provisioning? The test also provided performance data for CPU and storage system utilization while all 500 virtual machines were being provisioned at the same time.

6.5.3 Antivirus Definition Update and Scan for 500 Virtual Machines

Antivirus definition update and scan is often the most performance-intensive operation in a VDI environment. Because of the nature of AV scans, storage performance becomes the biggest bottleneck in VDI, and best practice often suggests staggering AV scans to avoid storage performance degradation. Unfortunately, in a typical environment it is not easy to deviate from or change the Information Assurance policy. This test performed a Symantec Endpoint Protection (SEP) definition update and scan on all 500 VDI VMs simultaneously, and provided performance data on storage system utilization during the update and scan.

6.6 Performance Validation Results

This section describes the results of the performance validation testing.

6.6.1 Boot Time Test Results

It took only 8 minutes to boot 500 virtual machines from power-on to Available in the VMware Connection Broker. Testing showed the following host performance characteristics during the boot storm:

- CPU utilization averaged 22 percent; however, CPU utilization peaked at 100 percent for the first 5 minutes of the boot storm. This indicates that CPU will become the performance bottleneck once the storage performance barrier is removed. With the faster processors specified in the reference architecture, there is additional headroom for boot time reduction.
- Violin flash storage I/O latency stayed between 0 and 686 microseconds, never reaching 1 millisecond.
- Violin storage throughput reached 1.3GB/s, which translates to 325,000 IOPS at a 4K block size.

Figure 1. CPU Utilization during a Boot Storm
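Throughout these results, measured bandwidth is converted to IOPS by dividing by the 4KB test block size. A quick sanity check of that arithmetic (decimal units assumed, matching the paper's figures):

```python
# Convert array throughput (GB/s) to IOPS at a fixed block size.

def gbps_to_iops(gb_per_s, block_kb=4):
    # 1 GB/s = 1,000,000 KB/s in the paper's decimal arithmetic
    return round(gb_per_s * 1_000_000 / block_kb)

print(gbps_to_iops(1.3))   # -> 325000, the boot-storm figure above
```

The same formula reproduces the provisioning and antivirus throughput-to-IOPS conversions reported in the following sections.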

Figure 2. Violin Array Latency and Bandwidth during a Boot Storm

6.6.2 Provisioning Time Test Results

It took 45 minutes to provision all 500 virtual machines, or just over 11 virtual machines per minute. Testing showed the following host performance characteristics while all 500 machines were being provisioned:

- CPU utilization averaged 69 percent, and the CPU stayed at 100 percent for a significant portion of the provisioning process. As in the boot storm test, the Violin flash array removed storage as the performance barrier and the CPU became the bottleneck.
- Flash storage I/O latency stayed between 0 and 4.5 milliseconds.
- Flash storage throughput stayed between 1.5GB/s and 2.1GB/s, which translates to 375,000 to 525,000 IOPS at a 4KB block size.

Figure 3. CPU Utilization during 500 VM Provisioning

Figure 5. Violin Flash Array Latency and Bandwidth during 500 VM Provisioning

6.6.3 AV Definition Update and Scan Time Test Results

The Symantec SEP AV definition update and scan across all 500 VMs took 7 minutes and 19 seconds. Testing showed the following storage performance characteristics during the update and scan:

- Flash storage I/O latency stayed between 0 and 527 microseconds.
- Storage throughput peaked at 1.2GB/s, which translates to 300,000 IOPS.

Figure 6. Symantec SEP Update and Scan

Figure 7. Violin Latency and Bandwidth during 500 VM AV Definition Update and Scan

6.7 Performance Characteristics

The Force 3 VDI architecture stress testing demonstrated the following performance characteristics:

- 45 minutes to provision 500 VMs
- 8 minutes to boot 500 desktop VMs to the ready state in the View Connection Broker
- Just over 7 minutes for an AV update and scan of 500 VMs
- Peak Violin flash array bandwidth of 2.1GB/s (525K IOPS)

The Force 3 VDI reference architecture segregates desktop I/O onto separate storage while keeping valuable persistent user data in an enterprise NAS or SAN via folder redirection and persistent disks. This approach balances storage performance against data value and overall system cost.

7 Conclusion

The Force 3 VDI reference architecture aims to provide end users with an experience similar to that of a traditional desktop while delivering an ROI model for agencies that exceeds that of a traditional desktop deployment or other VDI solutions. As the government IT landscape adapts to the changing needs of the federal worker, a VDI environment offers increased manageability, enhanced security and data protection, and new capabilities such as anywhere/anytime access for teleworkers.

About Violin Memory

Violin Memory is pioneering a new class of high-performance flash-based storage systems designed to bring storage performance in line with high-speed applications, servers, and networks. Violin Flash Memory Arrays are purpose-designed at each level of the system architecture, starting with memory and optimized through the array, to leverage the inherent capabilities of flash memory and meet the sustained high-performance requirements of business-critical applications, virtualized environments, and Big Data solutions in enterprise data centers. Designed for sustained performance with high reliability, Violin's Flash Memory Arrays can scale to hundreds of terabytes and millions of IOPS with low, predictable latency. Founded in 2005, Violin Memory is headquartered in Mountain View, California. For more information about Violin Memory products, visit the Violin Memory website.

2013 Violin Memory. All rights reserved. All other trademarks and copyrights are property of their respective owners. Information provided in this paper may be subject to change.