Optimizing Storage For VDI Deployments



Contents

Overview
Business Challenges
Force 3 Virtual Desktop Infrastructure
Storage Performance
Graphics and Multimedia Experience
Force 3 Reference Architecture
System Configuration
User Workload Profile
Solution Components
VMware View 4
View PC over IP Protocol
VMware View Composer with Linked Clones
Cisco Unified Computing System (UCS)
Cisco UCS Manager
Cisco Unified Fabric
FalconStor Network Storage Server (NSS) Storage Virtualization
Violin Flash Array Storage
Performance Validation
Force 3 Customer Innovation Center
Test Topology
Hardware Resources
Software Resources
Test Procedure
Results
Performance Characteristics
Use Case: The Federal Teleworker
Conclusion

Overview

Virtual Desktop Infrastructure (VDI) is poised to provide a new management and deployment model for end-user computing by centralizing physical desktops into virtualized desktop farms in the data center. The resulting environment offers increased manageability, enhanced security, and data protection, while providing new capabilities such as anywhere, anytime access for teleworkers. The risk, of course, lies in end-user performance expectations and the return on investment (ROI). Ideally, a VDI solution will provide an equal or better end-user experience than a physical desktop while keeping deployment costs at or below traditional desktop deployment costs. If these objectives are met, the significant operational cost savings will support an immediate migration toward VDI for the majority of organizations.

Business Challenges

The traditional model of desktop computing has been to deploy a thick-client PC, one machine per user. In this client-server compute model, the PC is a workhorse, with a dedicated CPU, hard drive, video card, display, and operating system. This approach allows for fewer servers in the data center, a very good multimedia experience, and flexibility. However, the thick-client setup makes operating system updates and patch management one of the most difficult aspects of a desktop solution. Software management, application hot fixes, and upgrades are necessary to address bugs, provide enhancements, and close security vulnerabilities. With traditional desktops, each PC has its own operating system and applications and must be updated individually, a large drain on IT staffing resources.

Force 3 Virtual Desktop Infrastructure Solution Overview

The Force 3 VDI reference architecture has two major goals:

- Provide a better end-user experience than physical desktops
- Provide direct cost savings compared to physical desktops

These goals are accomplished with two key components: storage performance and the graphics and multimedia experience.

Storage Performance

In a typical VDI deployment, storage input/output (I/O) becomes a major performance bottleneck, and therefore a major contributor to a negative end-user experience. By centralizing the desktop loads, VDI combines the I/O requirements of individual desktops into a centralized storage system. Although the average user I/O load may be only 5-8 IOPS (I/Os per second), peak load can range from 20 to 100 IOPS per user on a centralized platform. During peak I/O periods, such as morning login and evening logoff, the storage system must be able to handle the peak I/O for all users at the same time, with little effect on the end user.
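
As a rough illustration of how per-desktop I/O aggregates at the array, the following back-of-the-envelope sketch (Python) uses the per-user ranges quoted above and the 380-desktop building block described later in this paper; the figures are illustrative ranges, not measurements:

    # IOPS sizing sketch for one building block, using the ranges quoted above.
    users = 380                       # building-block size used later in this paper
    avg_low, avg_high = 5, 8          # steady-state IOPS per user
    peak_low, peak_high = 20, 100     # peak IOPS per user during login/logoff storms

    print(f"steady state: {users*avg_low:,} - {users*avg_high:,} IOPS")    # 1,900 - 3,040
    print(f"peak:         {users*peak_low:,} - {users*peak_high:,} IOPS")  # 7,600 - 38,000

Even at the low end of the peak range, the aggregate demand is several times the steady-state load, which is why this architecture sizes the storage tier for peak rather than average I/O.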

The Force 3 architecture solves the storage I/O issue by using a flash array as the storage for desktop VM OS images. A flash array can offer very high IOPS at low latency. Even under heavy load, a flash array provides consistent performance characteristics rather than the exponential performance degradation experienced by spinning disks. The flash array used in Force 3's architecture, which combines technology from FalconStor Software and Violin Memory, can provide up to 200,000 IOPS and 20 TB of SLC NAND storage in a 3U unit.

While it provides great performance, flash is inherently more expensive than traditional spinning disk. To optimize the solution cost, the Force 3 architecture employs several mechanisms to reduce the flash array space requirements. The key components of the storage optimization include:

- Utilization of VMware View Composer linked clones. VMware linked clones use writable snapshot technology to reduce the space required for a virtual machine deployment. Only changes to the master desktop image are stored on the linked clone storage.
- Placement of permanent user data on network storage via Folder Redirection and Persistent Disk. By placing the user persistent disk and VM swap file on non-flash-array storage and redirecting user data via folder redirection, the flash array storage requirement can be further reduced.
- Regular desktop refresh. A high-performance flash array back end enables fast desktop refreshes. Regular refreshes clean out the linked clone storage and reduce the SSD space requirement.
- Disabling VM suspension. Disabling the virtual machine suspension feature removes the requirement to keep a suspend file for each VM on flash array storage.

Graphics and Multimedia Experience

A remote display protocol that can deliver quality graphics and a multimedia experience is key to end-user acceptance of a VDI solution. VMware View PC over IP (PCoIP) delivers a rich user experience over IP networks with progressive build, bi-directional audio, and USB redirection in a completely new protocol.

Force 3 Reference Architecture

The Force 3 architecture creates a building block of 380 Microsoft Windows 7 users. After the initial building block, additional users can be accommodated by expanding storage capacity and server blades. Force 3's building block calls for hosting 380 Windows 7 VMs in an 8-node VM cluster. Extra capacity was built into the server sizing to handle peak processing and server failure situations. By sizing the system capacity and storage performance to handle peak load, a consistent user experience can be maintained even during a degraded situation such as a node failure.

For the flash array storage calculation, each linked clone is allowed up to 3 GB of growth space plus the Windows paging file space. For a Windows 7 VM with a 1 GB paging file, 4 GB of space per linked clone is required.
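
A quick sketch of that capacity math, using the per-clone allowance above (rounded, illustrative):

    # Flash capacity sizing sketch for one 380-user building block.
    users = 380
    growth_gb = 3           # linked clone growth allowance per desktop
    pagefile_gb = 1         # Windows 7 paging file per desktop
    per_clone_gb = growth_gb + pagefile_gb          # 4 GB per linked clone

    clone_space_tb = users * per_clone_gb / 1024
    print(f"linked clone space: {clone_space_tb:.2f} TB")   # ~1.48 TB

The 1.8 TB allocation quoted next also has to cover the master base images and the UCS boot-from-SAN space, which accounts for the headroom above the ~1.5 TB of linked clone space.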

1.8 TB of flash array storage space, which includes the UCS boot-from-SAN space, is allocated per 380 users. Flash array space utilization can be further improved via Thin Provisioning, a FalconStor technology feature that prevents the over-allocation of storage space.

An additional storage consideration is SCSI locking due to reservations. To reduce SCSI lock contention, a maximum of 40 VMs are placed in each datastore, giving 10 flash array datastores per VDI cluster. With Thin Provisioning, using multiple datastores has minimal impact on space utilization.

System Configuration

- Microsoft Windows 7 OS
  - 1.5 GB RAM per VM
  - 1 vCPU per VM
  - Folder Redirection of My Documents, Desktop, App Data, and Favorites
- VMware View Premier with Composer
  - Automated persistent pool with linked clone deployment
  - Separation of the user persistent disk onto a non-flash-array datastore
  - Placement of the VM swap file on a non-flash-array datastore
  - View PCoIP protocol for desktop connections
- Cisco UCS Blade Servers
  - Dual Cisco 6120 Fabric Interconnects
  - 8 Cisco UCS B200 M2 Blade Servers
    - 2 Xeon 5650 CPUs
    - 72 GB RAM
    - No internal HD; boot-from-SAN ESXi installation
    - 10G CNA
  - Each blade server can host 55 Windows 7 guests
- FalconStor NSS SAN Accelerator
  - 1.7 TB usable flash array space
  - Thin Provisioning
  - FalconStor SafeCache technology
  - FalconStor HotZone technology

The initial cost of the solution, including Cisco UCS, VMware View, and FalconStor NSS, ranges from $700/user for a 380-user system to $615/user for a 1,500-user system.

User Workload Profile

The Force 3 reference architecture has been developed to support the needs and activities of a typical office worker profile. While the PCoIP protocol provides robust graphics under typical office use, it lacks a 3D graphics card to support anything beyond that. If 3D graphics processing (e.g., 3D CAD) is desired, a dedicated workstation with a PCoIP Host Bus Adaptor (HBA) can be used to provide high-end 3D capability with the same end device and desktop management system.

Solution Components

VMware View 4

VMware View 4 offers a comprehensive solution purpose-built for delivering desktops as a service. View reduces operational costs while increasing security and control for IT staff by simplifying the desktop provisioning and maintenance process. By separating the desktop service from the physical hardware platform, end users gain freedom of access and an enhanced user experience, including mobility, centralized backup, and anywhere, anytime accessibility.

View PC over IP Protocol

Built into VMware View 4 is the PCoIP remote display protocol. PCoIP is a highly efficient display technology built specifically for virtual desktop access. The Teradici PCoIP protocol dynamically adapts to the end user's network connection, enabling the best possible desktop experience at any user location. Internal Force 3 testing showed that over a VPN connection, PCoIP provided an acceptable Flash video experience while RDP was subpar. PCoIP delivered 20x greater performance than RDP on the GUIMark2 Flash 10 test and performed comparably to a physical desktop with a direct Internet connection (80% of physical PC performance).

VMware View Composer with Linked Clones

One of the key technologies enabling a cost-effective flash array solution is VMware View Composer with linked clones. View Composer creates desktop images that share virtual disks with a base image, reducing storage capacity requirements by 50 to 90 percent. However, like any writable snapshot technology, linked clones suffer from a Copy-on-First-Write (COFW) penalty and generate additional I/O operations compared to full desktops. These I/O characteristics make linked clones more susceptible to storage I/O bottlenecks.
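
To make the 50-90 percent figure and the datastore layout concrete, here is a minimal sizing sketch (Python). The 30 GB master disk comes from the master VM specification later in this paper and the 4 GB per-clone allowance from the sizing above; the assumption of one full replica per datastore is illustrative, and actual savings depend on clone growth:

    import math

    # Rough linked-clone vs. full-clone space comparison for one building block.
    users = 380
    full_clone_gb = 30          # space if every desktop were a full copy of the master
    linked_clone_gb = 4         # growth + paging file allowance per linked clone
    vms_per_datastore = 40      # limit chosen to reduce SCSI reservation contention

    datastores = math.ceil(users / vms_per_datastore)        # 10 datastores
    replica_gb = datastores * full_clone_gb                  # one replica per datastore (assumed)
    full_total = users * full_clone_gb                       # 11,400 GB
    linked_total = users * linked_clone_gb + replica_gb      # 1,820 GB
    print(f"{datastores} datastores, space savings ~{1 - linked_total/full_total:.0%}")  # ~84%

Under these assumptions the savings land near the upper end of the 50-90 percent range quoted above.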

By placing the base image and the space-optimized linked clones on high-performance flash array storage, the Force 3 architecture increases VDI performance at minimal cost compared to traditional disk-based storage technology. Force 3's architecture also places user data and the user persistent disk (containing user profile information) onto a separate file share and non-flash-array storage using Windows folder redirection and View persistent disk redirection. Separating user data onto existing enterprise storage further reduces the flash array space required for a VDI implementation.

Cisco Unified Computing System (UCS)

Cisco Unified Computing System (UCS) is a next-generation computing platform that unifies compute, network, and storage access. Optimized for virtual environments, UCS integrates a low-latency, lossless 10G unified network fabric that combines network communication and storage data traffic into a single unified fabric.

Cisco UCS Manager

The UCS system is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain. Cisco UCS Manager can manage up to 320 servers with thousands of virtual machines in a single domain. Single-pane-of-glass management combined with the simplicity of a unified fabric accelerates the delivery of new services simply, reliably, and securely.

Cisco Unified Fabric

In addition to simplified management, Cisco Unified Fabric reduces cost for a VDI architecture that requires high-speed network communication and Fibre Channel (FC) access. Traditional blade servers require chassis-based 10G Ethernet and FC switches for every chassis. Because management and connectivity are handled at the UCS Fabric Interconnect level, UCS does not require chassis-based switches, with a corresponding cost reduction.

FalconStor Network Storage Server (NSS) Storage Virtualization

The FalconStor Network Storage Server (NSS) intelligent storage virtualization platform is built to address the storage and data protection needs of today's enterprises. Built on an open architecture and offering iSCSI and Fibre Channel (FC) connectivity to deliver high performance, the FalconStor NSS solution provides highly reliable, scalable, intelligent storage virtualization and application-aware data protection. FalconStor NSS technology is available as an enterprise-level software solution, as hardware including storage and gateway appliances, as virtual appliances for VMware environments, and as the FalconStor NSS SAN Accelerator, which enables application, virtualization, VDI, and global SAN acceleration for cost-effective adoption of solid-state technology.

High Performance

FalconStor NSS technology offers the performance characteristics required by highly demanding enterprise applications. Its native support for high-speed FC access and 10Gb iSCSI connectivity makes it the industry performance leader, providing superior data throughput and input/output operations per second (IOPS). The FalconStor NSS architecture is designed for end-to-end performance.

The hardware platform uses a Serial Attached SCSI (SAS) channel as the data exchange link between the controller and disks, providing data switching bandwidth of up to 72 Gbps for greatly increased transmission bandwidth, efficiency, and reliability. Features such as controller redundancy and intelligent load balancing further enhance overall system performance and greatly increase link reliability.

Optimized for VMware Environments

FalconStor NSS technology is optimized to enable efficient VMware server virtualization and integrates with VMware Infrastructure technology to provide optimal data services and continuous data and application availability. Application-aware FalconStor snapshot agents provide snapshots that are 100% transactionally consistent, enabling instant recovery with complete data integrity. The FalconStor Snapshot Director for VMware ensures the highest level of awareness and integration into all host and hosted components of a VMware deployment by coordinating data protection processes between the VMware ESX Server and the hosted operating systems and applications. Support for VMware Site Recovery Manager (SRM) facilitates automated failover of VMware environments between sites and simplifies VMware disaster recovery (DR) deployments. Physical-to-virtual (P2V) recovery is also supported, meaning that physical systems can be recovered or recreated as virtual machines for purposes such as testing or development work.

Violin Flash Array Storage

The Violin 3200 is a redundant, modular 3U memory array that scales from 1.3 to 10 TB of SLC NAND flash and provides the industry's best price/performance. It is the first in the Violin 3000 series of Memory Arrays, which scale to more than 140 TB in a rack with performance over 2 million IOPS. The enterprise-grade Violin 3200 includes hardware-based Flash RAID across hot-swappable memory modules to provide robust data protection and spike-free latency of less than 100 microseconds. The key technology components of the Violin 3200 are:

- Non-blocking Flash RAID: Flash without RAID is like a motorcyclist without a helmet. Flash dies and blocks can fail, and bit error rates are much higher than with HDD technology. While ECC algorithms provide sufficient data integrity for short-term testing, RAID is required for long-term data integrity and retention. Violin's Flash RAID guarantees that a flash memory erase (currently 2-10 ms) will never block a read or a write, and therefore provides significantly more predictable response times in an intermixed read/write workload than any other flash-based solution. While some flash solutions market their products as RAID protected, several use RAID within a module, not across modules. RAID across modules is essential to prevent any single point of failure from causing data loss.
- Distributed Flash Management: In other flash memory products, garbage collection is performed by a processor. However, that processor may already be busy with applications or other duties, such as RAID protection. Violin therefore implements distributed garbage collection in hardware. This enables the system to achieve over 200K sustained write IOPS with low latency. In comparison, most SSDs have a sustained write performance of between 1K and 5K IOPS.

Performance Validation

To characterize the behavior of the VDI environment under heavy I/O load, Force 3 performed storage stress testing on the Force 3 architecture-based VDI system. The stress testing was performed at the Force 3 Customer Innovation Center.

Force 3 Customer Innovation Center

The Force 3 Customer Innovation Center (CIC) is a hands-on, multi-technology lab environment where a team of Force 3 engineers, collaborating with vendor and customer partners, develops and rigorously tests solutions designed to meet an organization's specific objectives and goals. The Customer Innovation Center was used in the testing phase of the VDI solution to verify the criteria and benchmarks set forth in the reference architecture.

Test Topology

The test environment at the CIC has less memory capacity per blade (48 GB vs. 72 GB) than the reference architecture. The testing configuration was adjusted to accommodate this physical constraint of the Force 3 CIC lab.

Hardware Resources

- Desktop Image (1): Microsoft Windows 7 and Microsoft Windows XP images based on the federal desktop image
- Violin Memory Systems 3200 (1): 2.6 TB raw capacity, 1.7 TB usable capacity
- FalconStor NSS Gateway Appliance (1): dual quad-core Xeon E5520 @ 2.27 GHz, 16 GB RAM, 2x QLogic 2462 4Gb FC adapters
- Cisco Unified Computing System (1): 1x Cisco UCS 5100 chassis with FC expansion, 2x Cisco UCS 6100XP Fabric Interconnects, 6x B200 M1 blades with 2x Xeon 5570 CPUs @ 2.93 GHz, 48 GB RAM, and M71KR-Q Converged Network Adapter
- Fabric Switches (2): 1x Cisco MDS 9124 with 8x 4Gb ports, 1x Brocade 300E with 8x 8Gb ports
- Enterprise Network Switch (1 for this deployment; 2 recommended for redundancy): 32-port Gigabit Ethernet switch, Nexus 5K 10Gb switch

Software Resources

- Microsoft Windows 7: Enterprise 32-bit
- VMware vSphere ESXi: 4.1.0 Build 260247
- VMware View: 4.5
- FalconStor NSS: Enterprise 6.15 (6164)
- VMware Desktop RAWC Workload Simulator: 1.1.3
- IOMeter: 2006.07.27

Test Procedure

To provide the storage I/O stress, a separate Microsoft Windows 2003 virtual machine running IOMeter was used as the I/O load generator. While IOMeter was running against the flash array, the boot time of 320 Microsoft Windows 7 VMs on the same flash array was measured. Varying I/O loads were generated by increasing the number of outstanding I/Os per target in the IOMeter parameters. Boot time was measured from power-on to VM Tools ready, as indicated in the Virtual Center console.
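
The boot-time measurement is straightforward to script. The sketch below (Python with pyVmomi) is an illustration of the procedure, not the harness Force 3 used; the vCenter host name, credentials, and the "Win7-" naming pattern are placeholders, and the per-VM times are approximate because the VMs are polled sequentially:

    import ssl, time
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def seconds_until_tools_ready(vm, t0, timeout=1800):
        # Poll the guest until VMware Tools reports running; return elapsed seconds.
        while time.time() - t0 < timeout:
            if vm.guest.toolsRunningStatus == "guestToolsRunning":
                return time.time() - t0
            time.sleep(5)
        raise TimeoutError(vm.name)

    si = SmartConnect(host="vcenter.example.local", user="administrator",
                      pwd="********", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    desktops = [vm for vm in view.view if vm.name.startswith("Win7-")]  # placeholder naming

    t0 = time.time()
    for vm in desktops:
        vm.PowerOnVM_Task()          # power on the whole pool at once (boot storm)
    ready = [seconds_until_tools_ready(vm, t0) for vm in desktops]
    print(f"first VM ready after ~{min(ready):.0f}s; "
          f"all {len(desktops)} VMs ready after ~{max(ready):.0f}s")
    Disconnect(si)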

Microsoft Windows 7 Optimization

Optimizing the Microsoft Windows 7 master VM is important for maximizing the performance of desktop provisioning and operations. The Windows 7 build used for testing had the following VM specification and optimizations.

VM Specification:

- 1 vCPU
- 1 GB RAM (the recommendation is 1.5-2 GB; 1 GB was used due to the limited memory capacity of the test hardware)
- 1 LSI SAS virtual disk, 30 GB, thin provisioned
- 1 VMXNET 3 NIC
- Video card with 16 MB video RAM
- Windows 7 Enterprise

Installed Applications: Adobe Reader, Office 2007, Flash, Java

Disabled Services (a scripted example follows the workload profile below):

- Background Intelligent Transfer Service
- Bluetooth Support Service
- Disk Defragmenter
- Fax
- Internet Connection Sharing
- IP Helper
- Media Center Extender Service
- Offline Files
- Security Center
- SSDP Discovery
- Superfetch
- Windows Search
- Windows Update
- Windows Defender

VMware Workload Generator Profile:

- Microsoft Word
- Microsoft Excel
- Microsoft PowerPoint
- Internet Explorer
- Adobe Reader
- Playback of two 720p videos
- 7-Zip compression
- Java compile
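
As one way to apply the service changes above to the master image, here is a short sketch (Python calling the standard Windows sc utility, run as Administrator in the master VM). The short service names are the commonly used Windows 7 names and should be verified against the target build:

    import subprocess

    # Windows 7 short names for the services listed above (verify on the target image).
    SERVICES = [
        "BITS",          # Background Intelligent Transfer Service
        "bthserv",       # Bluetooth Support Service
        "defragsvc",     # Disk Defragmenter
        "Fax",
        "SharedAccess",  # Internet Connection Sharing
        "iphlpsvc",      # IP Helper
        "Mcx2Svc",       # Media Center Extender Service
        "CscService",    # Offline Files
        "wscsvc",        # Security Center
        "SSDPSRV",       # SSDP Discovery
        "SysMain",       # Superfetch
        "WSearch",       # Windows Search
        "wuauserv",      # Windows Update
        "WinDefend",     # Windows Defender
    ]

    for name in SERVICES:
        # "sc config <service> start= disabled" sets the startup type.
        subprocess.run(["sc", "config", name, "start=", "disabled"], check=False)
        subprocess.run(["sc", "stop", name], check=False)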

Results

On average, it took one minute to boot a single Windows 7 VM, and an average of five minutes and 30 seconds to boot all 320 VMs. Even with 70K IOPS driven through the back-end SSD array, the average boot time remained fairly static. The graph below shows that increasing the I/O load on the flash array does not impact Windows 7 boot times.

[Figure: 320 Windows 7 boot time vs. external I/O load on the storage system. Time to first VM Tools ready and to all 320 VMs Tools ready (with linear trend lines) plotted against I/O queue load from 0 to 400.]

One interesting observation from the testing is the impact of the Windows 7 VM boot on the back-end storage array. The I/O generator server was able to generate 100% random 4K I/O with a 50/50 read/write mix at 40K-70K IOPS. However, the I/O generator's throughput dropped by almost 80% during the 320-VM Windows boot process. It is important to note that even though the external I/O load did not impact the Windows desktop VM boot time, the reverse is not true. The implication is that if an enterprise SAN is used for both VDI and enterprise applications (databases, Microsoft Exchange, server VMs), the VDI boot process can exert heavy I/O pressure and starve other enterprise applications of I/O performance.
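
A quick illustration of that headroom effect, using the approximate figures reported above:

    # Rough effect of a 320-VM boot storm on a shared array, from the numbers above.
    baseline_iops = 70_000      # I/O generator alone (upper end of the 40K-70K range)
    observed_drop = 0.80        # ~80% throughput drop observed during the boot storm
    left_for_apps = baseline_iops * (1 - observed_drop)
    print(f"IOPS left for other workloads during the boot storm: ~{left_for_apps:,.0f}")  # ~14,000

This is the arithmetic behind the recommendation made later in this paper: desktop OS I/O belongs on its own storage tier rather than on the SAN that serves production applications.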

[Figure: I/O generator throughput vs. the VM boot process. IOPS (4K, 50/50 read/write, 100% random) plotted against I/O queue load, showing the generator's baseline rate and the reduced rate observed while the 320 desktops were booting.]

Performance Characteristics

The Force 3 VDI architecture stress testing demonstrated the following performance characteristics:

- Provisioning rate of 220 VMs per hour
- 2-hour recompose time for 320 VMs: 30 minutes to delete and 1 hour 30 minutes to reprovision
- 1 minute to boot a Microsoft Windows 7 VM
- 5 minutes and 30 seconds to boot 320 VMs to the VM Tools ready state
- 66K IOPS at 4 ms latency on the flash array (4K block size, 100% random, 50/50 read/write mix)

An additional benefit of the flash-based architecture is reduced provisioning time, which translates directly into better manageability. If desktop pool management can be completed in less than 2-3 hours, most pool management tasks can be performed during off hours on a workday rather than being limited to weekends. Faster provisioning also benefits operations such as emergency patch application and disaster recovery, minimizing user downtime. With parallel provisioning across multiple Composer instances, thousands of VMs can be provisioned in a few hours, fitting within a normal maintenance window.
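
A short sketch of the maintenance-window math implied above (pool sizes other than 320 and the instance counts are illustrative, and the fixed delete time is a simplification):

    # Recompose-window estimate from the measured rates above.
    delete_minutes = 30          # observed for the 320-VM pool (treated as fixed here)
    provision_rate = 220         # VMs per hour per Composer instance (measured)

    def recompose_hours(vms, composer_instances=1):
        """Estimated hours to delete and reprovision a pool of linked clones."""
        return delete_minutes / 60 + vms / (provision_rate * composer_instances)

    print(f"320 VMs, 1 instance:   ~{recompose_hours(320):.1f} h")       # ~2.0 h
    print(f"2000 VMs, 4 instances: ~{recompose_hours(2000, 4):.1f} h")   # ~2.8 h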

The external I/O load on the storage system did not significantly affect Microsoft Windows desktop boot performance. This implies that the flash array has enough performance headroom to absorb a very heavy peak I/O load without affecting the user experience. However, the Windows desktop I/O has a strong impact on the rest of the storage system's performance. It is therefore recommended to separate the desktop operating system I/O from the rest of the enterprise storage to limit the impact on critical enterprise applications. The Force 3 VDI reference architecture segregates the desktop I/O onto separate storage while keeping valuable persistent user data on an enterprise NAS or SAN via folder redirection and persistent disks. This approach optimizes storage performance against data value and overall system cost.

Use Case: The Federal Teleworker

Teleworking provides a multitude of benefits, including higher employee morale, improved employee recruitment and retention, and increased productivity. It is also a key component of continuity of operations (COOP) plans and an answer to the government's green data center initiatives. Difficulties in managing remote workstations, increased security risks with laptops, and IT funding for implementation have proven to be major barriers to government participation in teleworking. Virtual Desktop Infrastructure becomes an attractive technology platform for teleworking by providing the best security (the end device does not contain any user data), anytime, anywhere access to the desktop via network or VPN, and lower management, operational, and support costs.

The following teleworker architecture for remote user support is based on the Force 3 VDI reference architecture. It simplifies end-user support and management by using zero-client hardware and a pre-configured VPN router. Key components of the Force 3 teleworker architecture include:

- Cisco 871W Wireless VPN router for the teleworker's home
- Cisco ASA 55xx integrated security device
- CiscoWorks Network Compliance Manager
- Cisco Secure Access Control System

The Cisco 871W Wireless VPN Router can be pre-configured to create a VPN tunnel from the teleworker's home to a Cisco ASA device in the corporate data center. The teleworker does not need to perform any technical configuration or troubleshooting of the VPN device. Once the Cisco 871W router is connected to the teleworker's home network, the PCoIP zero client and any other network devices, such as an IP telephone, can be connected to the 871W via a wired Ethernet cable or the wireless network. This architecture provides a secure yet easy-to-support teleworker environment. Benefits of the architecture include:

- Security: The use of zero-client hardware means that valuable data never leaves the central data center. The PCoIP zero client does not store any data and is used for display purposes only. With View 4.5, the PCoIP protocol also supports smart card redirection, which is important for most Federal agencies. Additionally, the VPN tunnel can be limited to the specific TCP and UDP ports required for PCoIP and restricted to the VMware View Connection Broker and desktop VMs only.

- Ease of management: The use of a hardware VPN router and zero-client hardware minimizes support requirements. The only possible failure scenarios are a WAN failure and a hardware device failure. Support staff do not need to troubleshoot a user's home computer setup or VPN client software issues.

The Cisco 871W is a fixed-configuration SOHO router with an Ethernet uplink. With built-in hardware acceleration for IPsec VPN, integrated Wi-Fi, and a desktop form factor, it is ideal for teleworker applications. For user connectivity it provides four 10/100 Ethernet ports as well as 802.11b/g wireless access.

Management of the devices can be handled via CiscoWorks Network Compliance Manager (CWNCM), which allows for extreme scalability and rapid deployment of new devices. CWNCM continuously monitors network devices and looks for changes that do not comply with defined policies. It can be configured with role-based access controls, including an automated change request/approval system, so that IT staff maintain visibility and control of configuration changes on their network as they happen. This is especially important for the Cisco 871W, since an inconsistent wireless configuration can commonly lead to security failures. VPN access control can be managed and scaled through the use of Cisco's Secure Access Control System.

Conclusion

The Force 3 VDI reference architecture aims to provide end users with an experience similar to that of a physical desktop while providing agencies with an ROI model that exceeds that of a desktop deployment or other VDI solutions. As the government IT landscape adapts to the changing needs of the federal worker, a VDI environment offers increased manageability, enhanced security and data protection, and new capabilities such as anywhere, anytime access for teleworkers.

Copyright 2010 FalconStor Software. All Rights Reserved. FalconStor Software, FalconStor, SafeCache, and HotZone are trademarks or registered trademarks of FalconStor Software, Inc. in the United States and other countries. All other company and product names contained herein are trademarks of their respective holders.