5,100 PVS DESKTOPS ON XTREMIO
With XenDesktop 5.6 and XenServer 6.1
A Test Report
December 2013

ABSTRACT
This report documents the consistently low latency delivered by XtremIO under the load of 5,100 concurrent PVS desktops. XtremIO is an all-flash array with many unique capabilities that shift the paradigm in desktop virtualization with XenDesktop. The goal of this document is to help the reader understand both the configuration required and the high performance delivered by XtremIO in a large XenDesktop deployment.

Copyright 2013 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. EMC², EMC, the EMC logo, and the RSA logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. VMware is a registered trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners. Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. 11/13 White Paper
TABLE OF CONTENTS
ABSTRACT
TABLE OF CONTENTS
1. EXECUTIVE SUMMARY
2. OVERVIEW
  Intended Audience
  Test components
  Scalability Parameters
  Testing Tools
  Login VSI Test Orchestration and Workload Generation
  Test Setup and Configuration
  XenServer
  Host Software Configurations
  XenDesktop Site Configuration
  Virtual Desktops Configuration Details
  Configuration Details
  STAT Configuration Details
  Configuring XtremIO
3. TEST RESULTS
  3.1 Performance: End users at a local campus
    Highlights of XtremIO's performance
  3.2 Performance: End users at a remote branch office over a high-speed WAN
    Highlights of XtremIO's performance
  3.3 Performance: End users at a campus with low bandwidth
    Highlights of XtremIO's performance
  3.4 Performance: End users at a remote branch office
    Highlights of XtremIO's performance
4 INLINE DATA REDUCTION WITH XTREMIO
5 CONCLUSIONS
6 APPENDIX
  PHYSICAL SYSTEM CONFIGURATION
  VIRTUALIZED SYSTEM CONFIGURATION
  HOW TO LEARN MORE
  CONTACT US
1. EXECUTIVE SUMMARY
Citrix XenDesktop and EMC's all-flash XtremIO storage array are leading solutions for desktop virtualization and enterprise storage, respectively. Together they provide the best end-user experience and unprecedented ease of administration at the best $/desktop for any XenDesktop deployment. 5,100 virtual desktops provisioned on two XtremIO X-Bricks were rigorously tested, with the following results:

a) The total IO requirement of 5,100 concurrent knowledge-worker desktops never exceeded a small fraction of the total IO capability of the two X-Bricks. The results were remarkably similar for campus users accessing virtual desktops over a high-speed, low-latency campus LAN and for remote branch office users on a low-bandwidth, high-latency WAN link (both simulated).

b) The end-user experience was excellent throughout, with average storage response times below 1 millisecond. There was no evidence of any IO hiccups or write cliffs over months of testing.

c) Administrators simply configured XtremIO in three simple steps and left it running. During normal desktop operations most IOs to XtremIO were random writes, yet no cache sizing or tuning was required at any time.

d) The entire deployment occupied 66% less space thanks to XtremIO's inline data reduction technology, yielding the best $/desktop economics.

2. OVERVIEW
Intended Audience
It is assumed that readers have a basic understanding of Citrix XenDesktop, EMC XtremIO storage, and their functionality.

Test components
We configured the following environment to understand the performance and scalability of XtremIO with 5,100 concurrent XenDesktop users under two primary deployment conditions: a) a typical campus deployment, with employees at offices connecting to virtual desktops hosted at a nearby data center; and b) employees at remote branch offices (ROBO) accessing virtual desktops hosted in remote data centers. The ROBO tests included several variations of bandwidth and latency on the WAN link. Tests were performed using the Login VSI standard medium workload. All tests included the following common components:

Citrix products and components:
- XenDesktop: XD 5.6 FP1
- ICA client: Citrix Receiver 13.4
- Citrix Licensing Server: 11.1
- Web Interface: 5.4.2
- User Profile Management: 4.1

Environment subcomponents:
- Hypervisor: Citrix XenServer 6.1 HF28
- Authentication: Microsoft Windows Server 2008 R2 AD forest
- Database server(s): Microsoft SQL Server 2008 R2 SP2 Enterprise

Test environment components:
- Launch and orchestration tool: Login VSI 3.7
- Performance capture tool: STAT 12.12
- Client OS: Windows 7 x64 (2 vCPU, 2GB vRAM)
- Client launch method: VSI with a Python script via the Web Interface (WI)
- Client deployment method: Citrix Provisioning Server over XenServer Command Center
- Virtual desktop OS: Windows 7 x86 (1 vCPU, 1GB vRAM)
- Virtual desktop office suite: Microsoft Office 2010 Professional x86, SP1
- Virtual desktop deployment method: PVS
- User workload generation: Login VSI 3.7 (Medium)

Components:
- Apposite Netropy 10G2, version 2.1.1a3 (WAN simulator)

Storage components:
- 2 XtremIO X-Bricks connected via 8 x 10GbE iSCSI ports to the servers (link to technical datasheet online)

Hardware components:
- HP ProLiant BL460c G7, dual six-core Intel Xeon X5650 @ 2.67GHz, 192GB RAM
- HP ProLiant BL460c Gen8, dual eight-core Intel Xeon E5-2670 @ 2.6GHz, 192GB RAM

HP BL460c Gen8 blades were used as the VDA hypervisors, a single XtremIO storage array provided both VDA and infrastructure storage, and HP BL460c G7 blades hosted the infrastructure components.

Scalability Parameters
We conducted numerous test runs over the course of the project. All XenDesktop users were configured to run a VSI medium workload. The test runs consisted of the following scenarios:
- Full bandwidth (10Gbps) and no latency
- Full bandwidth and latency (50ms)
- Reduced bandwidth (622Mbps)
- Reduced bandwidth (622Mbps) and latency (50ms)

The average 5,100-user test run took approximately 250 minutes. Sessions launched at 85 per minute. After all users were logged on, the Login VSI workload ran for 60 minutes on all sessions before sessions started to be logged off.

Testing Tools
The main tools used during testing were STAT and Login VSI 3.7. STAT is a Citrix proprietary tool used to capture performance data from components under test. Login VSI is a publicly available tool from Login VSI Inc. that provides an in-session workload representative of a typical user, along with session launching and orchestration.

Login VSI Test Orchestration and Workload Generation
The primary test tool in the environment was Login VSI 3.7, consisting of the VSI Client Launcher, VSI Workload, VSI Console, and VSI Analyzer applications. We used Login VSI 3.7 as the test orchestration and workload generation system to create the load of multiple users accessing the XenDesktop environment and executing a typical user workflow within a session. To establish a user session, a Python script connected to the Web Interface, logged the user in, and launched an ICA session to the virtual desktop, which then initiated the Login VSI workload. In our test environment, Login VSI client launchers with Citrix Receiver created the user sessions to the XenDesktop virtual desktops. Login VSI provides an in-session workload representative of a typical user; we used the predefined medium workload, which simulates a knowledge worker. The test was configured so that each user executed Login VSI loops for 60 minutes after all users had logged on, with each loop lasting 12 to 15 minutes. The Login VSI Console and Analyzer were used to orchestrate the test launches and analyze the results of all executed test runs. For more information, see the Login VSI 3.x documentation available on the Login VSI website.
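The launch pacing is straightforward to reason about: 85 sessions per minute drains a 5,100-user queue in exactly 60 minutes. The PowerShell sketch below is purely illustrative of that pacing loop; it is not the actual Login VSI launcher, and Start-DesktopSession is a hypothetical placeholder for whatever mechanism starts a single ICA session.

    # Hypothetical pacing loop: launch 5,100 sessions at 85 sessions per minute.
    # Start-DesktopSession is a placeholder, not a real Citrix or Login VSI command.
    $totalSessions  = 5100
    $sessionsPerMin = 85
    for ($i = 1; $i -le $totalSessions; $i++) {
        Start-DesktopSession -User ("loginvsi{0:d4}" -f $i)   # placeholder launcher call
        Start-Sleep -Milliseconds (60000 / $sessionsPerMin)   # ~706 ms between launches
    }
    # 5,100 / 85 = 60 minutes of ramp-up, matching the test's launch phase.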
Test Setup and Configuration
This section provides details of the test environment, how it was set up, and what configuration changes were required to successfully push 5,100 desktops through a branch to the datacenter in the Solutions Lab test environment. Specific software configuration settings required to run and test against a large-scale environment are also included. The appendix contains specific hardware configuration details for each component in the environment.

XenServer Host Hardware Configuration
HP BL460c Gen8 blades with 192GB RAM and dual eight-core Intel Xeon E5-2670 CPUs @ 2.6GHz were utilized as the virtual desktop hypervisor hosts.

Host Software Configurations
The following items were installed or configured on the XenServer hosts:
- XenServer 6.1
- Dom0 memory: changed to a value of 4096MB RAM based on http://support.citrix.com/article/ctx134951
- Emulex drivers: updated based on http://support.citrix.com/article/ctx137633
- Storage repositories: each XenServer pool was configured to attach to a specific port on the XtremIO storage using the XenCenter SR Configuration wizard
- Hotfixes (XenServer hosts were kept current with the most recent hotfixes):
  XS61E001 - http://support.citrix.com/article/ctx13541
  XS61E003 - http://support.citrix.com/article/ctx135596
  XS61E004 - http://support.citrix.com/article/ctx135469
  XS61E006 - http://support.citrix.com/article/ctx13579
  XS61E009 - http://support.citrix.com/article/ctx136252
  XS61E010 - http://support.citrix.com/article/ctx136253
  XS61E012 - http://support.citrix.com/article/ctx136674
  XS61E013 - http://support.citrix.com/article/ctx136482
  XS61E014 - http://support.citrix.com/article/ctx136483
  XS61E015 - http://support.citrix.com/article/ctx137996
  XS61E017 - http://support.citrix.com/article/ctx137168
  XS61E018 - http://support.citrix.com/article/ctx137645
  XS61E019 - http://support.citrix.com/article/ctx137487
  XS61E020 - http://support.citrix.com/article/ctx137843
  XS61E022 - http://support.citrix.com/article/ctx137675
  XS61E024 - http://support.citrix.com/article/ctx13838
  XS61E026 - http://support.citrix.com/article/ctx138348

Host Configuration
Each hypervisor host was configured with a custom Virtual Connect server profile. The network in the server profile was configured as follows:

  NIC  Network     Flex-10 Port  Speed (Gbps)
  1    Management  A1            1
  2    STORAGE1    A1            4.5
  3    STORAGE2    B1            4.5
  4    GUEST       A1            4.5

Each XenServer pool was configured to use a specific storage interface: three of the pools used the STORAGE1 interface and the remaining three pools used STORAGE2.
XenServer Pool
The XenServer hosts were configured into six pools of eight hosts each. Each pool was configured with a shared storage repository using iSCSI to connect to the XtremIO storage unit, and each pool was assigned a 4.85TB share on the XtremIO array for virtual desktop storage.

XenDesktop Site Configuration
In this environment, two virtual XenDesktop 5.6 FP1 Desktop Delivery Controllers (DDCs) were deployed in a single XenDesktop site. The site was configured with a shared virtual SQL Server 2008 R2 database server to host the site database. For a production deployment, customers would typically ensure that the SQL Server is highly available through SQL mirroring or clustering.

The following key changes were made from the default installation for testing large environments and for the VSI logon/logoff test.

By default, pooled desktop groups reboot workstations after a user logs off (auto tainting). This feature was disabled via the XenDesktop SDK using the command below:

Set-BrokerDesktopGroup -Name <Desktop Group Name> -ShutDownDesktopsAfterUse $false

This setting was implemented because the rebooting of desktops at the end of the Login VSI test impacts the final measurements of desktops still completing their Login VSI loop. In a real deployment, a mass logoff of pooled desktops would not be typical. If customers have environments that would lead to mass logoffs, pooled desktops may not be suitable: the logoff storm, combined with an idle pool policy set to power on all the desktops, would lead to significant resource utilization on both the hypervisor hosts and the storage.

By default, pooled desktop groups have configurable buffers to increase resource efficiency; these buffers are 10% of the size of the desktop group. This feature was disabled (set to zero) via the XenDesktop SDK using the commands below, so it would not introduce any unnecessary load on the systems during test preparations:

Set-BrokerDesktopGroup -Name <Desktop Group Name> -OffPeakBufferSizePercent 0
Set-BrokerDesktopGroup -Name <Desktop Group Name> -PeakBufferSizePercent 0

The DDC has a single pooled-random catalog. Desktops from the catalog are assigned to a single pooled desktop group containing 5,100 regular virtual desktops. Advanced host details were set up to improve DDC-to-XenServer operation for large-scale deployment. There were a total of six XD host connections (one connection per XenServer pool), each configured as follows (a verification sketch for the broker settings above follows Figure 1):

Figure 1: XD Advanced Host Details
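A quick way to confirm that the broker changes above took effect is to read the settings back through the same SDK. The following is a minimal sketch, assuming the XenDesktop 5.x broker snap-in is available on a DDC; the desktop group name is a hypothetical example.

    # Read back the tuned broker settings (XenDesktop 5.x PowerShell SDK).
    Add-PSSnapin Citrix.Broker.Admin.V1

    Get-BrokerDesktopGroup -Name "XtremIO Pooled Desktops" |   # hypothetical group name
        Select-Object Name, ShutdownDesktopsAfterUse,
                      PeakBufferSizePercent, OffPeakBufferSizePercent
    # Expected after the changes above: False, 0, 0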
Virtual Desktops Configuration Details
The test environment consisted of 5,100 VDAs. A Windows 7 SP1 x86 Enterprise image was created with Microsoft Office 2010 Service Pack 1. The images were configured with a 4GB hard drive, 1 vCPU, and 1GB RAM; this configuration allowed us to run more virtual desktops per blade server without hitting hardware RAM limitations. The images were optimized for VDI performance according to:
http://community.citrix.com/display/ocb/21/1/15/optimizing+windows+7+for+flexcast+delivery

Some of the optimization settings include:
- Windows Update and Restore were disabled
- Windows Search was uninstalled
- Indexing Service was disabled
- The page file was configured to twice the RAM (2GB) and set to a static size

These optimizations follow current Citrix best practices, but they do not preclude customers from running desktops without them; XtremIO is easily able to support the IO requirements of desktops running without these optimizations.

Figure 2: XenDesktop Configuration

The write cache for each desktop was 4GB, configured on XtremIO.

Configuration Details
This environment contained the following network elements:
- HP switches
- Apposite Netropy (Linktropy), a WAN emulator

STAT Configuration Details
STAT was configured as follows:
- STAT agents collected Performance Monitor data every 7 seconds.
- STAT captured performance data from most of the Windows-based servers used in the test environment, including the Citrix infrastructure servers (DDCs, License Server, PVS servers), the SQL Server, the ICA clients, and the Active Directory domain controllers.
- STAT was also configured to gather performance data from the XenServer hosts.
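STAT itself is a proprietary Citrix tool, so its collector is not reproduced here. As a rough illustration of this style of collection, the PowerShell sketch below samples Performance Monitor counters on the same 7-second cadence using the built-in Get-Counter and Export-Counter cmdlets; the counter paths and output location are examples, not STAT's actual configuration.

    # Illustrative only: sample two Perfmon counters every 7 seconds (60 samples)
    # and save them to a binary Perfmon log. Assumes C:\perflogs already exists.
    $counters = '\Processor(_Total)\% Processor Time',
                '\Memory\Available MBytes'
    Get-Counter -Counter $counters -SampleInterval 7 -MaxSamples 60 |
        Export-Counter -Path 'C:\perflogs\infra01.blg' -FileFormat BLG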
The STAT system consisted of:
- STAT version 12.12
- STAT database on a dedicated SQL Server 2008 R2 server
- STAT MMC-based console
- STAT agents on servers to collect Performance Monitor data
- STAT agents to capture XenServer metrics
- STAT clients on the dedicated XenServer hypervisors (HP BL460c G7 and Gen8)
- STAT clients on the Provisioning Servers

Configuring XtremIO
XtremIO is configured in three easy steps and does not require RAID sizing or configuration of any kind. XtremIO data protection is custom-built for flash: it extends the longevity of the flash while ensuring very little capacity is wasted on parity overhead. There are no caches to worry about, despite the high skew of write IOs. A demonstration of the configuration steps is available at:
http://www.youtube.com/watch?v=pxvteeiivao&feature=youtu.be

3. TEST RESULTS
3.1 Performance: End users at a local campus
Full Bandwidth (10Gbps): Users at an enterprise campus accessing virtual desktops hosted at a local data center had the following traffic/IO characteristics.

Figure 3: Full Bandwidth (Login VSI graph). X axis: number of launched sessions; Y axis: response time in milliseconds.
Figure 4: Full Bandwidth (Netropy graph). X axis: time of day; Y axis: bandwidth in Mbps, plotted for each direction (Port 1 to Port 2 and Port 2 to Port 1 tx_mbps).

Figure 5: XtremIO IOPS and bandwidth plot with all 5,100 desktops running concurrently (series: write IOPS, read IOPS, write bandwidth, read bandwidth).
Figure 6: XtremIO management console bandwidth view (note the smooth test ramp-up).

Figure 7: XtremIO management console IOPS view (note the smooth test ramp-up).
Figure 8: XtremIO IOPS and bandwidth plot with all 5,100 desktops booting concurrently (series: write IOPS, read IOPS, write bandwidth, read bandwidth).

Highlights of XtremIO's performance: The total aggregate peak write bandwidth of 400MB/s to 500MB/s during boot storms and steady operations is only about 5% to 6% of the total write bandwidth supported by two X-Bricks, at less than 1 millisecond of latency per IO on average.
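Those two figures imply substantial headroom. The back-of-the-envelope check below (arithmetic derived from the numbers quoted above, not a vendor specification) shows the aggregate write bandwidth they imply for two X-Bricks:

    # Rough headroom check from the figures quoted above (illustrative arithmetic only).
    $peakWriteMBps = 500     # observed peak write bandwidth, MB/s
    $utilization   = 0.06    # quoted as roughly 5-6% of available write bandwidth
    $impliedTotal  = $peakWriteMBps / $utilization
    "Implied two-X-Brick write bandwidth: {0:n0} MB/s" -f $impliedTotal   # ~8,333 MB/s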
3.2 Performance: End users at a remote branch office over a high-speed WAN
Full Bandwidth (10Gbps) with Latency (50ms): Users at a branch office accessing virtual desktops hosted in a remote data center over a high-speed WAN link had the following traffic/IO characteristics.

Figure 9: Full Bandwidth with Latency (Login VSI graph). X axis: number of launched sessions; Y axis: response time in milliseconds.

Figure 10: Full Bandwidth with Latency (Netropy graph). X axis: time of day; Y axis: bandwidth in Mbps, plotted for each direction.

Figure 11: XtremIO IOPS and bandwidth plot with all 5,100 desktops running concurrently.

Highlights of XtremIO's performance: The total aggregate peak write bandwidth of 400MB/s to 500MB/s during steady operations and boot storms is only about 5% to 6% of the total write bandwidth supported by two X-Bricks, at less than 1 millisecond of latency per IO on average.
Note: The boot storm plot resembles the boot storms of the other test scenarios and is not shown here for the sake of brevity.

3.3 Performance: End users at a campus with low bandwidth
Reduced Bandwidth (622Mbps): Users at a campus accessing virtual desktops hosted in a local data center over a low-speed link had the following traffic/IO characteristics.

Figure 12: Reduced Bandwidth (Login VSI graph). X axis: number of launched sessions; Y axis: response time in milliseconds.

Figure 13: Reduced Bandwidth (Netropy graph). X axis: time of day; Y axis: bandwidth in Mbps, plotted for each direction.
Figure 14: XtremIO IOPS and bandwidth plot with all 5,100 desktops running concurrently.

Figure 15: XtremIO IOPS and bandwidth plot with all 5,100 desktops booting concurrently.
Highlights of XtremIO's performance: The total aggregate peak write bandwidth of 400MB/s to 500MB/s during steady operations and boot storms is only about 5% to 6% of the total write bandwidth supported by two X-Bricks, at less than 1 millisecond of latency per IO on average.

3.4 Performance: End users at a remote branch office
Reduced Bandwidth (622Mbps) with Latency (50ms): Users at a branch office accessing virtual desktops hosted in a remote data center over a low-bandwidth, high-latency link had the following traffic/IO characteristics.

Figure 16: Reduced Bandwidth with Latency (Login VSI graph). X axis: number of launched sessions; Y axis: response time in milliseconds.

Figure 17: Reduced Bandwidth with Latency (Netropy graph). X axis: time of day; Y axis: bandwidth in Mbps, plotted for each direction.
Figure 18: XtremIO IOPS and bandwidth plot with all 5,100 desktops running concurrently.

Highlights of XtremIO's performance: The total aggregate peak write bandwidth of 400MB/s to 500MB/s during steady operations is only about 5% to 6% of the total write bandwidth supported by two X-Bricks, at less than 1 millisecond of latency per IO on average.

Note: The boot storm plot resembles the boot storms of the other test scenarios and is not shown here for the sake of brevity.

4 INLINE DATA REDUCTION WITH XTREMIO
As shown in the XMS screen capture, XtremIO delivered data reduction of about 3:1, resulting in large space savings. XtremIO reduces all data inline, before writing to the SSDs, for the optimal storage footprint in customers' XenDesktop deployments. There is no post-process garbage collection or post-process data reduction to cause unpredictable performance drops, as demonstrated in the earlier performance sections.
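The 3:1 ratio lines up with the "66% less space" claim in the executive summary. The short sketch below sanity-checks the numbers using the write-cache sizing from this report; it is illustrative arithmetic, not telemetry from the array.

    # Illustrative: 5,100 desktops x 4GB PVS write cache, reduced roughly 3:1 inline.
    $desktops       = 5100
    $writeCacheGB   = 4
    $reductionRatio = 3.0
    $logicalTB  = $desktops * $writeCacheGB / 1024    # ~19.9 TB provisioned
    $physicalTB = $logicalTB / $reductionRatio        # ~6.6 TB actually stored
    $savings    = 1 - (1 / $reductionRatio)           # ~0.67, i.e. the "66% less space" claim
    "Logical: {0:n1} TB  Physical: {1:n1} TB  Savings: {2:p0}" -f $logicalTB, $physicalTB, $savings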
5 CONCLUSIONS
XtremIO delivers a much faster and more enjoyable user experience to XenDesktop end users at a lower $/desktop than traditional storage or other flash arrays. By leveraging industry-leading innovations such as high-performance inline data reduction and flash-optimized data protection, XtremIO radically lowers the $/desktop while bringing high performance to PVS XenDesktops. It provides unprecedented simplicity and acceleration for ongoing administrative activities such as desktop rollout and maintenance, and enterprises gain the ability to deploy full clones with the efficiency of linked clones. With XtremIO, Citrix XenDesktop has no limits: XenDesktop projects will roll out faster and with assured success.

6 APPENDIX
PHYSICAL SYSTEM CONFIGURATION

Domain Active Directory (2 in DC)
- Hardware model: HP BL460c G7
- CPU: dual six-core Intel Xeon X5650 @ 2.67GHz
- RAM: 192GB
- Storage: 136GB
- NIC: 1 x 1Gbps
- OS: Microsoft Windows Server 2008 R2 Enterprise, SP1, x64
- Misc.: domain functional level: Windows Server 2008 R2

STAT Console Server
- Hardware model: HP DL360 G7
- CPU: dual Intel Xeon X5670 @ 2.93GHz
- RAM: 144GB
- Storage: 6 x 146GB SAS (RAID 1)
- NIC: 2 x 1Gbps
- OS: Microsoft Windows Server 2008 R2 Enterprise, SP1, x64

STAT SQL Server
- Hardware model: HP DL360 G7
- CPU: dual Intel Xeon X5670 @ 2.93GHz
- RAM: 144GB
- Storage: 6 x 146GB SAS (RAID 1)
- NIC: 2 x 1Gbps
- OS: Microsoft Windows Server 2008 R2 Enterprise, SP1, x64
- Software: SQL Server 2008 R2 Enterprise, x64

VDA Hypervisor Host (x48)
- Hardware model: HP BL460c Gen8
- CPU: dual eight-core Intel Xeon E5-2670 @ 2.6GHz
- RAM: 192GB
- Storage: 2 x 73GB 6Gb SAS 15K RPM SFF HDD (RAID 1)
- Misc.: 8 x HP FlexFabric 10Gb 2-port 554FLB adapters
- OS: Citrix XenServer 6.1, Dom0 memory increased to 4GB

Client Hypervisor Host (x32)
- Hardware model: HP BL460c G7
- CPU: dual six-core Intel Xeon X5650 @ 2.67GHz
- RAM: 192GB
- Storage: 2 x 500GB SATA (RAID 1)
- Misc.: 8 x HP FlexFabric embedded
- OS: Citrix XenServer 6.1, Dom0 memory increased to 4GB

Infrastructure Hypervisor Host
- Hardware model: HP BL460c G7
- CPU: dual six-core Intel Xeon X5650 @ 2.67GHz
- RAM: 192GB
- Storage: 6 x 146GB SAS (RAID 1)
- Misc.: 8 x HP FlexFabric embedded
- OS: Citrix XenServer 6.1, Dom0 memory increased to 4GB

Storage (for all virtual desktops, the entire Login VSI infrastructure, and all virtual machines)
- Hardware model: EMC XtremIO all-flash array
- Quantity: 2 X-Bricks
- Storage: 25 x 400GB MLC drives in each X-Brick

VIRTUALIZED SYSTEM CONFIGURATION

Domain Active Directory (2 in Branch)
- Hardware model: XenServer 6.1 virtual machine
- CPU: 2 vCPUs
- RAM: 2GB
- Storage: 60GB on an iSCSI shared-volume storage repository
- NIC: 1 x 1Gbps vNIC
- OS: Microsoft Windows Server 2008 R2 Enterprise, SP1, x64
- Misc.: domain functional level: Windows Server 2008 R2
XenDesktop Desktop Delivery Controller (x2)
- Hardware model: XenServer 6.1 virtual machine
- CPU: 4 vCPUs
- RAM: 8GB
- Storage: 60GB on an iSCSI shared-volume storage repository
- NIC: 1 x 1Gbps vNIC
- OS: Microsoft Windows Server 2008 R2 Enterprise, SP1, x64

Citrix SQL Server
- Hardware model: XenServer 6.1 virtual machine
- CPU: 4 vCPUs
- RAM: 12GB
- Storage: 60GB on an iSCSI shared-volume storage repository; 122GB iSCSI shared storage (XtremIO)
- NIC: 3 x 1Gbps (1 used)
- OS: Microsoft Windows Server 2008 R2 Enterprise, SP1, x64
- Software: SQL Server 2008 R2 Enterprise, x64
- Misc.: hosts the XenDesktop database, the PVS CL database, and the PVS VDA databases (Chassis 1, Chassis 2, and Chassis 3)

Citrix Virtual Desktop VM Template (one per XenServer pool)
- CPU: 1 vCPU
- RAM: 1GB
- Storage: 4GB on an iSCSI shared-volume storage repository
- NIC: 1Gbps vNIC
- OS: Microsoft Windows 7 x86 SP1

Provisioning Server (x11)
- Hardware model: XenServer virtual machine
- CPU: 4 vCPUs
- RAM: 16GB
- Storage: 60GB on an iSCSI shared-volume storage repository
- NIC: 2 x 1Gbps
- OS: Microsoft Windows Server 2008 R2 Enterprise, SP1, x64
- Software: PVS 6.1

Citrix File Server
- Hardware model: XenServer virtual machine
- CPU: 2 vCPUs
- RAM: 8GB
- Storage: 60GB on an iSCSI shared-volume storage repository; 2TB on an iSCSI shared storage repository (UPM); 55GB on an iSCSI shared storage repository (CL PVS store); 55GB (Chassis 1 PVS store); 55GB (Chassis 2 PVS store); 55GB (Chassis 3 PVS store)
- NIC: 3 x 1Gbps
- OS: Microsoft Windows Server 2008 R2 Enterprise, SP1, x64

Citrix Virtual Desktop VM (x5,100)
- Hardware model: XenServer virtual machine (streamed via PVS)
- CPU: 1 vCPU
- RAM: 1GB
- Storage: 4GB write cache on XtremIO (see Virtual Desktops Configuration Details)
- NIC: 1Gbps vNIC
- OS: Microsoft Windows 7 x86 SP1
- Software:
  Citrix Virtual Desktop Agent 5.6
  PVS Target Device 6.1.16
  Citrix Profile Management 4.1.1.5
  VSI 3.7
  Microsoft Office 2010
  Microsoft .NET Framework Client Profile 4.0.30319

HOW TO LEARN MORE
For a detailed presentation explaining XtremIO's storage array capabilities and how it substantially improves performance, operational efficiency, ease-of-use, and total cost of ownership, please contact XtremIO at info@xtremio.com. We will schedule a private briefing in person or via web meeting. XtremIO has benefits in many environments, but is particularly effective for virtual server, virtual desktop, and database applications.

EMC², EMC, the EMC logo, XtremIO and the XtremIO logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. VMware is a registered trademark of VMware, Inc., in the United States and other jurisdictions. Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. 2/13 EMC White Paper

CONTACT US
To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, or visit us at www.emc.com.

EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.