Reference Architecture: Lenovo Client Virtualization with Citrix XenDesktop


Last update: 24 June 2015
Version 1.2

Reference Architecture for Citrix XenDesktop:
- Contains performance data and sizing recommendations for servers, storage, and networking
- Describes a variety of storage models, including SAN storage and hyper-converged systems
- Contains a detailed bill of materials for servers, storage, networking, and racks

Authors: Mike Perks, Kenny Bain, Pawan Sharma

Table of Contents

1 Introduction
2 Architectural overview
3 Component model
  3.1 Citrix XenDesktop provisioning
    3.1.1 Provisioning Services
    3.1.2 Machine Creation Services
  3.2 Storage model
  3.3 Atlantis Computing
    3.3.1 Atlantis hyper-converged volume
    3.3.2 Atlantis simple hybrid volume
    3.3.3 Atlantis simple in-memory volume
4 Operational model
  4.1 Operational model scenarios
    4.1.1 Enterprise operational model
    4.1.2 SMB operational model
    4.1.3 Hyper-converged operational model
  4.2 Hypervisor support
    4.2.1 VMware ESXi
    4.2.2 Citrix XenServer
    4.2.3 Microsoft Hyper-V
  4.3 Compute servers for virtual desktops
    4.3.1 Intel Xeon E5-2600 v3 processor family servers
    4.3.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX
  4.4 Compute servers for hosted shared desktops
    4.4.1 Intel Xeon E5-2600 v3 processor family servers
  4.5 Compute servers for SMB virtual desktops
    4.5.1 Intel Xeon E5-2600 v3 processor family servers with local storage
  4.6 Compute servers for hyper-converged systems
    4.6.1 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX
  4.7 Graphics Acceleration
  4.8 Management servers

  4.9 Systems management
  4.10 Shared storage
    IBM Storwize V7000 and IBM Storwize V3700 storage
    IBM FlashSystem 840 with Atlantis USX storage acceleration
  4.11 Networking
    10 GbE networking
    10 GbE FCoE networking
    Fibre Channel networking
    GbE administration networking
  4.12 Racks
5 Deployment models
  Deployment example 1: Flex Solution with single Flex System chassis
  Deployment example 2: Flex System with 4500 stateless users
  Deployment example 3: System x server with Storwize V7000 and FCoE
  Deployment example 4: Hyper-converged using System x servers
  Deployment example 5: Graphics acceleration for 120 CAD users
Appendix: Bill of materials
  BOM for enterprise compute servers
  BOM for hosted desktops
  BOM for SMB compute servers
  BOM for hyper-converged compute servers
  BOM for enterprise and SMB management servers
  BOM for shared storage
  BOM for networking
  BOM for racks
  BOM for Flex System chassis
  BOM for OEM storage hardware
  BOM for OEM networking hardware
Resources
Document history

1 Introduction

The intended audience for this document is technical IT architects, system administrators, and managers who are interested in server-based desktop virtualization and server-based computing (terminal services or application virtualization) that uses Citrix XenDesktop. In this document, the term client virtualization is used to refer to all of these variations. Compare this term to server virtualization, which refers to the virtualization of server-based business logic and databases.

This document describes the reference architecture for Citrix XenDesktop 7.6 and also supports the previous versions of Citrix XenDesktop 5.6, 7.0, and 7.1. This document should be read with the Lenovo Client Virtualization (LCV) base reference architecture document that is available at this website: lenovopress.com/tips1275

The business problem, business value, requirements, and hardware details are described in the LCV base reference architecture document and are not repeated here for brevity.

This document gives an architecture overview and logical component model of Citrix XenDesktop. The document also provides the operational model of Citrix XenDesktop by combining Lenovo hardware platforms such as Flex System, System x, NeXtScale System, and RackSwitch networking with OEM hardware and software such as IBM Storwize and FlashSystem storage and Atlantis Computing software. The operational model presents performance benchmark measurements and discussion, sizing guidance, and some example deployment models. The last section contains detailed bill of materials configurations for each piece of hardware.

2 Architectural overview

Figure 1 shows all of the main features of the Lenovo Client Virtualization reference architecture with Citrix XenDesktop. This reference architecture does not address the issues of remote access and authorization, data traffic reduction, traffic monitoring, and general issues of multi-site deployment and network management. This document limits the description to the components that are inside the customer's intranet.

Figure 1: LCV reference architecture with Citrix XenDesktop

3 Component model

Figure 2 is a layered view of the LCV solution that is mapped to the Citrix XenDesktop virtualization infrastructure.

Figure 2: Component model with Citrix XenDesktop

Citrix XenDesktop features the following main components:

Desktop Studio: Desktop Studio is the main administrator GUI for Citrix XenDesktop. It is used to configure and manage all of the main entities, including servers, desktop pools and provisioning, policy, and licensing.

Web Interface: The Web Interface provides the user interface to the XenDesktop environment. The Web Interface brokers user authentication, enumerates the available desktops and, upon start, delivers a .ica file to the Citrix Receiver on the user's local device to start a connection. The Independent Computing Architecture (ICA) file contains configuration information for the Citrix Receiver to communicate with the virtual desktop. Because the Web Interface is a critical component, redundant servers must be available to provide fault tolerance.

Delivery Controller: The Delivery Controller is responsible for maintaining the proper level of idle desktops to allow for instantaneous connections, monitoring the state of online and connected desktops, and shutting down desktops as needed. A XenDesktop farm is a larger grouping of virtual machine servers. Each Delivery Controller in the XenDesktop farm acts as an XML server that is responsible for brokering user authentication, resource enumeration, and desktop starting. Because a failure in the XML service results in users being unable to start their desktops, it is recommended that you configure multiple controllers per farm.

PVS and MCS: Provisioning Services (PVS) is used to provision stateless desktops at a large scale. Machine Creation Services (MCS) is used to provision dedicated or stateless desktops in a quick and integrated manner. For more information, see the Citrix XenDesktop provisioning section on page 5.

License Server: The Citrix License Server is responsible for managing the licenses for all XenDesktop components. XenDesktop has a 30-day grace period that allows the system to function normally for 30 days if the license server becomes unavailable. This grace period offsets the complexity of otherwise building redundancy into the license server.

XenDesktop SQL Server: Each Citrix XenDesktop site requires an SQL Server database that is called the data store, which is used to centralize farm configuration information and transaction logs. The data store maintains all static and dynamic information about the XenDesktop environment. Because the XenDesktop SQL server is a critical component, redundant servers must be available to provide fault tolerance.

vCenter Server: By using a single console, vCenter Server provides centralized management of the virtual machines (VMs) for the VMware ESXi hypervisor. VMware vCenter can be used to perform live migration (called VMware vMotion), which allows a running VM to be moved from one physical server to another without downtime. Redundancy for vCenter Server is achieved through VMware high availability (HA). The vCenter Server also contains a licensing server for VMware ESXi.

vCenter SQL Server: vCenter Server for the VMware ESXi hypervisor requires an SQL database. The vCenter SQL server might be Microsoft Data Engine (MSDE), Oracle, or SQL Server. Because the vCenter SQL server is a critical component, redundant servers must be available to provide fault tolerance. Customer SQL databases (including respective redundancy) can be used.

Client devices: Citrix XenDesktop supports a broad set of devices and all major device operating platforms, including Apple iOS, Google Android, and Google Chrome OS. XenDesktop enables a rich, native experience on each device, including support for gestures and multi-touch features, which customizes the experience based on the type of device. Each client device has a Citrix Receiver, which acts as the agent to communicate with the virtual desktop by using the ICA/HDX protocol.

VDA: Each VM needs a Citrix Virtual Desktop Agent (VDA) to capture desktop data and send it to the Citrix Receiver in the client device. The VDA also emulates keyboard and gestures sent from the Receiver. ICA is the Citrix remote display protocol for VDI.

Hypervisor: XenDesktop has an open architecture that supports the use of several different hypervisors, such as VMware ESXi (vSphere), Citrix XenServer, and Microsoft Hyper-V.

Accelerator VM: The optional accelerator VM in this case is Atlantis Computing. For more information, see Atlantis Computing on page 7.

Shared storage: Shared storage is used to store user profiles and user data files. Depending on the provisioning model that is used, different data is stored for VM images. For more information, see the Storage model section on page 7.

For more information, see the Lenovo Client Virtualization base reference architecture document that is available at this website: lenovopress.com/tips1275

3.1 Citrix XenDesktop provisioning

Citrix XenDesktop features the following primary provisioning models:
- Provisioning Services (PVS)
- Machine Creation Services (MCS)

3.1.1 Provisioning Services

Hosted VDI desktops can be deployed with or without Citrix PVS. The advantage of the use of PVS is that you can stream a single desktop image to create multiple virtual desktops on one or more servers in a data center. Figure 3 shows the sequence of operations that are run by XenDesktop to deliver a hosted VDI virtual desktop.

When the virtual disk (vdisk) master image is available from the network, the VM on a target device no longer needs its local hard disk drive (HDD) to operate; it boots directly from the network and behaves as if it were running from a local drive on the target device, which is why PVS is recommended for stateless virtual desktops. PVS often is not used for dedicated virtual desktops because the write cache is not stored on shared storage. PVS is also used with Microsoft Roaming Profiles (MSRPs) so that the user's profile information can be separated out and reused. Profile data is available from shared storage.

It is a best practice to use snapshots for changes to the master VM images and also keep copies as a backup.

Figure 3: Using PVS for a stateless model

3.1.2 Machine Creation Services

Unlike PVS, MCS does not require additional servers. Instead, it uses integrated functionality that is built into the hypervisor (VMware ESXi, Citrix XenServer, or Microsoft Hyper-V) and communicates through the respective APIs. Each desktop has one difference disk and one identity disk (as shown in Figure 4). The difference disk is used to capture any changes that are made to the master image. The identity disk is used to store information, such as device name and password.

Figure 4: MCS image and difference/identity disk storage model

The following types of Image Assignment Models for MCS are available:

- Pooled-random: Desktops are assigned randomly. When they log off, the desktop is free for another user. When rebooted, any changes that were made are destroyed.
- Pooled-static: Desktops are permanently assigned to a single user. When a user logs off, only that user can use the desktop, regardless of whether the desktop is rebooted. During reboots, any changes that are made are destroyed.
- Dedicated: Desktops are permanently assigned to a single user. When a user logs off, only that user can use the desktop, regardless of whether the desktop is rebooted. During reboots, any changes that are made persist across subsequent restarts.

MCS thin provisions each desktop from a master image by using built-in technology to provide each desktop with a unique identity. Only changes that are made to the desktop use more disk space. For this reason, the MCS dedicated model is used for dedicated desktops.

3.2 Storage model

This section describes the different types of shared data stored for stateless and dedicated desktops. Stateless and dedicated virtual desktops should have the following common shared storage items:
- The master VM image and snapshots are stored by using Network File System (NFS) or block I/O shared storage.
- The paging file (or vSwap) is transient data that can be redirected to NFS storage. In general, it is recommended to disable swapping, which reduces storage use (shared or local). The desktop memory size should be chosen to match the user workload rather than depending on a smaller image and swapping, which reduces overall desktop performance.
- User profiles (from MSRP) are stored by using Common Internet File System (CIFS).
- User data files are stored by using CIFS.

Dedicated virtual desktops or stateless virtual desktops that need mobility require the following items to be on NFS or block I/O shared storage:
- Difference disks are used to store the user's changes to the base VM image. The difference disks are per user and can become quite large for dedicated desktops.
- Identity disks are used to store the computer name and password and are small.

Stateless desktops can use local solid-state drive (SSD) storage for the PVS write cache, which is used to store all image writes on local SSD storage. These image writes are discarded when the VM is shut down.

3.3 Atlantis Computing

Atlantis Computing provides a software-defined storage solution, which can deliver better performance than a physical PC and reduce storage requirements by up to 95% in virtual desktop environments of all types. The key is Atlantis HyperDup content-aware data services, which fundamentally change the way VMs use storage. This change reduces the storage footprint by up to 95% while minimizing (and in some cases, entirely eliminating) I/O to external storage. The net effect is reduced CAPEX and a marked increase in performance to start, log in, start applications, search, and use virtual desktops or hosted desktops and applications.

Atlantis software uses random access memory (RAM) for write-back caching of data blocks, real-time inline de-duplication of data, coalescing of blocks, and compression, which significantly reduces the data that is cached and persistently stored, in addition to greatly reducing network traffic.

Atlantis software works with any type of heterogeneous storage, including server RAM, direct-attached storage (DAS), SAN, or network-attached storage (NAS). It is provided as a VMware ESXi or Citrix XenServer compatible VM that presents the virtualized storage to the hypervisor as a native data store, which makes deployment and integration straightforward. Atlantis Computing also provides other utilities for managing VMs and backing up and recovering data stores.

Atlantis provides a number of volume types suitable for virtual desktops and shared desktops. Different volume types support different application requirements and deployment models. Table 1 compares the Atlantis volume types.

Table 1: Atlantis Volume Types
- Hyper-converged: minimum of 3 servers (cluster); performance tier: memory or local flash; capacity tier: DAS; HA: USX HA; good balance between performance and capacity.
- Simple Hybrid: 1 server minimum; performance tier: memory or flash; capacity tier: shared storage; HA: hypervisor HA; functionally equivalent to Atlantis ILIO Persistent VDI.
- Simple All Flash: 1 server minimum; performance tier: local flash; capacity tier: shared flash; HA: hypervisor HA; good performance, but lower capacity.
- Simple In-Memory: 1 server minimum; performance tier: memory or flash; capacity tier: memory or flash; HA: N/A (daily backup); functionally equivalent to Atlantis ILIO Diskless VDI.

3.3.1 Atlantis hyper-converged volume

Atlantis hyper-converged volumes are a hybrid between memory or local flash for accelerating performance and direct-attached storage (DAS) for capacity, and they provide a good balance between the performance and capacity that is needed for virtual desktops. As shown in Figure 5, hyper-converged volumes are clustered across three or more servers and have built-in resiliency in which the volume can be migrated to other servers in the cluster in case of server failure or entering maintenance mode. Hyper-converged volumes are supported for ESXi and XenServer.

Figure 5: Atlantis USX Hyper-converged Cluster

3.3.2 Atlantis simple hybrid volume

Atlantis simple hybrid volumes are targeted at dedicated virtual desktop environments. This volume type provides the optimal solution for desktop virtualization customers that are using traditional or existing storage technologies that are optimized by Atlantis software with server RAM. In this scenario, Atlantis employs memory as a tier and uses a small amount of server RAM for all I/O processing while using the existing SAN, NAS, or all-flash array storage as the primary storage. Atlantis storage optimizations increase the number of desktops that the storage can support by up to 20 times while improving performance.

Disk-backed configurations can use various storage types, including host-based flash memory cards, external all-flash arrays, and conventional spinning disk arrays. A variation of the simple hybrid volume type is the simple all-flash volume, which uses fast, low-latency shared flash storage whereby very little RAM is used and all I/O requests are sent to the flash storage after the inline de-duplication and compression are performed.

This reference architecture concentrates on the simple hybrid volume type for dedicated desktops, stateless desktops that use local SSDs, and hosted shared desktops and applications. To cover the widest variety of shared storage, the simple all-flash volume type is not considered.

3.3.3 Atlantis simple in-memory volume

Atlantis simple in-memory volumes eliminate storage from stateless VDI deployments by using local server RAM and the in-memory storage optimization technology. Server RAM is used as the primary storage for stateless virtual desktops, which ensures that read and write I/O occurs at memory speeds and eliminates network traffic. An option allows for in-line compression and decompression to reduce the RAM usage. SnapClone technology is used to persist the data store in case of VM reboots, power outages, or other failures.

4 Operational model

This section describes the options for mapping the logical components of a client virtualization solution onto hardware and software. The Operational model scenarios section gives an overview of the available mappings and has pointers into the other sections for the related hardware. Each subsection contains performance data, has recommendations on how to size for that particular hardware, and a pointer to the BOM configurations that are described in the Appendix: Bill of materials on page 47. The last part of this section contains some deployment models for example customer scenarios.

4.1 Operational model scenarios

Figure 6 shows the following operational models (solutions) in Lenovo Client Virtualization: enterprise, small-medium business (SMB), and hyper-converged.

Figure 6: Operational model scenarios (summarized below)
- Enterprise, traditional: compute/management servers x3550, x3650, nx360; hypervisor ESXi or XenServer; graphics acceleration NVIDIA GRID K1 or K2; shared storage IBM Storwize V7000 or IBM FlashSystem 840; networking 10GbE, 10GbE FCoE, or 8 or 16 Gb FC.
- Enterprise, converged: Flex System chassis with Flex System x240 compute/management servers; hypervisor ESXi; shared storage IBM Storwize V7000 or IBM FlashSystem 840; networking 10GbE, 10GbE FCoE, or 8 or 16 Gb FC.
- Hyper-converged (enterprise and SMB): compute/management servers x3650; hypervisor ESXi or XenServer; graphics acceleration NVIDIA GRID K1 or K2; shared storage not applicable; networking 10GbE.
- SMB, traditional: compute/management servers x3550, x3650, nx360; hypervisor ESXi, XenServer, or Hyper-V; graphics acceleration NVIDIA GRID K1 or K2; shared storage IBM Storwize V3700; networking 10GbE.
- SMB, converged: not recommended.

The vertical axis is split into two halves: greater than 600 users is termed Enterprise and less than 600 is termed SMB. The 600 user split is not exact and provides rough guidance between Enterprise and SMB. The last column in Figure 6 (labelled hyper-converged) spans both halves because a hyper-converged solution can be deployed in a linear fashion from a small number of users (100) up to a large number of users (>4000).

The horizontal axis is split into three columns. The left-most column represents traditional rack-based systems with top-of-rack (TOR) switches and shared storage. The middle column represents converged systems where the compute, networking, and sometimes storage are converged into a chassis, such as the Flex System.

The right-most column represents hyper-converged systems and the software that is used in these systems. For the purposes of this reference architecture, the traditional and converged columns are merged for enterprise solutions; the only significant differences are the networking, form factor, and capabilities of the compute servers. Converged systems are not generally recommended for the SMB space because the converged hardware chassis can add more overhead when only a few compute nodes are needed. Other compute nodes in the converged chassis can be used for other workloads to make this hardware architecture more cost-effective.

4.1.1 Enterprise operational model

For the enterprise operational model, see the following sections for more information about each component, its performance, and sizing guidance:
- 4.2 Hypervisor support
- 4.3 Compute servers for virtual desktops
- 4.4 Compute servers for hosted shared desktops
- 4.7 Graphics Acceleration
- 4.8 Management servers
- 4.9 Systems management
- 4.10 Shared storage
- 4.11 Networking
- 4.12 Racks

To show the enterprise operational model for different sized customer environments, four different sizing models are provided for supporting 600, 1500, 4500, and users.

4.1.2 SMB operational model

For the SMB operational model, see the following sections for more information about each component, its performance, and sizing guidance:
- 4.2 Hypervisor support
- 4.5 Compute servers for SMB virtual desktops
- 4.7 Graphics Acceleration
- 4.8 Management servers (may not be needed)
- 4.9 Systems management
- 4.10 Shared storage (see section on IBM Storwize V3700)
- 4.11 Networking (see section on 10 GbE and iSCSI)
- 4.12 Racks (use smaller rack or no rack at all)

To show the SMB operational model for different sized customer environments, four different sizing models are provided for supporting 75, 150, 300, and 600 users.

4.1.3 Hyper-converged operational model

For the hyper-converged operational model, see the following sections for more information about each component, its performance, and sizing guidance:

- 4.2 Hypervisor support
- 4.6 Compute servers for hyper-converged systems
- 4.7 Graphics Acceleration
- 4.8 Management servers
- 4.9 Systems management
- 4.11 Networking (10 GbE)
- 4.12 Racks

To show the hyper-converged operational model for different sized customer environments, four different sizing models are provided for supporting 300, 600, 1500, and 3000 users. The management server VMs for a hyper-converged cluster can either be in a separate hyper-converged cluster or on traditional shared storage.

4.2 Hypervisor support

The Citrix XenDesktop reference architecture was tested with VMware ESXi 6.0, XenServer 6.5, and Microsoft Hyper-V Server 2012 R2.

4.2.1 VMware ESXi

The VMware ESXi hypervisor is convenient because it can start from a USB flash drive and does not require any more local storage. Alternatively, ESXi can be booted from SAN shared storage. For the VMware ESXi hypervisor, the number of vCenter clusters depends on the number of compute servers.

For stateless desktops, local SSDs can be used to store the base image and pooled VMs for improved performance. Two replicas must be stored for each base image. Each stateless virtual desktop requires a linked clone, which tends to grow over time until it is refreshed at log out. Two enterprise high-speed 400 GB SSDs in a RAID 0 configuration should be sufficient for most user scenarios; however, 800 GB SSDs might be needed. Because of the stateless nature of the architecture, there is little added value in configuring reliable SSDs in more redundant configurations.

To use the latest version of ESXi, obtain the image from the following website and upgrade by using vCenter: ibm.com/systems/x/os/vmware/

4.2.2 Citrix XenServer

For the Citrix XenServer hypervisor, it is necessary to install the OS onto drives in each compute server. For stateless desktops, local SSDs can be used to improve performance by storing the delta disk for MCS or the write-back cache for PVS. Each stateless virtual desktop requires a cache, which tends to grow over time until the virtual desktop is rebooted. The size of the write-back cache depends on the environment. Two enterprise high-speed 200 GB SSDs in a RAID 0 configuration should be sufficient for most user scenarios; however, 400 GB (or even 800 GB) SSDs might be needed. Because of the stateless nature of the architecture, there is little added value in configuring reliable SSDs in more redundant configurations.

The Flex System x240 compute node has two 2.5-inch drives. It is possible to both install XenServer and store stateless virtual desktops on the same two drives by using larger capacity SSDs. The System x3550 and System x3650 rack servers do not have this restriction and use separate HDDs for XenServer and SSDs for local stateless virtual desktops.

For Internet Explorer 11, the software rendering option must be used for flash videos to play correctly with Login VSI.

4.2.3 Microsoft Hyper-V

For the Microsoft Hyper-V hypervisor, it is necessary to install the OS onto drives in each compute server. For anything more than a small number of compute servers, it is worth the effort of setting up an automated installation from a USB flash drive by using the Windows Assessment and Deployment Kit (ADK).

The Flex System x240 compute node has two 2.5-inch drives. It is possible to both install Hyper-V and store stateless virtual desktops on the same two drives by using larger capacity SSDs. The System x3550 and System x3650 rack servers do not have this restriction and use separate HDDs for Hyper-V and SSDs for local stateless virtual desktops.

Dynamic memory provides the capability to overcommit system memory and allow more VMs at the expense of paging. In normal circumstances, each compute server must have 20% extra memory to allow failover. When a server goes down and its VMs are moved to other servers, there should be no noticeable performance degradation to VMs that are already on that server. The recommendation is to use dynamic memory, but it should only affect the system when a compute server fails and its VMs are moved to other compute servers.

For Internet Explorer 11, the software rendering option must be used for flash videos to play correctly with Login VSI.

The following configuration considerations are suggested for Hyper-V installations:
- Disable Large Send Offload (LSO) by using the Disable-NetAdapterLso command on the Hyper-V compute server.
- Disable virtual machine queue (VMQ) on all interfaces by using the Disable-NetAdapterVmq command on the Hyper-V compute server.
- Apply registry changes as per the Microsoft article that is found at this website: support.microsoft.com/kb/. The changes apply to Windows Server 2008 and Windows Server.
- Disable VMQ and Internet Protocol Security (IPSec) task offloading flags in the Hyper-V settings for the base VM.
- By default, storage is shared as hidden Admin shares (for example, e$) on the Hyper-V compute server, and XenDesktop does not list Admin shares while adding the host. To make shared storage available to XenDesktop, the volume should be shared on the Hyper-V compute server.
- Because the SCVMM library is large, it is recommended that it is accessed by using a remote share.

4.3 Compute servers for virtual desktops

This section describes stateless and dedicated virtual desktop models. Stateless desktops that allow live migration of a VM from one physical server to another are considered the same as dedicated desktops because they both require shared storage. In some customer environments, both stateless and dedicated desktop models might be required, which requires a hybrid implementation.

Compute servers are servers that run a hypervisor and host virtual desktops. There are several considerations for the performance of the compute server, including the processor family and clock speed, the number of processors, the speed and size of main memory, and local storage options.

The use of the Aero theme in Microsoft Windows 7 or other intensive workloads has an effect on the maximum number of virtual desktops that can be supported on each compute server. Windows 8 also requires more processor resources than Windows 7, whereas little difference was observed between 32-bit and 64-bit Windows 7. Although a slower processor can be used and still not exhaust the processor power, it is a good policy to have excess capacity.

Another important consideration for compute servers is system memory. For stateless users, the typical range of memory that is required for each desktop is 2 GB - 4 GB. For dedicated users, the range of memory for each desktop is 2 GB - 6 GB. Designers and engineers that require graphics acceleration might need 8 GB - 16 GB of RAM per desktop. In general, power users that require larger memory sizes also require more virtual processors. This reference architecture standardizes on 2 GB per desktop as the minimum requirement of a Windows 7 desktop. The virtual desktop memory should be large enough so that swapping is not needed and vSwap can be disabled.

For more information, see the BOM for enterprise compute servers section in the appendix.

4.3.1 Intel Xeon E5-2600 v3 processor family servers

This section shows the performance results and sizing guidance for Lenovo compute servers that are based on the Intel Xeon E5-2600 v3 processors (Haswell).

ESXi 6.0 performance results

Table 2 lists the Login VSI performance of E5-2600 v3 processors from Intel that use the Login VSI 4.1 office worker workload with ESXi 6.0.

Table 2: ESXi 6.0 performance with office worker workload
Processor with office worker workload Hypervisor MCS Stateless MCS Dedicated
Two E v GHz, 10C 105W ESXi users 234 users
Two E v GHz, 12C 120W ESXi users 291 users
Two E v GHz, 12C 120W ESXi users 301 users
Two E v GHz, 12C 135W ESXi users 306 users

Table 3 lists the results for the Login VSI 4.1 knowledge worker workload.

Table 3: ESXi 6.0 performance with knowledge worker workload
Processor with knowledge worker workload Hypervisor MCS Stateless MCS Dedicated
Two E v GHz, 12C 120W ESXi users 237 users
Two E v GHz, 12C 135W ESXi users 246 users

Table 4 lists the results for the Login VSI 4.1 power worker workload.

Table 4: ESXi 6.0 performance with power worker workload
Processor with power worker workload Hypervisor MCS Stateless MCS Dedicated
Two E v GHz, 12C 120W ESXi users 202 users

These results indicate the comparative processor performance. The following conclusions can be drawn:
- The performance for stateless and dedicated virtual desktops is similar.
- The Xeon E5-2650v3 processor has performance that is similar to the previously recommended Xeon E5-2690v2 processor (Ivy Bridge), but uses less power and is less expensive.
- The Xeon E5-2690v3 processor does not have significantly better performance than the Xeon E5-2680v3 processor; therefore, the E5-2680v3 is preferred because of the lower cost.
- Between the Xeon E5-2650v3 (2.30 GHz, 10C 105W) and the Xeon E5-2680v3 (2.50 GHz, 12C 120W) series processors are the Xeon E5-2660v3 (2.60 GHz, 10C 105W) and the Xeon E5-2670v3 (2.30 GHz, 12C 120W) series processors. The cost per user increases with each processor but with a corresponding increase in user density.
- The Xeon E5-2680v3 processor has good user density, but the significant increase in cost might outweigh this advantage. Also, many configurations are bound by memory; therefore, a faster processor might not provide any added value. Some users require the fastest processor, and for those users the Xeon E5-2680v3 processor is the best choice. However, the Xeon E5-2650v3 processor is recommended for an average configuration. The Xeon E5-2680v3 processor is recommended for power workers.

Previous reference architectures used Login VSI 3.7 medium and heavy workloads. Table 5 gives a comparison with the newer Login VSI 4.1 office worker and knowledge worker workloads. The table shows that Login VSI 3.7 is on average 20% to 30% higher than Login VSI 4.1.

Table 5: Comparison of Login VSI 3.7 and 4.1 workloads
Processor Workload MCS Stateless MCS Dedicated
Two E v GHz, 10C 105W 4.1 Office worker 239 users 234 users
Two E v GHz, 10C 105W 3.7 Medium 286 users 286 users
Two E v GHz, 12C 135W 4.1 Office worker 301 users 306 users
Two E v GHz, 12C 135W 3.7 Medium 394 users 379 users
Two E v GHz, 12C 135W 4.1 Knowledge worker 252 users 246 users
Two E v GHz, 12C 135W 3.7 Heavy 348 users 319 users
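
As a rough cross-check of the "20% to 30% higher" statement, the following sketch (Python, illustration only) recomputes the relative difference from the user counts that appear in Table 5. The pairing of rows and their labels by core count and TDP is an assumption based on the table layout, not part of the reference architecture itself.

```python
# Hypothetical cross-check of Table 5: how much higher are the Login VSI 3.7
# densities than the Login VSI 4.1 densities for the same processor and model?
pairs = [
    # (label, Login VSI 4.1 users, Login VSI 3.7 users)
    ("10C 105W, stateless", 239, 286),
    ("10C 105W, dedicated", 234, 286),
    ("12C 135W office, stateless", 301, 394),
    ("12C 135W office, dedicated", 306, 379),
    ("12C 135W knowledge, stateless", 252, 348),
    ("12C 135W knowledge, dedicated", 246, 319),
]

deltas = []
for label, vsi41, vsi37 in pairs:
    delta = (vsi37 - vsi41) / vsi41 * 100.0
    deltas.append(delta)
    print(f"{label}: Login VSI 3.7 is {delta:.0f}% higher than Login VSI 4.1")

# Individual rows range from roughly 20% to just under 40%;
# the average lands in the 20% - 30% band quoted in the text.
print(f"average difference: {sum(deltas) / len(deltas):.0f}%")
```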

Table 6 compares the E5-2600 v3 processors with the previous generation E5-2600 v2 processors by using the Login VSI 3.7 workloads to show the relative performance improvement. On average, the E5-2600 v3 processors are 25% - 40% faster than the previous generation with the equivalent processor names.

Table 6: Comparison of E5-2600 v2 and E5-2600 v3 processors
Processor Workload MCS Stateless MCS Dedicated
Two E v GHz, 8C 85W 3.7 Medium 204 users 204 users
Two E v GHz, 10C 105W 3.7 Medium 286 users 286 users
Two E v2 3.0 GHz, 10C 130W 3.7 Medium 268 users 257 users
Two E v GHz, 12C 135W 3.7 Medium 394 users 379 users
Two E v2 3.0 GHz, 10C 130W 3.7 Heavy 224 users 229 users
Two E v GHz, 12C 135W 3.7 Heavy 348 users 319 users

XenServer 6.5 performance results

Table 7 lists the Login VSI performance of E5-2600 v3 processors from Intel that use the office worker workload with XenServer 6.5.

Table 7: XenServer 6.5 performance with office worker workload
Processor with office worker workload Hypervisor MCS stateless MCS dedicated
Two E v GHz, 10C 105W XenServer users 224 users
Two E v GHz, 12C 120W XenServer users 278 users

Table 8 shows the results for the same comparison that uses the knowledge worker workload.

Table 8: XenServer 6.5 performance with knowledge worker workload
Processor with knowledge worker workload Hypervisor MCS stateless MCS dedicated
Two E v GHz, 12C 120W XenServer users 208 users

Table 9 lists the results for the Login VSI 4.1 power worker workload.

Table 9: XenServer 6.5 performance with power worker workload
Processor with power worker workload Hypervisor MCS Stateless MCS Dedicated
Two E v GHz, 12C 120W XenServer users 179 users

Hyper-V Server 2012 R2 performance results

Table 10 lists the Login VSI performance of E5-2600 v3 processors from Intel that use the office worker workload with Hyper-V Server 2012 R2.

Table 10: Hyper-V Server 2012 R2 performance with office worker workload
Processor with office worker workload Hypervisor MCS stateless MCS dedicated
Two E v GHz, 10C 105W Hyper-V 270 users 272 users

Table 11 shows the results for the same comparison that uses the knowledge worker workload.

Table 11: Hyper-V Server 2012 R2 performance with knowledge worker workload
Processor with knowledge worker workload Hypervisor MCS stateless MCS dedicated
Two E v GHz, 12C 120W Hyper-V 250 users 247 users

Table 12 lists the results for the Login VSI 4.1 power worker workload.

Table 12: Hyper-V Server 2012 R2 performance with power worker workload
Processor with power worker workload Hypervisor MCS Stateless MCS Dedicated
Two E v GHz, 12C 120W Hyper-V 216 users 214 users

Compute server recommendation for Office and Knowledge workers

The default recommendation is the Xeon E5-2650v3 processor and 512 GB of system memory because this configuration provides the best coverage for a range of users that need up to 3 GB of memory. For users who need VMs that are larger than 3 GB, Lenovo recommends the use of up to 768 GB of system memory and the Xeon E5-2680v3 processor.

Lenovo testing shows that 150 users per server is a good baseline and has an average of 76% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 180 users per server have an average of 89% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1. A sizing sketch based on these densities follows Table 13.

Table 13 lists the processor usage with ESXi for the recommended user counts for normal mode and failover mode.

Table 13: Processor usage
Processor Workload Users per Server Stateless Utilization Dedicated Utilization
Two E v3 Office worker 150 normal mode 78% 75%
Two E v3 Office worker 180 failover mode 88% 87%
Two E v3 Knowledge worker 150 normal mode 78% 77%
Two E v3 Knowledge worker 180 failover mode 86% 86%
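
The following sketch (illustrative only, not part of the reference architecture tooling) shows one plausible way to turn the normal and failover densities above into a server count: provision enough servers that the population fits at the normal density, and that the load still fits at the higher failover density when any single server is lost. The sizing tables that follow remain the authoritative guidance.

```python
import math

def compute_servers(total_users, users_normal=150, users_failover=180):
    """Estimate compute servers for virtual desktops.

    users_normal   -- recommended users per server in normal mode
    users_failover -- users per server tolerated when one server has failed
    """
    servers = math.ceil(total_users / users_normal)
    # Ensure the load still fits when any single server is lost.
    while (servers - 1) * users_failover < total_users:
        servers += 1
    return servers

for users in (600, 1500, 4500):
    print(f"{users} users: {compute_servers(users)} compute servers")
```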

Table 14 lists the recommended number of virtual desktops per server for different VM memory sizes. The number of users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 14: Recommended number of virtual desktops per server
Processor E5-2650v3 E5-2650v3 E5-2680v3
VM memory size 2 GB (default) 3 GB 4 GB
System memory 384 GB 512 GB 768 GB
Desktops per server (normal mode)
Desktops per server (failover mode)

Table 15 lists the approximate number of compute servers that are needed for different numbers of users and VM sizes.

Table 15: Compute servers needed for different numbers of users and VM sizes
Desktop memory size (2 GB or 4 GB) 600 users 1500 users 4500 users users
Compute servers (normal)
Compute servers (failover)
Failover ratio 4:1 4:1 5:1 5:1
Desktop memory size (3 GB) 600 users 1500 users 4500 users users
Compute servers (normal)
Compute servers (failover)
Failover ratio 4:1 4.5:1 4.5:1 5:1

Compute server recommendation for Power workers

For power workers, the default recommendation is the Xeon E5-2680v3 processor and 384 GB of system memory. For users who need VMs that are larger than 3 GB, Lenovo recommends up to 768 GB of system memory.

Lenovo testing shows that 125 users per server is a good baseline and has an average of 79% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 150 users per server have an average of 88% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.

Table 16 lists the processor usage with ESXi for the recommended user counts for normal mode and failover mode.

Table 16: Processor usage
Processor Workload Users per Server Stateless Utilization Dedicated Utilization
Two E v3 Power worker 125 normal mode 78% 80%
Two E v3 Power worker 150 failover mode 88% 88%

Table 17 lists the recommended number of virtual desktops per server for different VM memory sizes. The number of power users is reduced to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 17: Recommended number of virtual desktops per server
Processor E5-2680v3 E5-2680v3 E5-2680v3
VM memory size 3 GB (default) 4 GB 6 GB
System memory 384 GB 512 GB 768 GB
Desktops per server (normal mode)
Desktops per server (failover mode)

Table 18 lists the approximate number of compute servers that are needed for different numbers of power users and VM sizes.

Table 18: Compute servers needed for different numbers of power users and VM sizes
Desktop memory size 600 users 1500 users 4500 users users
Compute servers (normal)
Compute servers (failover)
Failover ratio 5:1 4:1 5:1 5:1

4.3.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX

Atlantis USX provides storage optimization by using a 100% software solution. There is a cost in processor and memory usage while offering decreased storage usage and increased input/output operations per second (IOPS). This section contains performance measurements for processor and memory utilization of USX simple hybrid volumes and gives an indication of the storage usage and performance.

For persistent desktops, USX simple hybrid volumes provide acceleration to shared storage. For environments that are not using Atlantis USX, it is recommended to use linked clones to conserve shared storage space. However, with Atlantis USX hybrid volumes, it is recommended to use full clones for persistent desktops because they de-duplicate more efficiently than linked clones and can support more desktops per server. For stateless desktops, USX simple hybrid volumes provide storage acceleration for local SSDs. USX simple in-memory volumes are not considered for stateless desktops because they require a large amount of memory.

Table 19 lists the Login VSI performance of USX simple hybrid volumes that use two E5-2600 v3 processors and ESXi 6.0.

Table 19: ESXi 6.0 performance with USX hybrid volume
Login VSI 4.1 Workload Hypervisor MCS Stateless MCS Dedicated
Office worker ESXi users 206 users
Knowledge worker ESXi users 193 users
Power worker ESXi users 156 users

A comparison of performance with and without the Atlantis VM shows an overhead of 20 - 30% when the Atlantis VM is used. This result is to be expected, and it is recommended that higher-end processors (like the E5-2680v3) are used to maximize density.

For office workers and knowledge workers, Lenovo testing shows that 125 users per server is a good baseline and has an average of 74% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 150 users per server have an average of 83% usage of the processor.

For power workers, Lenovo testing shows that 100 users per server is a good baseline and has an average of 74% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 120 users per server have an average of 85% usage of the processor.

It is important to keep this 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1. Table 20 lists the processor usage with ESXi for the recommended user counts for normal mode and failover mode.

Table 20: Processor usage
Processor Workload Users per Server Stateless Utilization Dedicated Utilization
Two E v3 Knowledge worker 125 normal mode 72% 75%
Two E v3 Knowledge worker 150 failover mode 82% 83%
Two E v3 Power worker 100 normal mode 74% 73%
Two E v3 Power worker 120 failover mode 85% 85%

It is still a best practice to separate the user folder and any other shared folders into separate storage. This configuration leaves all of the other possible changes that might occur in a full clone to be stored in the simple hybrid volume. This configuration is highly dependent on the environment. Testing by Atlantis Computing suggests that 3.5 GB of unique data per persistent desktop is sufficient.

Atlantis USX uses de-duplication and compression to reduce the storage that is required for VMs. Experience shows that an 85% - 95% reduction is possible, depending on the VMs. Assuming each VM is 30 GB, the uncompressed storage requirement for 150 full clones is 4500 GB. The simple hybrid volume must be only 10% of this volume, which is 450 GB. To allow some room for growth and the Atlantis VM, a 600 GB volume should be allocated.

Atlantis has some requirements on memory. The Atlantis VM requires 5 GB, and 5% of the storage volume is used for in-memory metadata. In the example of a 450 GB volume, this amount is 22.5 GB. The default size for the RAM cache is 15%, but it is recommended to increase this size to 25% for the write-heavy VDI workload. In the example of a 450 GB volume, this size is 112.5 GB. Assuming 4 GB for the hypervisor, 144 GB (5 + 22.5 + 112.5 + 4) of system memory should be reserved. It is recommended that at least 384 GB of server memory is used for USX simple hybrid VDI deployments. Proof of concept (POC) testing can help determine the actual amount of RAM. A worked version of this calculation is shown at the end of this section.

Table 21 lists the recommended number of virtual desktops per server for different VM memory sizes.

Table 21: Recommended number of virtual desktops per server with USX simple hybrid volumes
Processor E5-2680v3 E5-2680v3 E5-2680v3
VM memory size 2 GB (default) 3 GB 4 GB
Total system memory 384 GB 512 GB 768 GB
Reserved system memory 144 GB 144 GB 144 GB
System memory for desktop VMs 240 GB 368 GB 624 GB
Desktops per server (normal mode)
Desktops per server (failover mode)

Table 22 lists the number of compute servers that are needed for different numbers of users and VM sizes. A server with 384 GB of system memory is used for 2 GB VMs, 512 GB of system memory is used for 3 GB VMs, and 768 GB of system memory is used for 4 GB VMs.

Table 22: Compute servers needed for different numbers of users with USX simple hybrid volumes
600 users 1500 users 4500 users users
Compute servers for 100 users (normal)
Compute servers for 120 users (failover)
Failover ratio 5:1 6.5:1 5:1 5:1

As a result of the use of Atlantis simple hybrid volumes, the only read operations are to fill the cache for the first time. For all practical purposes, the remaining reads are few and at most 1 IOPS per VM. Writes to persistent storage are still needed for starting, logging in, remaining in steady state, and logging off, but the overall IOPS count is substantially reduced.

Assuming the use of a fast, low-latency shared storage device, a single VM boot can take seconds to get past the display of the logon window and get all of the other services fully loaded. This process takes this time because boot operations are mainly read operations, although the actual boot time can vary depending on the VM. Citrix XenDesktop boots VMs in batches of 10 at a time, which reduces IOPS for most storage systems but is actually an inhibitor for Atlantis simple hybrid volumes. Without the use of XenDesktop, a boot of 100 VMs in a single data store is completed in 3.5 minutes; that is, only 11 times longer than the boot of a single VM and far superior to existing storage solutions.

Login time for a single desktop varies, depending on the VM image, but can be extremely quick. In some cases, the login takes less than 6 seconds. Scale-out testing across a cluster of servers shows that one new login every 6 seconds can be supported over a long period. Therefore, at any one instant, there can be multiple logins underway, and the main bottleneck is the processor.
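
The memory reservation and datastore sizing example above can be expressed as a short worked calculation. This is a hypothetical helper for illustration only; the inputs (150 full clones of 30 GB, ~10% size after de-duplication, 5 GB Atlantis VM, 5% metadata, 25% RAM cache, 4 GB for the hypervisor) are the example values from the text, and a real deployment should still be validated with a proof of concept.

```python
def usx_simple_hybrid_sizing(desktops=150, vm_size_gb=30, dedup_factor=0.10,
                             atlantis_vm_gb=5, metadata_fraction=0.05,
                             ram_cache_fraction=0.25, hypervisor_gb=4):
    """Reproduce the worked example: datastore size and reserved host memory."""
    uncompressed_gb = desktops * vm_size_gb          # 150 x 30 GB = 4500 GB
    volume_gb = uncompressed_gb * dedup_factor       # ~10% after de-duplication
    metadata_gb = volume_gb * metadata_fraction      # in-memory metadata (5%)
    ram_cache_gb = volume_gb * ram_cache_fraction    # RAM cache raised to 25%
    reserved_gb = atlantis_vm_gb + metadata_gb + ram_cache_gb + hypervisor_gb
    return volume_gb, reserved_gb

volume_gb, reserved_gb = usx_simple_hybrid_sizing()
print(f"simple hybrid volume: {volume_gb:.0f} GB (allocate ~600 GB for growth)")
print(f"reserved system memory: {reserved_gb:.0f} GB")  # 5 + 22.5 + 112.5 + 4 = 144
```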

4.4 Compute servers for hosted shared desktops

This section describes compute servers for hosted shared desktops that use Citrix XenDesktop 7.x. This functionality was in the separate XenApp product, but it is now integrated into the Citrix XenDesktop product. Hosted shared desktops are more suited to task workers that require little in the way of desktop customization.

As the name implies, multiple desktops share a single VM; however, because of this sharing, the compute resources often are exhausted before memory. Lenovo testing showed that 128 GB of memory is sufficient for servers with two processors. Other testing showed that the performance differences between four, six, or eight VMs are minimal; therefore, four VMs are recommended to reduce the license costs for Windows Server 2012 R2.

For more information, see the BOM for hosted desktops section in the appendix.

4.4.1 Intel Xeon E5-2600 v3 processor family servers

Table 23 lists the processor performance results for different size workloads that use four Windows Server 2012 R2 VMs with the Xeon E5-2600 v3 series processors and the ESXi 6.0 hypervisor.

Table 23: ESXi 6.0 results for shared hosted desktops
Processor Hypervisor Workload Hosted Desktops
Two E v GHz, 10C 105W ESXi 6.0 Office Worker 222 users
Two E v GHz, 12C 120W ESXi 6.0 Office Worker 255 users
Two E v GHz, 12C 120W ESXi 6.0 Office Worker 264 users
Two E v GHz, 12C 135W ESXi 6.0 Office Worker 280 users
Two E v GHz, 12C 120W ESXi 6.0 Knowledge Worker 231 users
Two E v GHz, 12C 120W ESXi 6.0 Knowledge Worker 237 users

Table 24 lists the processor performance results for different size workloads that use four Windows Server 2012 R2 VMs with the Xeon E5-2600 v3 series processors and the XenServer 6.5 hypervisor.

Table 24: XenServer 6.5 results for shared hosted desktops
Processor Hypervisor Workload Hosted Desktops
Two E v GHz, 10C 105W XenServer 6.5 Office Worker 225 users
Two E v GHz, 12C 120W XenServer 6.5 Office Worker 243 users
Two E v GHz, 12C 120W XenServer 6.5 Office Worker 262 users
Two E v GHz, 12C 135W XenServer 6.5 Office Worker 271 users
Two E v GHz, 12C 120W XenServer 6.5 Knowledge Worker 223 users
Two E v GHz, 12C 120W XenServer 6.5 Knowledge Worker 226 users

Table 25 lists the processor performance results for different size workloads that use four Windows Server 2012 R2 VMs with the Xeon E5-2600 v3 series processors and Hyper-V Server 2012 R2.

Table 25: Hyper-V Server 2012 R2 results for shared hosted desktops
Processor Hypervisor Workload Hosted Desktops
Two E v GHz, 10C 105W Hyper-V Office Worker 337 users
Two E v GHz, 12C 120W Hyper-V Office Worker 295 users
Two E v GHz, 12C 120W Hyper-V Knowledge Worker 264 users

Lenovo testing shows that 170 hosted desktops per server is a good baseline. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo recommends 204 hosted desktops per server. It is important to keep 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.

Table 26 lists the processor usage for the recommended number of users.

Table 26: Processor usage
Processor Workload Users per Server Utilization
Two E v3 Office worker 170 normal mode 78%
Two E v3 Office worker 204 failover mode 88%
Two E v3 Knowledge worker 170 normal mode 65%
Two E v3 Knowledge worker 204 failover mode 78%

Table 27 lists the number of compute servers that are needed for different numbers of users. Each compute server has 128 GB of system memory for the four VMs.

Table 27: Compute servers needed for different numbers of users and VM sizes
600 users 1500 users 4500 users users
Compute servers for 170 users (normal)
Compute servers for 204 users (failover)
Failover ratio 3:1 4:1 4.5:1 5:1
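
As an illustration of how the hosted shared desktop recommendation maps onto the four Windows Server 2012 R2 session-host VMs per server, the following sketch divides the per-server numbers evenly. The even split and the small memory allowance for the hypervisor are assumptions for illustration, not measured values.

```python
host_memory_gb = 128          # recommended memory for a two-processor host
session_host_vms = 4          # four Windows Server 2012 R2 VMs per host
users_normal, users_failover = 170, 204
hypervisor_reserve_gb = 8     # assumed allowance for the hypervisor itself

memory_per_vm = (host_memory_gb - hypervisor_reserve_gb) / session_host_vms
print(f"memory per session-host VM: {memory_per_vm:.0f} GB")
print(f"users per VM (normal mode): {users_normal / session_host_vms:.1f}")
print(f"users per VM (failover mode): {users_failover / session_host_vms:.1f}")
```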

4.5 Compute servers for SMB virtual desktops

This section presents an alternative for compute servers to be used in an SMB environment that can be more cost-effective than the enterprise environment that is described in 4.3 Compute servers for virtual desktops and 4.4 Compute servers for hosted shared desktops. The following methods can be used to cut costs from a VDI solution:
- Use a more restrictive XenDesktop license
- Use a hypervisor with a lower license cost
- Do not use dedicated management servers for management VMs
- Use local HDDs in compute servers and use shared storage only when necessary

The Citrix XenDesktop VDI edition has fewer features than the other XenDesktop versions and might be sufficient for the customer SMB environment. For more information and a comparison, see this website: citrix.com/go/products/xendesktop/feature-matrix.html

Citrix XenServer has no additional license cost and is an alternative to other hypervisors. The performance measurements in this section show XenServer and ESXi.

Providing that there is some kind of HA for the management server VMs, the number of compute servers that are needed can be reduced at the cost of less user density. There is a cross-over point on the number of users where it makes sense to have dedicated compute servers for the management VMs. That cross-over point varies by customer, but often it is in the range of users. A good assumption is a reduction in user density of 20% for the management VMs; for example, 125 users reduces to 100 per compute server.

Shared storage is expensive. Some shared storage is needed to ensure user recovery if there is a failure, and the IBM Storwize V3700 with an iSCSI connection is recommended. Dedicated virtual desktops must always be on shared storage in case of failure, but stateless virtual desktops can be provisioned to HDDs on the local server. Only the user data and profile information must be on shared storage.

4.5.1 Intel Xeon E5-2600 v3 processor family servers with local storage

The performance measurements and recommendations in this section are for the use of stateless virtual machines with local storage on the compute server. For more information about persistent virtual desktops, which must use shared storage for resilience, see Compute servers for virtual desktops on page 14.

XenServer 6.5 formats a local datastore by using the LVMoHBA file system. As a consequence, XenServer 6.5 supports only thick provisioning and not thin provisioning. This fact means that VMs that are created with MCS are large and take up too much local disk space. Instead, only PVS was used to provision stateless virtual desktops.

Table 28 shows the processor performance results for the Xeon E5-2650v3 series of processors by using stateless virtual machines on local HDDs with XenServer 6.5. This configuration used 12 or 16 300 GB 15k RPM HDDs in a RAID 10 array. Measurements showed that 8 HDDs are not sufficient for the required IOPS and the capacity requirements.

Table 28: Performance results for PVS stateless virtual desktops with XenServer 6.5
Processor Workload 12 HDDs 16 HDDs
Two E v GHz, 12C 120W Office worker 211 users 211 users
Two E v GHz, 12C 120W Knowledge worker 117 users 162 users

For the office worker workload, 12 HDDs have sufficient IOPS, but the storage capacity with 300 GB HDDs was 96% used. If the number of VMs is reduced to 125, there should be sufficient capacity for a default 6 GB write cache per VM with some room left over for VM growth. For VMs that are larger than 30 GB, it is recommended that 600 GB 15k RPM HDDs are used. For the knowledge worker workload, 12 HDDs do not have sufficient IOPS. In this case, at least 16 HDDs should be used.

Because an SMB environment is used with a range of user sizes from less than 75 to 600, the following configurations are recommended:
- For small user counts, each server can support 75 users at most by using two E5-2650v3 processors, 256 GB of memory, and 12 HDDs. The extra memory is needed for the management server VMs.
- For average user counts, each server supports 125 users at most by using two E5-2650v3 processors, 256 GB of memory, and 16 HDDs.

These configurations also need two extra HDDs for installing XenServer 6.5.

Table 29 shows the number of compute servers that are needed for different numbers of users.

Table 29: SMB compute servers needed for different numbers of users and VM sizes
75 users 150 users 250 users 500 users
Compute servers for 75 users (normal) 2 3 N/A N/A
Compute servers for 75 users (failover) 1 2 N/A N/A
Compute servers for 100 users (normal) N/A N/A 3 5
Compute servers for 125 users (failover) N/A N/A 2 4
Includes VMs for management servers Yes Yes No No

For more information, see the BOM for SMB compute servers section in the appendix.
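
A back-of-envelope check of the local-storage guidance above can be made by computing the usable RAID 10 capacity and comparing it with the default 6 GB PVS write cache mentioned in the text. This sketch is a simplification (XenServer overhead, logs, and VM growth also consume space), and the drive counts and 300 GB size are the values from this section.

```python
def raid10_usable_gb(drives, drive_gb=300):
    """Usable capacity of a RAID 10 array (mirrored pairs, so half the raw capacity)."""
    return (drives // 2) * drive_gb

default_write_cache_gb = 6
for drives, desktops in ((12, 125), (16, 125)):
    usable = raid10_usable_gb(drives)
    per_desktop = usable / desktops
    headroom = per_desktop - default_write_cache_gb
    print(f"{drives} x 300 GB RAID 10: {usable} GB usable, "
          f"{per_desktop:.1f} GB per desktop for {desktops} desktops "
          f"({headroom:.1f} GB beyond the default write cache)")
```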

4.6 Compute servers for hyper-converged systems

This section presents the compute servers for the Atlantis USX hyper-converged system. Additional processing and memory are often required to support storing data locally, as well as additional SSDs or HDDs. Typically, HDDs are used for data capacity, and SSDs and memory are used to provide performance. As the price per GB for flash memory continues to fall, there is a trend to use all SSDs to provide the overall best performance for hyper-converged systems.

For more information, see BOM for hyper-converged compute servers.

4.6.1 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX

Atlantis USX was tested by using the knowledge worker workload of Login VSI 4.1. Four Lenovo x3650 M5 servers with E5-2680 v3 processors were networked together by using a 10 GbE TOR switch. Atlantis USX was installed, and four 400 GB SSDs per server were used to create an all-flash hyper-converged volume across the four servers, which were running ESXi 5.5 U2. This configuration was tested with 500 dedicated virtual desktops on four servers and then on three servers to see the difference if one server is unavailable. Table 30 lists the processor usage for the recommended number of users.

Table 30: Processor usage for Atlantis USX

Processor        Workload           Servers             Users per server   Utilization
Two E5-2680 v3   Knowledge worker   4 (normal mode)     125                66%
Two E5-2680 v3   Knowledge worker   3 (failover mode)   167                89%

From these measurements, Lenovo recommends 125 users per server in normal mode and 150 users per server in failover mode. Lenovo recommends a general failover ratio of 5:1. Table 31 lists the recommended number of virtual desktops per server for different VM memory sizes.

Table 31: Recommended number of virtual desktops per server for Atlantis USX

Processor                             E5-2680 v3       E5-2680 v3   E5-2680 v3
VM memory size                        2 GB (default)   3 GB         4 GB
System memory                         384 GB           512 GB       768 GB
Memory for ESXi and Atlantis USX      63 GB            63 GB        63 GB
Memory for VMs                        321 GB           449 GB       705 GB
Desktops per server (normal mode)     125              125          125
Desktops per server (failover mode)   150              149          150

Table 32 lists the approximate number of compute servers that are needed for different numbers of users and VM sizes.

Table 32: Compute servers needed for different numbers of users for Atlantis USX

                                           300 users   600 users   1500 users   3000 users
Compute servers for 125 users (normal)     3           5           12           24
Compute servers for 150 users (failover)   2           4           10           20
Failover ratio                             3:1         4:1         5:1          5:1

An important part of a hyper-converged system is its resiliency to failures when a compute server is unavailable. Login VSI was run and then one compute server was powered off during the steady state phase. The VMs were migrated from the failed server to the other three servers, with each remaining server gaining VMs. Figure 7 shows the processor usage for the four servers during the steady state phase when one of the servers is powered off. The processor spike for the three remaining servers is noticeable.

Figure 7: Atlantis USX processor usage: server power off

There is an impact on performance and a time lag if a hyper-converged server suffers a catastrophic failure, yet the system can recover quite quickly. However, this situation is best avoided because it is important to build in redundancy at multiple levels for all mission-critical systems.
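The per-server desktop counts in Table 31 are the lower of two limits: the processor limit (125 users in normal mode, 150 in failover mode) and the number of VMs that fit in the memory left after ESXi and Atlantis USX take their share. A minimal sketch of that arithmetic is shown below; the 63 GB overhead figure and the per-VM memory sizes are the assumptions from Table 31.

import math

def usx_desktops_per_server(system_memory_gb: int, vm_memory_gb: float,
                            overhead_gb: int = 63,
                            cpu_limit_normal: int = 125,
                            cpu_limit_failover: int = 150) -> dict:
    # Desktops per server: the lower of the memory-bound and CPU-bound limits.
    memory_bound = math.floor((system_memory_gb - overhead_gb) / vm_memory_gb)
    return {"normal": min(cpu_limit_normal, memory_bound),
            "failover": min(cpu_limit_failover, memory_bound)}

# A 512 GB server with 3 GB desktops is memory bound in failover mode:
# 125 desktops (normal) and 149 desktops (failover), as in Table 31.
print(usx_desktops_per_server(512, 3))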

4.7 Graphics Acceleration

The XenServer 6.5 hypervisor supports the following options for graphics acceleration:

- Dedicated GPU with one GPU per user, which is called pass-through mode.
- GPU hardware virtualization (vGPU) that partitions each GPU for 1 to 8 users.

The performance of graphics acceleration was tested on the NVIDIA GRID K1 and GRID K2 adapters by using the Lenovo System x3650 M5 server and the Lenovo NeXtScale nx360 M5 server. Each of these servers supports up to two GRID adapters. No significant performance differences were found between these two servers when they were used for graphics acceleration, so the results apply to both.

Because pass-through mode offers a low user density (eight for GRID K1 and four for GRID K2), it is recommended that this mode is used only for power users, designers, engineers, or scientists who require powerful graphics acceleration. Lenovo recommends that a high-powered CPU, such as the E5-2680 v3, is used for pass-through mode and vGPU because accelerated graphics tends to put an extra load on the processor. For pass-through mode, with only four or eight users per server, 128 GB of server memory should be sufficient, even for the high-end GRID K2 users who might need 16 GB or even 24 GB per VM. See also NVIDIA's release notes on vGPU for Citrix XenServer 6.0.

The Heaven benchmark is used to measure the per-user frame rate for different GPUs, resolutions, and image quality settings. This benchmark is graphics-heavy and is fairly realistic for designers and engineers. Power users or knowledge workers usually have less intense graphics workloads and can achieve higher frame rates. All tests were done by using the default H.264 compatibility display mode for the clients. This configuration offers the best compromise between data size (compression) and performance.

Table 33 lists the results of the Heaven benchmark as frames per second (FPS) available to each user with the GRID K1 adapter by using pass-through mode with DirectX 11.

Table 33: Performance of GRID K1 pass-through mode by using DirectX 11 (per-user FPS; columns: Quality, Tessellation, Anti-Aliasing, Resolution, K1; all rows measured at High quality and Normal tessellation)

Table 34 lists the results of the Heaven benchmark as FPS available to each user with the GRID K2 adapter by using pass-through mode with DirectX 11.

Table 34: Performance of GRID K2 pass-through mode by using DirectX 11 (per-user FPS; columns: Quality, Tessellation, Anti-Aliasing, Resolution, K2; rows measured at High quality with Normal tessellation and at Ultra quality with Extreme tessellation)

Table 35 lists the results of the Heaven benchmark as FPS available to each user with the GRID K1 adapter by using vGPU mode with DirectX 11. The K180Q profile has similar performance to the K1 pass-through mode.

Table 35: Performance of GRID K1 vGPU modes by using DirectX 11 (per-user FPS; columns: Quality, Tessellation, Anti-Aliasing, Resolution, K180Q, K160Q, K140Q; all rows measured at High quality and Normal tessellation; some K140Q configurations are N/A)

Table 36 lists the results of the Heaven benchmark as FPS available to each user with the GRID K2 adapter by using vGPU mode with DirectX 11. The K280Q profile has similar performance to the K2 pass-through mode.

Table 36: Performance of GRID K2 vGPU modes by using DirectX 11 (per-user FPS; columns: Quality, Tessellation, Anti-Aliasing, Resolution, K280Q, K260Q, K240Q; rows measured at High quality with Normal tessellation and at Ultra quality with Extreme tessellation; some K260Q and K240Q configurations are N/A)

The GRID K2 GPU has more than twice the performance of the GRID K1 GPU, even with the high quality, tessellation, and anti-aliasing options. This result is expected because of the relative performance characteristics of the GRID K1 and GRID K2 GPUs. The frame rate decreases as the display resolution increases.
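User density for graphics acceleration follows from how many physical GPUs a server holds and how many users share each GPU for the chosen mode or vGPU profile. The sketch below captures that arithmetic; the per-profile densities are assumptions inferred from the profile framebuffer sizes (for example, K140Q is a 1 GB profile on a 4 GB K1 GPU) rather than figures from this document, so verify them against the NVIDIA GRID documentation for the release in use.

# Users per physical GPU for the modes and vGPU profiles discussed above.
USERS_PER_GPU = {
    "passthrough": 1,
    "K180Q": 1, "K160Q": 2, "K140Q": 4,   # GRID K1 profiles (assumed densities)
    "K280Q": 1, "K260Q": 2, "K240Q": 4,   # GRID K2 profiles (assumed densities)
}
GPUS_PER_ADAPTER = {"K1": 4, "K2": 2}

def gpu_users_per_server(adapter: str, profile: str,
                         adapters_per_server: int = 2) -> int:
    # Graphics users that one server can host for a given adapter and profile.
    return GPUS_PER_ADAPTER[adapter] * adapters_per_server * USERS_PER_GPU[profile]

# Pass-through densities quoted in this section: 8 users (K1) and 4 users (K2).
print(gpu_users_per_server("K1", "passthrough"), gpu_users_per_server("K2", "passthrough"))
# A K240Q deployment on two GRID K2 adapters supports up to 16 users per server.
print(gpu_users_per_server("K2", "K240Q"))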

Because there are many variables when graphics acceleration is used, Lenovo recommends that testing is done in the customer environment to verify the performance for the required user workloads.

For more information about the bill of materials (BOM) for GRID K1 and K2 GPUs for the Lenovo System x3650 M5 and NeXtScale nx360 M5 servers, see the following corresponding BOMs:

- BOM for enterprise compute servers on page 47
- BOM for SMB compute servers on page 56
- BOM for hyper-converged compute servers

4.8 Management servers

Management servers should have the same hardware specification as compute servers so that they can be used interchangeably in a worst-case scenario. The Citrix XenDesktop management servers also use the same hypervisor, but have management VMs instead of user desktops. Table 37 lists the VM requirements and performance characteristics of each management service for Citrix XenDesktop.

Table 37: Characteristics of XenDesktop and ESXi management services

Management service        Virtual processors   System memory   Storage                               Windows OS   HA needed   Performance characteristic
Delivery controller       4                    8 GB            60 GB                                 2012 R2      Yes         5000 user connections
Web Interface             4                    4 GB            60 GB                                 2012 R2      Yes         30,000 connections per hour
Citrix licensing server   2                    4 GB            60 GB                                 2012 R2      No          170 licenses per second
XenDesktop SQL server     2                    8 GB            60 GB                                 2012 R2      Yes         5000 users
PVS servers               4                    32 GB           60 GB (depends on number of images)   2012 R2      Yes         Up to 1000 desktops; memory should be a minimum of 2 GB plus 1.5 GB per image served
vCenter server            8                    16 GB           60 GB                                 2012 R2      No          Up to 2000 desktops
vCenter SQL server        4                    8 GB            200 GB                                2012 R2      Yes         Double the virtual processors and memory for more than 2500 users

PVS servers are often run natively on Windows servers. The testing showed that they also run well inside a VM, provided that it is sized per Table 37. The disk space for PVS servers is related to the number of provisioned images.
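Table 37 gives a simple sizing rule for PVS server memory: a 2 GB base plus 1.5 GB for every vDisk image served. A minimal sketch of that rule is shown below; the 20-image example is hypothetical and simply illustrates how the tested 32 GB VM size is reached.

def pvs_server_memory_gb(images_served: int, base_gb: float = 2.0,
                         per_image_gb: float = 1.5) -> float:
    # Minimum PVS server memory: 2 GB base plus 1.5 GB per vDisk image served.
    return base_gb + per_image_gb * images_served

# A PVS server streaming 20 images needs 32 GB, matching the VM size in Table 37.
print(pvs_server_memory_gb(20))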

Table 38 lists the number of management VMs for each user count, following the high-availability and performance characteristics in Table 37. The number of vCenter servers is half the number of vCenter clusters because each vCenter server can handle two clusters of up to 1000 desktops.

Table 38: Management VMs needed

XenDesktop management service VM       600 users   1500 users   4500 users   10000 users
Delivery Controllers                   2 (1+1)     2 (1+1)      2 (1+1)      2 (1+1)
  Includes Citrix Licensing server     Y           N            N            N
  Includes Web server                  Y           N            N            N
Web Interface                          N/A         2 (1+1)      2 (1+1)      2 (1+1)
Citrix licensing servers               N/A         1            1            1
XenDesktop SQL servers                 2 (1+1)     2 (1+1)      2 (1+1)      2 (1+1)
PVS servers (stateless case only)      2 (1+1)     4 (2+2)      8 (6+2)      14 (10+4)

ESXi management service VM             600 users   1500 users   4500 users   10000 users
vCenter servers                        1           1            3            5
vCenter SQL servers                    2 (1+1)     2 (1+1)      2 (1+1)      2 (1+1)

Each management VM requires a certain amount of virtual processors, memory, and disk. There is enough capacity in the management servers for all of these VMs. Table 39 lists an example mapping of the management VMs to the four physical management servers for 4500 users.

Table 39: Management server VM mapping (4500 users) — the three vCenter servers, two vCenter database VMs, two XenDesktop database VMs, two delivery controllers, two Web Interface VMs, one license server, and eight PVS servers for stateless desktops are distributed across the four physical management servers.

It is assumed that common services, such as Microsoft Active Directory, Dynamic Host Configuration Protocol (DHCP), domain name server (DNS), and Microsoft licensing servers, exist in the customer environment.

For shared storage systems that support block data transfers only, it is also necessary to provide some file I/O servers that support CIFS or NFS shares and translate file requests to the block storage system. For high availability, two or more Windows storage servers are clustered.

Based on the number and type of desktops, Table 40 lists the recommended number of physical management servers. In all cases, there is redundancy in the physical management servers and the management VMs.

Table 40: Management servers needed (number of physical management servers and Windows Storage Servers for the stateless and dedicated desktop models at 600, 1500, 4500, and 10000 users)

For more information, see BOM for enterprise and SMB management servers.

4.9 Systems management

Lenovo XClarity Administrator is a centralized resource management solution that reduces complexity, speeds up response, and enhances the availability of Lenovo server systems and solutions. Lenovo XClarity Administrator provides agent-free hardware management for Lenovo's System x rack servers and Flex System compute nodes and components, including the Chassis Management Module (CMM) and Flex System I/O modules. Figure 8 shows the Lenovo XClarity Administrator interface, where Flex System components and rack servers are managed and can be seen on the dashboard. Lenovo XClarity Administrator is a virtual appliance that can be quickly imported into a virtualized environment.

Figure 8: XClarity Administrator interface

4.10 Shared storage

VDI workloads, such as virtual desktop provisioning, VM loading across the network, and access to user profiles and data files, place huge demands on network shared storage. Experimentation with VDI infrastructures shows that input/output operations per second (IOPS) performance takes precedence over storage capacity. This precedence means that more slower-speed drives are needed to match the performance of fewer higher-speed drives. Even with the fastest HDDs available today (15k rpm), there can still be excess capacity in the storage system because extra spindles are needed to provide the IOPS performance. From experience, this extra storage is more than sufficient for the other types of data, such as SQL databases and transaction logs.

The large rate of IOPS, and therefore the large number of drives needed for dedicated virtual desktops, can be ameliorated to some extent by caching data in flash memory or SSDs. The storage configurations are based on the peak performance requirement, which usually occurs during the so-called logon storm. This is when all workers at a company arrive in the morning and try to start their virtual desktops at the same time.

It is always recommended that user data files (shared folders) and user profile data are stored separately from the user image. By default, this has to be done for stateless virtual desktops and should also be done for dedicated virtual desktops. It is assumed that 100% of the users at peak load times require concurrent access to user data and profiles.

Stateless virtual desktops are provisioned from shared storage by using PVS. The PVS write cache is maintained on a local SSD. Table 41 summarizes the peak IOPS and disk space requirements for stateless virtual desktops on a per-user basis.

Table 41: Stateless virtual desktop shared storage performance requirements

Stateless virtual desktops            Protocol       Size     IOPS   Write %
vSwap (recommended to be disabled)    NFS or Block
User data files                       CIFS/NFS       5 GB     1      75%
User profile (through MSRP)           CIFS           100 MB
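The per-user figures in Table 41 (and in Table 42 below for dedicated desktops) are multiplied by the number of concurrent users to size the shared storage. A minimal sketch of that aggregation is shown below; it uses the stateless values from Table 41, and the user-profile IOPS value is an illustrative assumption because that figure is environment dependent.

def shared_storage_totals(users: int, per_user: dict) -> dict:
    # Aggregate per-user capacity (GB) and IOPS for the peak load, assuming that
    # 100% of users are active concurrently, as this section does.
    return {"capacity_gb": sum(item["size_gb"] for item in per_user.values()) * users,
            "iops": sum(item["iops"] for item in per_user.values()) * users}

# Stateless per-user requirements (Table 41); the profile IOPS is an assumption.
STATELESS_PER_USER = {
    "user_data_files": {"size_gb": 5.0, "iops": 1.0},
    "user_profile":    {"size_gb": 0.1, "iops": 0.8},
}

# 1500 stateless users -> roughly 7.65 TB of file capacity and 2700 IOPS.
print(shared_storage_totals(1500, STATELESS_PER_USER))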

Table 42 summarizes the peak IOPS and disk space requirements for dedicated or shared stateless virtual desktops on a per-user basis. Persistent virtual desktops require a high number of IOPS and a large amount of disk space. Stateless users that require mobility and have no local SSDs also fall into this category. The last three rows of Table 42 are the same as Table 41 for stateless desktops.

Table 42: Dedicated or shared stateless virtual desktop shared storage performance requirements

Dedicated virtual desktops                 Protocol       Size     IOPS   Write %
Master image                               NFS or Block   30 GB
Difference disks and user AppData folder   NFS or Block   10 GB    18     85%
vSwap (recommended to be disabled)         NFS or Block
User data files                            CIFS/NFS       5 GB     1      75%
User profile (through MSRP)                CIFS           100 MB

The sizes and IOPS for user data files and user profiles that are listed in Table 41 and Table 42 can vary depending on the customer environment. For example, power users might require 10 GB and five IOPS for user files because of the applications they use. It is assumed that 100% of the users at peak load times require concurrent access to user data files and profiles.

Many customers need a hybrid environment of stateless and dedicated desktops for their users. The IOPS for dedicated users outweigh those for stateless users; therefore, it is best to bias towards dedicated users in any storage controller configuration. The storage configurations that are presented in this section include conservative assumptions about the VM size, changes to the VM, and user data sizes to ensure that the configurations can cope with the most demanding user scenarios.

This reference architecture describes the following shared storage solutions:

- Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Fibre Channel (FC)
- Block I/O to IBM Storwize V7000 / Storwize V3700 storage using FC over Ethernet (FCoE)
- Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Internet Small Computer System Interface (iSCSI)
- Block I/O to IBM FlashSystem 840 with Atlantis USX storage acceleration

4.10.1 IBM Storwize V7000 and IBM Storwize V3700 storage

The IBM Storwize V7000 generation 2 storage system supports up to 504 drives by using up to 20 expansion enclosures. Up to four controller enclosures can be clustered for a maximum of 1056 drives (44 expansion enclosures). The Storwize V7000 generation 2 storage system also has a 64 GB cache, which is expandable to 128 GB.

The IBM Storwize V3700 storage system is somewhat similar to the Storwize V7000 storage, but is restricted to a maximum of five expansion enclosures for a total of 120 drives. The maximum size of the cache for the Storwize V3700 is 8 GB.

The Storwize cache acts as a read cache and a write-through cache and is useful to cache commonly used data for VDI workloads. The read and write caches are managed separately. The write cache is divided up across the storage pools that are defined for the Storwize storage system.

In addition, Storwize storage offers the IBM Easy Tier function, which allows commonly used data blocks to be transparently stored on SSDs. There is a noticeable improvement in performance when Easy Tier is used, which tends to tail off as SSDs are added. It is recommended that approximately 10% of the storage space is used for SSDs to give the best balance between price and performance.

The tiered storage support of Storwize storage also allows a mixture of different disk drives. Slower drives can be used for shared folders and profiles; faster drives and SSDs can be used for persistent virtual desktops and desktop images. To support file I/O (CIFS and NFS) into Storwize storage, Windows storage servers must be added, as described in Management servers on page 30.

The fastest HDDs that are available for Storwize storage are 15k rpm drives in a RAID 10 array. Storage performance can be significantly improved with the use of Easy Tier. If this performance is insufficient, SSDs or alternatives (such as a flash storage system) are required.

For this reference architecture, it is assumed that each user has 5 GB for shared folders and profile data and uses an average of 2 IOPS to access those files. Investigation into the performance shows that 600 GB 10k rpm drives in a RAID 10 array give the best ratio of input/output performance to disk space. If users need more than 5 GB for shared folders and profile data, then 900 GB (or even 1.2 TB) 10k rpm drives can be used instead of 600 GB. If less capacity is needed, 300 GB 15k rpm drives can be used for shared folders and profile data.

Persistent virtual desktops require both a high number of IOPS and a large amount of disk space for the linked clones. The linked clones can also grow in size over time. For persistent desktops, 300 GB 15k rpm drives configured as RAID 10 were not sufficient and extra drives were required to achieve the necessary performance. Therefore, it is recommended to use a mixture of both speeds of drives for persistent desktops and for shared folders and profile data.

Depending on the number of master images, one or more RAID 1 arrays of SSDs can be used to store the VM master images. This configuration helps with the performance of provisioning virtual desktops; that is, a boot storm. Each master image requires at least double its space. The actual number of SSDs in the array depends on the number and size of images. In general, more users require more images.

Table 43: VM images and SSDs

                                600 users    1500 users   4500 users              10000 users
Image size                      30 GB        30 GB        30 GB                   30 GB
Number of master images         2            4            8                       16
Required disk space (doubled)   120 GB       240 GB       480 GB                  960 GB
400 GB SSD configuration        RAID 1 (2)   RAID 1 (2)   Two RAID 1 arrays (4)   Four RAID 1 arrays (8)
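The SSD sizing in Table 43 follows directly from the rule that each master image needs at least double its size on disk, with the images placed on RAID 1 pairs of 400 GB SSDs. A minimal sketch of that calculation is shown below, under the assumption that each RAID 1 pair provides the usable capacity of a single SSD.

import math

def master_image_ssd_arrays(num_images: int, image_size_gb: int = 30,
                            ssd_size_gb: int = 400) -> dict:
    # Each image needs at least double its size; a RAID 1 pair of SSDs provides
    # the capacity of one SSD.
    required_gb = num_images * image_size_gb * 2
    raid1_pairs = math.ceil(required_gb / ssd_size_gb)
    return {"required_gb": required_gb, "raid1_pairs": raid1_pairs,
            "ssds": raid1_pairs * 2}

# Eight master images (the 4500-user column of Table 43) -> 480 GB and two
# RAID 1 arrays (4 SSDs), matching the table.
print(master_image_ssd_arrays(8))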

Table 44 lists the Storwize storage configuration that is needed for each of the stateless user counts. Only one Storwize control enclosure is needed for this range of user counts. Based on the assumptions in Table 44, the IBM Storwize V3700 storage system can support up to 7000 users only.

Table 44: Storwize storage configuration for stateless users (number of 400 GB SSDs in RAID 1 for master images, hot-spare SSDs, 600 GB 10k rpm drives in RAID 10 for user data, hot-spare 600 GB drives, and Storwize control and expansion enclosures at 600, 1500, 4500, and 10000 users)

Table 45 lists the Storwize storage configuration that is needed for each of the dedicated or shared stateless user counts. The top four rows of Table 45 are the same as for stateless desktops. Lenovo recommends clustering the IBM Storwize V7000 storage system and using a separate control enclosure for every 2500 or so dedicated virtual desktops. For the 4500 and 10000 user solutions, the drives are divided equally across all of the controllers. Based on the assumptions in Table 45, the IBM Storwize V3700 storage system can support up to 1200 users.

Table 45: Storwize storage configuration for dedicated or shared stateless users (adds 300 GB 15k rpm drives in RAID 10 for persistent desktops, hot-spare 300 GB 15k rpm drives, and 400 GB SSDs for Easy Tier to the stateless configuration; the 4500-user configuration uses 16 expansion enclosures (2 x 8) and the 10000-user configuration uses 36 expansion enclosures (4 x 9))

Refer to the BOM for shared storage on page 64 for more details.

4.10.2 IBM FlashSystem 840 with Atlantis USX storage acceleration

The IBM FlashSystem 840 storage has low latencies and supports high IOPS. It can be used for VDI solutions; however, on its own it is not cost effective. If the Atlantis simple hybrid VM is used to provide storage optimization (capacity reduction and IOPS reduction), it becomes a much more cost-effective solution.

Each FlashSystem 840 storage device supports up to 20 TB or 40 TB of storage, depending on the size of the flash modules. To maintain the integrity and redundancy of the storage, it is recommended to use RAID 5. It is not recommended to use this device for small user counts because it is not cost-efficient. Persistent virtual desktops require the most storage space and are the best candidate for this storage device. The device also can be used for user folders, snap clones, and image management, although these items can be placed on other, slower shared storage.

The amount of required storage for persistent virtual desktops varies and depends on the environment. Table 46 is provided for guidance purposes only.

Table 46: FlashSystem 840 storage configuration for dedicated users with Atlantis USX (number of FlashSystem 840 enclosures and 2 TB or 4 TB flash modules for each user count; capacity is 4 TB for 1000 users, 12 TB for 3000 users, 20 TB for 5000 users, and 40 TB for 10000 users)

Refer to the BOM for OEM storage hardware on page 68 for more details.

4.11 Networking

The main driver for the type of networking that is needed for VDI is the connection to shared storage. If the shared storage is block-based (such as the IBM Storwize V7000), it is likely that a SAN based on 8 or 16 Gbps FC, 10 GbE FCoE, or 10 GbE iSCSI connections is needed. Other types of storage can be network attached by using 1 Gb or 10 Gb Ethernet. Also, user and management virtual local area networks (VLANs) are required, which use 1 Gb or 10 Gb Ethernet, as described in the Lenovo Client Virtualization base reference architecture, which is available at this website: lenovopress.com/tips1275

Automated failover and redundancy of the entire network infrastructure and shared storage is important. This failover and redundancy is achieved by having at least two of everything and ensuring that there are dual paths between the compute servers, management servers, and shared storage. If only a single Flex System Enterprise Chassis is used, the chassis switches are sufficient and no other TOR switch is needed. For rack servers or for more than one Flex System Enterprise Chassis, TOR switches are required.

For more information, see BOM for networking.

4.11.1 10 GbE networking

For 10 GbE networking and the use of CIFS, NFS, or iSCSI, the Lenovo RackSwitch G8124E and G8264R TOR switches are recommended because they support VLANs by using Virtual Fabric. Redundancy and automated failover are available by using link aggregation, such as Link Aggregation Control Protocol (LACP), and two of everything. For the Flex System chassis, pairs of the EN4093R switch should be used and connected to a G8124 or G8264 TOR switch.

The TOR 10 GbE switches are needed for multiple Flex System chassis or for external connectivity. iSCSI also requires converged network adapters (CNAs) that have the LOM extension. Table 47 lists the TOR 10 GbE network switches for each user size.

Table 47: TOR 10 GbE network switches needed (number of G8124E 24-port and G8264R 64-port switches at 600, 1500, 4500, and 10000 users)

4.11.2 10 GbE FCoE networking

FCoE on a 10 GbE network requires converged networking switches, such as pairs of the CN4093 switch for the Flex System chassis and the G8264CS TOR converged switch. The TOR converged switches are needed for multiple Flex System chassis or for clustering of multiple IBM Storwize V7000 storage systems. FCoE also requires CNAs that have the LOM extension. Table 48 summarizes the TOR converged network switches for each user size.

Table 48: TOR 10 GbE converged network switches needed (number of G8264CS 64-port switches, each including up to 12 Fibre Channel ports, at each user count)

4.11.3 Fibre Channel networking

Fibre Channel (FC) networking for shared storage requires SAN switches. Pairs of the FC3171 8 Gbps or FC5022 16 Gbps FC SAN switches are needed for the Flex System chassis. For top of rack, the Lenovo 3873 FC SAN switches should be used. The TOR SAN switches are needed for multiple Flex System chassis or for clustering of multiple IBM Storwize V7000 storage systems. Table 49 lists the TOR FC SAN network switches for each user size.

Table 49: TOR FC network switches needed

Fibre Channel TOR network switch    600 users   1500 users   4500 users   10000 users
Lenovo 3873 AR2 24-port switch      2           2
Lenovo 3873 BR1 48-port switch                               2            2

4.11.4 1 GbE administration networking

A 1 GbE network should be used to administer all of the other devices in the system. Separate 1 GbE switches are used for the IT administration network. Lenovo recommends that redundancy is also built into this network at the switch level. At minimum, a second switch should be available in case a switch goes down. Table 50 lists the number of 1 GbE switches for each user size.

Table 50: TOR 1 GbE network switches needed (number of G8052 48-port switches at each user count)

Table 51 shows the number of 1 GbE connections that are needed for the administration network for each type of device. The total number of connections is the sum, over all device types, of the device count multiplied by the number of connections that each device needs.

Table 51: 1 GbE connections needed

Device                                     Number of 1 GbE connections for administration
System x rack server                       1
Flex System Enterprise Chassis CMM         2
Flex System Enterprise Chassis switches    1 per switch (optional)
IBM Storwize V7000 storage controller      4
TOR switches                               1
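A minimal sketch of that arithmetic is shown below, using the per-device connection counts from Table 51; the device inventory in the example is hypothetical.

# 1 GbE administration connections per device type (Table 51).
CONNECTIONS_PER_DEVICE = {
    "System x rack server": 1,
    "Flex System chassis CMM": 2,
    "Flex System chassis switch": 1,   # optional
    "Storwize V7000 controller": 4,
    "TOR switch": 1,
}

def admin_connections(inventory: dict) -> int:
    # Total 1 GbE administration ports: device count multiplied by the
    # connections that each device needs, summed over all device types.
    return sum(count * CONNECTIONS_PER_DEVICE[device]
               for device, count in inventory.items())

# Hypothetical inventory: 24 rack servers, 1 Storwize V7000 controller, 4 TOR switches.
print(admin_connections({"System x rack server": 24,
                         "Storwize V7000 controller": 1,
                         "TOR switch": 4}))   # 32 ports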

4.12 Racks

The number of racks and chassis for Flex System compute nodes depends on the precise configuration that is supported and the total height of all of the component parts: servers, storage, networking switches, and Flex System Enterprise Chassis (if applicable). The number of racks for System x servers also depends on the total height of all of the components. For more information, see the BOM for racks section.

5 Deployment models

This section describes the following examples of different deployment models:

- Flex Solution with single Flex System chassis
- Flex System with 4500 stateless users
- System x server with Storwize V7000 and FCoE
- Hyper-converged using System x servers
- Graphics acceleration for 120 CAD users

Deployment example 1: Flex Solution with single Flex System chassis

As shown in Table 52, this example is for 1250 stateless users that use a single Flex System chassis. There are 10 compute nodes, each supporting 125 users in normal mode and 156 users in the failover case of up to two nodes not being available. The IBM Storwize V7000 storage is connected by using FC directly to the Flex System chassis.

Table 52: Deployment configuration for 1250 stateless users with Flex System x240 compute nodes

Stateless virtual desktop               1250 users
x240 compute servers                    10
x240 management servers                 2
x240 Windows storage servers (WSS)      2
Total 300 GB 15k rpm drives             40
Hot spare 300 GB 15k rpm drives         2
Total 400 GB SSDs                       6
Storwize V7000 controller enclosures    1
Storwize V7000 expansion enclosures     1
Flex System EN4093R switches            2
Flex System FC5022 switches             2
Flex System Enterprise Chassis          1
Total height                            14U
Number of Flex System racks             1

Deployment example 2: Flex System with 4500 stateless users

As shown in Table 53, this example is for 4500 stateless users who use Flex System chassis, with each of the 36 compute nodes supporting 125 users in normal mode and 150 users in the failover case.

Table 53: Deployment configuration for 4500 stateless users

Stateless virtual desktop           4500 users
Compute servers                     36
Management servers                  4
Storwize V7000 controller           1
Storwize V7000 expansion            3
Flex System EN4093R switches        6
Flex System FC3171 switches         6
Flex System Enterprise Chassis      3
10 GbE network switches             2 x G8264R
1 GbE network switches              2 x G8052
SAN network switches                2 x SAN24B-5
Total height                        44U
Number of racks                     2

Figure 9 shows the deployment diagram for this configuration. The first rack contains the compute and management servers and the second rack contains the shared storage.

In Figure 9, the four management servers (M1 to M4) host the management VMs: vCenter servers, vCenter SQL servers, XenDesktop SQL servers, delivery controllers, Web servers, a license server, and pairs of PVS servers. Each compute server hosts 125 user VMs, and the TOR switches provide the 10 GbE, 1 GbE, and SAN connectivity.

Figure 9: Deployment diagram for 4500 stateless users using Storwize V7000 shared storage

Figure 10 shows the 10 GbE and Fibre Channel networking that is required to connect the three Flex System Enterprise Chassis to the Storwize V7000 shared storage. The detail is shown for one chassis in the middle and abbreviated for the other two chassis. The 1 GbE management infrastructure network is not shown for clarity. Redundant 10 GbE networking is provided at the chassis level with two EN4093R switches and at the rack level by using two G8264R TOR switches. Redundant SAN networking is also used, with two FC3171 switches and two top-of-rack SAN24B-5 switches. The two controllers in the Storwize V7000 are redundantly connected to each of the SAN24B-5 switches.

Figure 10: Network diagram for 4500 stateless users using Storwize V7000 shared storage

Deployment example 3: System x server with Storwize V7000 and FCoE

This deployment example is derived from an actual customer deployment with 3000 users, 90% of which are stateless and need a 2 GB VM. The remaining 10% (300 users) need a dedicated VM of 3 GB. Therefore, the average VM size is 2.1 GB. Assuming 125 users per server in the normal case and 150 users in the failover case, 3000 users need 24 compute servers. A maximum of four compute servers can be down for a 5:1 failover ratio.

Each compute server needs at least 315 GB of RAM (150 x 2.1 GB), not including the hypervisor. This figure is rounded up to 384 GB, which should be more than enough and can cope with up to 125 users, all with 3 GB VMs. Each compute server is a System x3550 server with two Xeon E5-2650 v2 series processors, 384 GB of 1866 MHz RAM, an embedded dual-port 10 GbE virtual fabric adapter (A4MC), and a license for FCoE/iSCSI (A2TE). For interchangeability between the servers, all of them have a RAID controller with the 1 GB flash upgrade and two S3700 MLC enterprise SSDs that are configured as RAID 0 for the stateless VMs.

In addition, there are three management servers. For interchangeability in case of server failure, these extra servers are configured in the same way as the compute servers. All of the servers have a USB key with ESXi. There also are two Windows storage servers that are configured differently, with HDDs in a RAID 1 array for the operating system. Some spare, preloaded drives are kept so that a replacement Windows storage server can be deployed quickly if one should fail. The replacement server can be one of the compute servers. The idea is to quickly get a replacement online if the second one fails. Although there is a low likelihood of this situation occurring, it reduces the window of failure for the two critical Windows storage servers.

All of the servers communicate with the Storwize V7000 shared storage by using FCoE through two TOR RackSwitch G8264CS 10 GbE converged switches. All 10 GbE and FC connections are configured to be fully redundant. As an alternative, iSCSI with G8264 10 GbE switches can be used.

For 300 persistent users and 2700 stateless users, a mixture of disk configurations is needed. All of the users require space for user folders and profile data. Stateless users need space for master images, and persistent users need space for the virtual clones. Stateless users have local SSDs to cache everything else, which substantially decreases the amount of shared storage. For stateless servers with SSDs, a server must be taken offline and have maintenance performed on it only after all of the users are logged off, rather than being able to use vMotion. If a server crashes, this issue is immaterial.

It is estimated that this configuration requires the following IBM Storwize V7000 drives:

- Two 400 GB SSDs in RAID 1 for master images
- Thirty 300 GB 15k rpm drives in RAID 10 for persistent images
- Four 400 GB SSDs for Easy Tier for persistent images
- Sixty 600 GB 10k rpm drives in RAID 10 for user folders

This configuration requires 96 drives, which fit into one Storwize V7000 control enclosure and three expansion enclosures. Figure 11 shows the deployment configuration for this example in a single rack. Because the rack has 36 items, it should have the capability for six power distribution units for 1+1 power redundancy, where each PDU has 12 C13 sockets.

Figure 11: Deployment configuration for 3000 stateless users with System x servers
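The sizing of deployment example 3 follows a simple pattern: compute the blended VM size across the stateless and dedicated users, derive the server count from the normal-mode density, and size the memory for the failover density before rounding up to a standard DIMM configuration. A minimal sketch of that arithmetic is shown below; the 125 and 150 users-per-server densities and the VM sizes are the assumptions stated in the example.

import math

def example3_sizing(stateless_users: int, dedicated_users: int,
                    stateless_vm_gb: float = 2.0, dedicated_vm_gb: float = 3.0,
                    users_normal: int = 125, users_failover: int = 150) -> dict:
    total_users = stateless_users + dedicated_users
    # Blended (average) VM memory size across both desktop types.
    avg_vm_gb = (stateless_users * stateless_vm_gb +
                 dedicated_users * dedicated_vm_gb) / total_users
    servers = math.ceil(total_users / users_normal)
    # Per-server memory must cover the failover density, excluding the hypervisor.
    min_ram_gb = users_failover * avg_vm_gb
    return {"avg_vm_gb": round(avg_vm_gb, 2), "compute_servers": servers,
            "min_ram_gb": round(min_ram_gb, 1)}

# 2700 stateless plus 300 dedicated users -> 2.1 GB average VM, 24 compute
# servers, and 315 GB minimum RAM (rounded up to 384 GB in the example).
print(example3_sizing(2700, 300))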

Deployment example 4: Hyper-converged using System x servers

This deployment example supports 750 dedicated virtual desktops with VMware VSAN in hybrid mode (SSDs and HDDs). Each VM needs 3 GB of RAM. Assuming 125 users per server, six servers are needed, with a minimum of five servers in case of failures (a 5:1 ratio). Each virtual desktop is 40 GB. The total required capacity is 86 TB, assuming full clones (1500 VM copies with one complete mirror of the data, FTT=1), space for swapping, and 30% extra capacity. However, the use of linked clones for VSAN means that a substantially smaller capacity is required, and 43 TB is more than sufficient.

This example uses six System x3550 M5 1U servers and two RackSwitch G8124E TOR switches. Each server has two E5-2680 v3 CPUs, 512 GB of memory, two network cards, two 400 GB SSDs, and six 1.2 TB HDDs. Two disk groups of 1 SSD and 3 HDDs are used for VSAN. Figure 12 shows the deployment configuration for this example.

Figure 12: Deployment configuration for 750 dedicated users using hyper-converged System x servers
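The capacity estimate for deployment example 4 can be reproduced with a short calculation: each full-clone VM needs its virtual disk plus swap space, every byte is stored twice for FTT=1, and 30% slack is added. The sketch below is an approximation under those assumptions; the exact figure depends on how swap and metadata overheads are counted, so it lands slightly below the 86 TB quoted above.

def vsan_capacity_tb(vm_count: int, vm_disk_gb: int, vm_swap_gb: int,
                     ftt: int = 1, slack: float = 0.30) -> float:
    # Raw capacity for full clones: (disk + swap) per VM, multiplied by the
    # number of data copies (FTT + 1) and a slack factor.
    per_vm_gb = vm_disk_gb + vm_swap_gb
    total_gb = vm_count * per_vm_gb * (ftt + 1) * (1 + slack)
    return total_gb / 1024

# 750 desktops of 40 GB with 3 GB swap, FTT=1, and 30% slack -> roughly 82 TB.
print(round(vsan_capacity_tb(750, 40, 3), 1))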

Deployment example 5: Graphics acceleration for 120 CAD users

This example is for medium-level computer-aided design (CAD) users. Each user requires a persistent virtual desktop with 12 GB of memory and 80 GB of disk space. Two CAD users share a K2 GPU by using virtual GPU support. Therefore, a maximum of eight users can use a single NeXtScale nx360 M5 server with two GRID K2 graphics adapters. Each server needs 128 GB of system memory that uses eight 16 GB DDR4 DIMMs.

For 120 CAD users, 15 servers are required, and three extra servers are used for failover in case of a hardware problem. In total, three NeXtScale n1200 chassis are needed for these 18 servers in 18U of rack space.

Storwize V7000 is used for shared storage for the virtual desktops and for the CAD data store. This example uses a total of six Storwize enclosures, each with 22 HDDs and four SSDs. Each HDD is a 600 GB 15k rpm drive and each SSD is 400 GB. Assuming RAID 10 for the HDDs, the six enclosures can store approximately 36 TB of data. The 24 SSDs provide 9.6 TB, which is more than the recommended 10% of the data capacity, and are used for the Easy Tier function to improve read and write performance. The 120 VMs for the CAD users can use as much as 10 TB, which still leaves a reasonable 26 TB of storage for CAD data.

In this example, it is assumed that two redundant x3550 M5 servers are used to manage the CAD data store; that is, all data accesses from the CAD virtual desktops are routed through these servers, and data is downloaded from and uploaded to the data store by the virtual desktops. The total rack space for the compute servers, storage, and TOR switches for the 1 GbE management network, 10 GbE user network, and 8 Gbps FC SAN is 39U, as shown in Figure 13.

Figure 13: Deployment configuration for 120 CAD users using NeXtScale servers with GPUs
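The storage split in deployment example 5 is a similar back-of-the-envelope calculation: the CAD desktops consume a fixed slice of the usable Storwize capacity and the remainder is left for the CAD data store. The sketch below takes the approximately 36 TB of usable RAID 10 capacity quoted above as an input assumption.

def cad_storage_split(usable_tb: float = 36.0, users: int = 120,
                      desktop_gb: int = 80) -> dict:
    # Capacity consumed by the persistent CAD desktops and the remainder that
    # is available for the CAD data store.
    desktop_tb = users * desktop_gb / 1024
    return {"desktop_tb": round(desktop_tb, 1),
            "cad_data_tb": round(usable_tb - desktop_tb, 1)}

# 120 desktops of 80 GB consume just under 10 TB of the usable capacity.
print(cad_storage_split())   # {'desktop_tb': 9.4, 'cad_data_tb': 26.6}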


RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server

More information

Windows Server 2012 R2 VDI - Virtual Desktop Infrastructure. Ori Husyt Agile IT Consulting Team Manager orih@agileit.co.il

Windows Server 2012 R2 VDI - Virtual Desktop Infrastructure. Ori Husyt Agile IT Consulting Team Manager orih@agileit.co.il Windows Server 2012 R2 VDI - Virtual Desktop Infrastructure Ori Husyt Agile IT Consulting Team Manager orih@agileit.co.il Today s challenges Users Devices Apps Data Users expect to be able to work in any

More information

Provisioning Server High Availability Considerations

Provisioning Server High Availability Considerations Citrix Provisioning Server Design Considerations Citrix Consulting Provisioning Server High Availability Considerations Overview The purpose of this document is to give the target audience an overview

More information

VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam

VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam Exam : VCP5-DCV Title : VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam Version : DEMO 1 / 9 1.Click the Exhibit button. An administrator has deployed a new virtual machine on

More information

Nimble Storage for VMware View VDI

Nimble Storage for VMware View VDI BEST PRACTICES GUIDE Nimble Storage for VMware View VDI N I M B L E B E S T P R A C T I C E S G U I D E : N I M B L E S T O R A G E F O R V M W A R E V I E W V D I 1 Overview Virtualization is an important

More information

GRIDCENTRIC VMS TECHNOLOGY VDI PERFORMANCE STUDY

GRIDCENTRIC VMS TECHNOLOGY VDI PERFORMANCE STUDY GRIDCENTRIC VMS TECHNOLOGY VDI PERFORMANCE STUDY TECHNICAL WHITE PAPER MAY 1 ST, 2012 GRIDCENTRIC S VIRTUAL MEMORY STREAMING (VMS) TECHNOLOGY SIGNIFICANTLY IMPROVES THE COST OF THE CLASSIC VIRTUAL MACHINE

More information

Integration Guide: Using Unidesk 3.x with Citrix XenDesktop

Integration Guide: Using Unidesk 3.x with Citrix XenDesktop TECHNICAL WHITE PAPER Integration Guide: Using Unidesk 3.x with Citrix XenDesktop This document provides a high- level overview of the Unidesk product as well as design considerations for deploying Unidesk

More information

Cloud Optimize Your IT

Cloud Optimize Your IT Cloud Optimize Your IT Windows Server 2012 The information contained in this presentation relates to a pre-release product which may be substantially modified before it is commercially released. This pre-release

More information

Microsoft SMB File Sharing Best Practices Guide

Microsoft SMB File Sharing Best Practices Guide Technical White Paper Microsoft SMB File Sharing Best Practices Guide Tintri VMstore, Microsoft SMB 3.0 Protocol, and VMware 6.x Author: Neil Glick Version 1.0 06/15/2016 @tintri www.tintri.com Contents

More information

Server and Storage Sizing Guide for Windows 7 TECHNICAL NOTES

Server and Storage Sizing Guide for Windows 7 TECHNICAL NOTES Server and Storage Sizing Guide for Windows 7 TECHNICAL NOTES Table of Contents About this Document.... 3 Introduction... 4 Baseline Existing Desktop Environment... 4 Estimate VDI Hardware Needed.... 5

More information

Technology Insight Series

Technology Insight Series Evaluating Storage Technologies for Virtual Server Environments Russ Fellows June, 2010 Technology Insight Series Evaluator Group Copyright 2010 Evaluator Group, Inc. All rights reserved Executive Summary

More information

Vertex Virtual Desktops

Vertex Virtual Desktops Vertex Virtual Desktops Whitepaper Revision 3.1 Updated 9-1-2012 Tangent, Inc. 2009. V.2 To the reader, I m Ron Perkes and I run the engineering group at Tangent responsible for our virtual desktop solutions.

More information

IBM FlashSystem and Atlantis ILIO

IBM FlashSystem and Atlantis ILIO IBM FlashSystem and Atlantis ILIO Cost-effective, high performance, and scalable VDI Highlights Lower-than-PC cost Better-than-PC user experience Lower project risks Fast provisioning and better management

More information

White paper Fujitsu Virtual Desktop Infrastructure (VDI) using DX200F AFA with VMware in a Full Clone Configuration

White paper Fujitsu Virtual Desktop Infrastructure (VDI) using DX200F AFA with VMware in a Full Clone Configuration White paper Fujitsu Virtual Desktop Infrastructure (VDI) using DX200F AFA with VMware in a Full Clone Configuration This flexible and scalable VDI VMware solution combines all aspects of a virtual desktop

More information

A virtual SAN for distributed multi-site environments

A virtual SAN for distributed multi-site environments Data sheet A virtual SAN for distributed multi-site environments What is StorMagic SvSAN? StorMagic SvSAN is a software storage solution that enables enterprises to eliminate downtime of business critical

More information

Increasing performance and lowering the cost of storage for VDI With Virsto, Citrix, and Microsoft

Increasing performance and lowering the cost of storage for VDI With Virsto, Citrix, and Microsoft Increasing performance and lowering the cost of storage for VDI With Virsto, Citrix, and Microsoft 2010 Virsto www.virsto.com Virsto: Improving VDI with Citrix and Microsoft Virsto Software, developer

More information

Desktop Virtualization. The back-end

Desktop Virtualization. The back-end Desktop Virtualization The back-end Will desktop virtualization really fit every user? Cost? Scalability? User Experience? Beyond VDI with FlexCast Mobile users Guest workers Office workers Remote workers

More information

Balancing CPU, Storage

Balancing CPU, Storage TechTarget Data Center Media E-Guide Server Virtualization: Balancing CPU, Storage and Networking Demands Virtualization initiatives often become a balancing act for data center administrators, who are

More information

CITRIX 1Y0-A16 EXAM QUESTIONS & ANSWERS

CITRIX 1Y0-A16 EXAM QUESTIONS & ANSWERS CITRIX 1Y0-A16 EXAM QUESTIONS & ANSWERS Number: 1Y0-A16 Passing Score: 550 Time Limit: 165 min File Version: 37.5 http://www.gratisexam.com/ CITRIX 1Y0-A16 EXAM QUESTIONS & ANSWERS Exam Name: Architecting

More information

IMPLEMENTING VIRTUALIZED AND CLOUD INFRASTRUCTURES NOT AS EASY AS IT SHOULD BE

IMPLEMENTING VIRTUALIZED AND CLOUD INFRASTRUCTURES NOT AS EASY AS IT SHOULD BE EMC AND BROCADE - PROVEN, HIGH PERFORMANCE SOLUTIONS FOR YOUR BUSINESS TO ACCELERATE YOUR JOURNEY TO THE CLOUD Understand How EMC VSPEX with Brocade Can Help You Transform IT IMPLEMENTING VIRTUALIZED AND

More information

DVS Enterprise. Reference Architecture. VMware Horizon View Reference

DVS Enterprise. Reference Architecture. VMware Horizon View Reference DVS Enterprise Reference Architecture VMware Horizon View Reference THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED

More information

Windows Server 2012 授 權 說 明

Windows Server 2012 授 權 說 明 Windows Server 2012 授 權 說 明 PROCESSOR + CAL HA 功 能 相 同 的 記 憶 體 及 處 理 器 容 量 虛 擬 化 Windows Server 2008 R2 Datacenter Price: NTD173,720 (2 CPU) Packaging All features Unlimited virtual instances Per processor

More information

WHITE PAPER The Storage Holy Grail: Decoupling Performance from Capacity

WHITE PAPER The Storage Holy Grail: Decoupling Performance from Capacity WHITE PAPER The Storage Holy Grail: Decoupling Performance from Capacity Technical White Paper 1 The Role of a Flash Hypervisor in Today s Virtual Data Center Virtualization has been the biggest trend

More information

Virtual server management: Top tips on managing storage in virtual server environments

Virtual server management: Top tips on managing storage in virtual server environments Tutorial Virtual server management: Top tips on managing storage in virtual server environments Sponsored By: Top five tips for managing storage in a virtual server environment By Eric Siebert, Contributor

More information

StarWind Virtual SAN for Microsoft SOFS

StarWind Virtual SAN for Microsoft SOFS StarWind Virtual SAN for Microsoft SOFS Cutting down SMB and ROBO virtualization cost by using less hardware with Microsoft Scale-Out File Server (SOFS) By Greg Schulz Founder and Senior Advisory Analyst

More information

Consulting Solutions WHITE PAPER Citrix XenDesktop Citrix Personal vdisk Technology Planning Guide

Consulting Solutions WHITE PAPER Citrix XenDesktop Citrix Personal vdisk Technology Planning Guide Consulting Solutions WHITE PAPER Citrix XenDesktop Citrix Personal vdisk Technology Planning Guide www.citrix.com Overview XenDesktop offers IT administrators many options in order to implement virtual

More information

Lab Validations: Optimizing Storage for XenDesktop with XenServer IntelliCache Reducing IO to Reduce Storage Costs

Lab Validations: Optimizing Storage for XenDesktop with XenServer IntelliCache Reducing IO to Reduce Storage Costs Lab Validations: Optimizing Storage for XenDesktop with XenServer IntelliCache Reducing IO to Reduce Storage Costs www.citrix.com Table of Contents 1. Introduction... 2 2. Executive Summary... 3 3. Reducing

More information

Frequently Asked Questions: EMC UnityVSA

Frequently Asked Questions: EMC UnityVSA Frequently Asked Questions: EMC UnityVSA 302-002-570 REV 01 Version 4.0 Overview... 3 What is UnityVSA?... 3 What are the specifications for UnityVSA?... 3 How do UnityVSA specifications compare to the

More information

SanDisk SSD Boot Storm Testing for Virtual Desktop Infrastructure (VDI)

SanDisk SSD Boot Storm Testing for Virtual Desktop Infrastructure (VDI) WHITE PAPER SanDisk SSD Boot Storm Testing for Virtual Desktop Infrastructure (VDI) August 214 951 SanDisk Drive, Milpitas, CA 9535 214 SanDIsk Corporation. All rights reserved www.sandisk.com 2 Table

More information

VMware Software-Defined Storage & Virtual SAN 5.5.1

VMware Software-Defined Storage & Virtual SAN 5.5.1 VMware Software-Defined Storage & Virtual SAN 5.5.1 Peter Keilty Sr. Systems Engineer Software Defined Storage pkeilty@vmware.com @keiltypeter Grant Challenger Area Sales Manager East Software Defined

More information

Citrix XenDesktop 7.1 on Microsoft Hyper-V Server 2012 R2 on Nutanix Virtual Computing Platform. Solution Design

Citrix XenDesktop 7.1 on Microsoft Hyper-V Server 2012 R2 on Nutanix Virtual Computing Platform. Solution Design Citrix XenDesktop 7.1 on Microsoft Hyper-V Server 2012 R2 on Nutanix Virtual Computing Platform Solution Design Citrix Validated Solutions June 25 th 2014 Prepared by: Citrix APAC Solutions TABLE OF CONTENTS

More information

Citrix XenDesktop Validation on Nimble Storage s Flash-Optimized Platform

Citrix XenDesktop Validation on Nimble Storage s Flash-Optimized Platform XenDesktop Storage Validation on Nimble Storage Whitepaper XenDesktop Storage Validation on Nimble Storage Whitepaper Citrix XenDesktop Validation on Nimble Storage s Flash-Optimized Platform 1 2 Executive

More information

Virtual SAN Design and Deployment Guide

Virtual SAN Design and Deployment Guide Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore

More information

VirtualclientTechnology 2011 July

VirtualclientTechnology 2011 July WHAT S NEW IN VSPHERE VirtualclientTechnology 2011 July Agenda vsphere Platform Recap vsphere 5 Overview Infrastructure Services Compute, Storage, Network Applications Services Availability, Security,

More information

Cisco Unified Computing System, Citrix XenDesktop, and Atlantis ILIO

Cisco Unified Computing System, Citrix XenDesktop, and Atlantis ILIO Cisco Unified Computing System, Citrix XenDesktop, and Atlantis ILIO White Paper November 2010 Contents Executive Summary... 4 The Challenge... 6 Delivering a Quality VDI Desktop Experience at the Cost

More information

Virtual Desktop Infrastructure (VDI) made Easy

Virtual Desktop Infrastructure (VDI) made Easy Virtual Desktop Infrastructure (VDI) made Easy HOW-TO Preface: Desktop virtualization can increase the effectiveness of information technology (IT) teams by simplifying how they configure and deploy endpoint

More information

EMC VNX FAMILY. Copyright 2011 EMC Corporation. All rights reserved.

EMC VNX FAMILY. Copyright 2011 EMC Corporation. All rights reserved. EMC VNX FAMILY 1 IT Challenges: Tougher than Ever Four central themes facing every decision maker today Overcome flat budgets Manage escalating complexity Cope with relentless data growth Meet increased

More information

Cisco Desktop Virtualization Solution with Atlantis ILIO Diskless VDI and VMware View on Cisco UCS

Cisco Desktop Virtualization Solution with Atlantis ILIO Diskless VDI and VMware View on Cisco UCS Cisco Desktop Virtualization Solution with Atlantis ILIO Diskless VDI and VMware View on Cisco UCS WHITE PAPER April 2013 At-a-Glance Validated Components: Atlantis ILIO Diskless VDI 3.2 Cisco Desktop

More information

Transforming Data Economics with Flash and Data Virtualization

Transforming Data Economics with Flash and Data Virtualization Transforming Data Economics with Flash and Data Virtualization Levi Norman Manager, IBM FlashSytem EcoSystem and WW Marketing IBM Systems & Technology Group lbnorman@us.ibm.com Santa Clara, CA August 2014

More information