Reference Architecture: Lenovo Client Virtualization with VMware Horizon



Last update: 24 June 2015
Version 1.2

Reference Architecture for VMware Horizon (with View)

Describes a variety of storage models, including SAN storage and hyper-converged systems

Contains performance data and sizing recommendations for servers, storage, and networking

Describes the Lenovo clients, servers, storage, and networking hardware used in LCV solutions

Contains a detailed bill of materials for servers, storage, networking, and racks

Mike Perks
Kenny Bain
Chandrakandh Mouleeswaran

Table of Contents

1 Introduction ... 1
2 Architectural overview ... 2
3 Component model ... 3
  3.1 VMware Horizon provisioning ... 5
  3.2 Storage model ... 5
  3.3 Atlantis Computing ... 6
    3.3.1 Atlantis hyper-converged volume ... 6
    3.3.2 Atlantis simple hybrid volume ... 7
    3.3.3 Atlantis simple in-memory volume ... 7
  3.4 VMware Virtual SAN ... 7
    3.4.1 Virtual SAN Storage Policies ... 8
4 Operational model ... 10
  4.1 Operational model scenarios ... 10
    4.1.1 Enterprise operational model ... 11
    4.1.2 Hyper-converged operational model ... 11
  4.2 Hypervisor support ... 12
  4.3 Compute servers for virtual desktops ... 12
    4.3.1 Intel Xeon E5-2600 v3 processor family servers ... 13
    4.3.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX ... 16
  4.4 Compute servers for hosted desktops ... 19
    4.4.1 Intel Xeon E5-2600 v3 processor family servers ... 19
  4.5 Compute servers for hyper-converged systems ... 20
    4.5.1 Intel Xeon E5-2600 v3 processor family servers with VMware VSAN ... 20
    4.5.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX ... 27
  4.6 Graphics Acceleration ... 28
  4.7 Management servers ... 31
  4.8 Systems management ... 32
  4.9 Shared storage ... 33
    4.9.1 IBM Storwize V7000 and IBM Storwize V3700 storage ... 35
    4.9.2 IBM FlashSystem 840 with Atlantis USX storage acceleration ... 37

  4.10 Networking ... 38
    4.10.1 10 GbE networking ... 38
    4.10.2 10 GbE FCoE networking ... 38
    4.10.3 Fibre Channel networking ... 39
    4.10.4 1 GbE administration networking ... 39
  4.11 Racks ... 40
  4.12 Proxy server ... 40
  4.13 Deployment models ... 41
    4.13.1 Deployment example 1: Flex Solution with single Flex System chassis ... 41
    4.13.2 Deployment example 2: Flex System with 4500 stateless users ... 42
    4.13.3 Deployment example 3: System x server with Storwize V7000 and FCoE ... 45
    4.13.4 Deployment example 4: Hyper-converged using System x servers ... 47
    4.13.5 Deployment example 5: Graphics acceleration for 120 CAD users ... 47
5 Appendix: Bill of materials ... 49
  5.1 BOM for enterprise and SMB compute servers ... 49
  5.2 BOM for hosted desktops ... 54
  5.3 BOM for hyper-converged compute servers ... 58
  5.4 BOM for enterprise and SMB management servers ... 61
  5.5 BOM for shared storage ... 65
  5.6 BOM for networking ... 67
  5.7 BOM for racks ... 68
  5.8 BOM for Flex System chassis ... 68
  5.9 BOM for OEM storage hardware ... 69
  5.10 BOM for OEM networking hardware ... 69
Resources ... 70
Document history ... 71

1 Introduction

The intended audience for this document is technical IT architects, system administrators, and managers who are interested in server-based desktop virtualization and server-based computing (terminal services or application virtualization) that uses VMware Horizon (with View). In this document, the term client virtualization is used to refer to all of these variations. Compare this term to server virtualization, which refers to the virtualization of server-based business logic and databases.

This document describes the reference architecture for VMware Horizon 6.x and also supports the previous versions of VMware Horizon 5.x. This document should be read with the Lenovo Client Virtualization (LCV) base reference architecture document that is available at this website: lenovopress.com/tips1275. The business problem, business value, requirements, and hardware details are described in the LCV base reference architecture document and are not repeated here for brevity.

This document gives an architecture overview and logical component model of VMware Horizon. The document also provides the operational model of VMware Horizon by combining Lenovo hardware platforms such as Flex System, System x, NeXtScale System, and RackSwitch networking with OEM hardware and software such as IBM Storwize and FlashSystem storage, VMware Virtual SAN, and Atlantis Computing software. The operational model presents performance benchmark measurements and discussion, sizing guidance, and some example deployment models. The last section contains detailed bill of material configurations for each piece of hardware.

2 Architectural overview

Figure 1 shows all of the main features of the Lenovo Client Virtualization reference architecture with VMware Horizon on the VMware ESXi hypervisor. It also shows remote access, authorization, and traffic monitoring. This reference architecture does not address the general issues of multi-site deployment and network management.

Figure 1: LCV reference architecture with VMware Horizon (diagram of internet and internal clients connecting through a firewall and proxy server to the View Connection Server and vCenter Server, with supporting Active Directory/DNS and SQL database servers, hypervisor hosts for stateless virtual desktops, dedicated virtual desktops, and hosted desktops and apps, and shared storage)

3 Component model

Figure 2 is a layered view of the LCV solution that is mapped to the VMware Horizon virtualization infrastructure.

Figure 2: Component model with VMware Horizon (diagram of View client devices and administrator GUIs connecting over HTTP/HTTPS, RDP, and PCoIP to the management services (View Connection Server, vCenter Server, and View Composer), which manage dedicated virtual desktops, stateless virtual desktops on local SSD storage, and hosted desktops and apps, each with an accelerator VM on the hypervisor; shared storage holds the VM repository, linked clones, user profiles, and user data files; support services include Directory, DNS, DHCP, OS licensing, Lenovo Thin Client Manager, and the View event database)

VMware Horizon with the VMware ESXi hypervisor features the following main components:

View Horizon Administrator
By using this web-based application, administrators can configure View Connection Server, deploy and manage View desktops, control user authentication, and troubleshoot user issues. It is installed during the installation of View Connection Server instances and is not required to be installed on local (administrator) devices.

vCenter Operations for View
This tool provides end-to-end visibility into the health, performance, and efficiency of the virtual desktop infrastructure (VDI) configuration. It enables administrators to proactively ensure the best user experience possible, avert incidents, and eliminate bottlenecks before they become larger issues.

View Connection Server
The VMware Horizon Connection Server is the point of contact for client devices that are requesting virtual desktops. It authenticates users and directs the virtual desktop request to the appropriate virtual machine (VM) or desktop, which ensures that only valid users are allowed access. After the authentication is complete, users are directed to their assigned VM or desktop. If a virtual desktop is unavailable, the View Connection Server works with the management and the provisioning layer to have the VM ready and available.

View Composer
View Composer is installed in a VMware vCenter Server instance. View Composer is required when linked clones are created from a parent VM.

vCenter Server
By using a single console, vCenter Server provides centralized management of the virtual machines (VMs) for the VMware ESXi hypervisor. VMware vCenter can be used to perform live migration (called VMware vMotion), which allows a running VM to be moved from one physical server to another without downtime. Redundancy for vCenter Server is achieved through VMware high availability (HA). The vCenter Server also contains a licensing server for VMware ESXi.

vCenter SQL Server
vCenter Server for the VMware ESXi hypervisor requires an SQL database. The vCenter SQL server might be Microsoft Data Engine (MSDE), Oracle, or SQL Server. Because the vCenter SQL server is a critical component, redundant servers must be available to provide fault tolerance. Customer SQL databases (including respective redundancy) can be used.

View Event database
VMware Horizon can be configured to record events and their details into a Microsoft SQL Server or Oracle database. Business intelligence (BI) reporting engines can be used to analyze this database.

Clients
VMware Horizon supports a broad set of devices and all major device operating platforms, including Apple iOS, Google Android, and Google ChromeOS. Each client device has a VMware View Client, which acts as the agent to communicate with the virtual desktop.

RDP, PCoIP
The virtual desktop image is streamed to the user access device by using the display protocol. Depending on the solution, the choice of protocols available are Remote Desktop Protocol (RDP) and PC over IP (PCoIP).

Hypervisor (ESXi)
ESXi is a bare-metal hypervisor for the compute servers. ESXi also contains support for VSAN storage. For more information, see VMware Virtual SAN on page 6.

Accelerator VM
The optional accelerator VM in this case is Atlantis Computing. For more information, see Atlantis Computing on page 6.

Shared storage
Shared storage is used to store user profiles and user data files. Depending on the provisioning model that is used, different data is stored for VM images. For more information, see Storage model.

For more information, see the Lenovo Client Virtualization base reference architecture document that is available at this website: lenovopress.com/tips1275.

3.1 VMware Horizon provisioning

VMware Horizon supports stateless and dedicated models. Provisioning for VMware Horizon is a function of vCenter Server and View Composer for linked clones.

vCenter Server allows for manually created pools and automatic pools. It allows for provisioning full clones and linked clones of a parent image for dedicated and stateless virtual desktops.

Because dedicated virtual desktops use large amounts of storage, linked clones can be used to reduce the storage requirements. Linked clones are created from a snapshot (replica) that is taken from a golden master image. The golden master image and replica should be on shared storage area network (SAN) storage. One pool can contain up to 1000 linked clones.

This document describes the use of automated pools (with linked clones) for dedicated and stateless virtual desktops. The deployment requirements for full clones are beyond the scope of this document.

3.2 Storage model

This section describes the different types of shared data stored for stateless and dedicated desktops. Stateless and dedicated virtual desktops should have the following common shared storage items:

- The paging file (or vswap) is transient data that can be redirected to Network File System (NFS) storage. In general, it is recommended to disable swapping, which reduces storage use (shared or local). The desktop memory size should be chosen to match the user workload rather than depending on a smaller image and swapping, which reduces overall desktop performance.
- User profiles (from Microsoft Roaming Profiles) are stored by using Common Internet File System (CIFS).
- User data files are stored by using CIFS.

Dedicated virtual desktops or stateless virtual desktops that need mobility require the following items to be on NFS or block I/O shared storage:

- NFS or block I/O is used to store all virtual desktop-associated data, such as the master image, replicas, and linked clones.
- NFS is used to store View Composer persistent disks when View Persona Management is used for user profile and user data files. This feature is not recommended.
- NFS is used to store all virtual images for linked clones.

The replicas and linked clones can be stored on local solid-state drive (SSD) storage. These items are discarded when the VM is shut down. For more information, see the following VMware knowledge base article about creating linked clones on NFS storage: http://kb.vmware.com/kb/2046165

3.3 Atlantis Computing

Atlantis Computing provides a software-defined storage solution, which can deliver better performance than a physical PC and reduce storage requirements by up to 95% in virtual desktop environments of all types. The key is Atlantis HyperDup content-aware data services, which fundamentally change the way VMs use storage. This change reduces the storage footprint by up to 95% while minimizing (and in some cases, entirely eliminating) I/O to external storage. The net effect is a reduced CAPEX and a marked increase in performance to start, log in, start applications, search, and use virtual desktops or hosted desktops and applications.

Atlantis software uses random access memory (RAM) for write-back caching of data blocks, real-time inline de-duplication of data, coalescing of blocks, and compression, which significantly reduces the data that is cached and persistently stored in addition to greatly reducing network traffic. Atlantis software works with any type of heterogeneous storage, including server RAM, direct-attached storage (DAS), SAN, or network-attached storage (NAS). It is provided as a VMware ESXi compatible VM that presents the virtualized storage to the hypervisor as a native data store, which makes deployment and integration straightforward. Atlantis Computing also provides other utilities for managing VMs and backing up and recovering data stores.

Atlantis provides a number of volume types suitable for virtual desktops and shared desktops. Different volume types support different application requirements and deployment models. Table 1 compares the Atlantis volume types.

Table 1: Atlantis volume types

Volume type       Min. servers   Performance tier        Capacity tier     HA                   Comments
Hyper-converged   Cluster of 3   Memory or local flash   DAS               USX HA               Good balance between performance and capacity
Simple Hybrid     1              Memory or flash         Shared storage    Hypervisor HA        Functionally equivalent to Atlantis ILIO Persistent VDI
Simple All-Flash  1              Local flash             Shared flash      Hypervisor HA        Good performance, but lower capacity
Simple In-Memory  1              Memory or flash         Memory or flash   N/A (daily backup)   Functionally equivalent to Atlantis ILIO Diskless VDI

3.3.1 Atlantis hyper-converged volume

Atlantis hyper-converged volumes are a hybrid between memory or local flash for accelerating performance and direct access storage (DAS) for capacity, and they provide a good balance between the performance and capacity needed for virtual desktops. As shown in Figure 3, hyper-converged volumes are clustered across three or more servers and have built-in resiliency in which the volume can be migrated to other servers in the cluster in case of a server failure or a server entering maintenance mode. Hyper-converged volumes are supported for ESXi.

Figure 3: Atlantis USX Hyper-converged Cluster

3.3.2 Atlantis simple hybrid volume

Atlantis simple hybrid volumes are targeted at dedicated virtual desktop environments. This volume type provides the optimal solution for desktop virtualization customers that are using traditional or existing storage technologies that are optimized by Atlantis software with server RAM. In this scenario, Atlantis employs memory as a tier and uses a small amount of server RAM for all I/O processing while using the existing SAN, NAS, or all-flash array storage as the primary storage. Atlantis storage optimizations increase the number of desktops that the storage can support by up to 20 times while improving performance.

Disk-backed configurations can use various different storage types, including host-based flash memory cards, external all-flash arrays, and conventional spinning disk arrays. A variation of the simple hybrid volume type is the simple all-flash volume, which uses fast, low-latency shared flash storage whereby very little RAM is used and all I/O requests are sent to the flash storage after the inline de-duplication and compression are performed.

This reference architecture concentrates on the simple hybrid volume type for dedicated desktops, stateless desktops that use local SSDs, and hosted shared desktops and applications. To cover the widest variety of shared storage, the simple all-flash volume type is not considered.

3.3.3 Atlantis simple in-memory volume

Atlantis simple in-memory volumes eliminate storage from stateless VDI deployments by using local server RAM and in-memory storage optimization technology. Server RAM is used as the primary storage for stateless virtual desktops, which ensures that read and write I/O occurs at memory speeds and eliminates network traffic. An option allows for in-line compression and decompression to reduce the RAM usage. SnapClone technology is used to persist the data store in case of VM reboots, power outages, or other failures.

3.4 VMware Virtual SAN

VMware Virtual SAN (VSAN) is a Software Defined Storage (SDS) solution embedded in the ESXi hypervisor. Virtual SAN pools flash caching devices and magnetic disks across three or more 10 GbE connected servers into a single shared datastore that is resilient and simple to manage. Virtual SAN can be scaled to 64 servers, with each server supporting up to 5 disk groups and each disk group consisting of a single flash caching device (SSD) and up to 7 HDDs. Performance and capacity can easily be increased by adding more components: disks, flash, or servers.
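To make the scaling limits concrete, the following minimal Python sketch computes the raw datastore capacity of a Virtual SAN cluster from the per-server disk group limits described above. The HDD size is a hypothetical value used only for illustration; actual drive capacities are specified in the bill of materials.

```python
def vsan_raw_capacity_tb(servers: int, disk_groups_per_server: int,
                         hdds_per_group: int, hdd_size_tb: float) -> float:
    """Raw Virtual SAN datastore capacity in TB.

    Capacity comes from the HDDs only; the one SSD per disk group is a
    cache device and does not add to the datastore capacity.
    """
    # Enforce the Virtual SAN limits mentioned in the text.
    assert 3 <= servers <= 64, "VSAN supports 3 to 64 servers"
    assert 1 <= disk_groups_per_server <= 5, "up to 5 disk groups per server"
    assert 1 <= hdds_per_group <= 7, "up to 7 HDDs per disk group"
    return servers * disk_groups_per_server * hdds_per_group * hdd_size_tb

# Example: 4 servers, 2 disk groups each, 6 HDDs per group,
# assuming (hypothetically) 1.2 TB drives.
print(vsan_raw_capacity_tb(4, 2, 6, 1.2))  # 57.6 TB raw, before any mirroring
```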

The flash cache is used to accelerate both reads and writes. Frequently read data is kept in the read cache; writes are coalesced in cache and destaged to disk efficiently, which greatly improves application performance.

VSAN manages data in the form of flexible data containers that are called objects. The following types of objects for VMs are available:

- VM Home
- VM swap (.vswp)
- VMDK (.vmdk)
- Snapshots (.vmsn)

Internally, VM objects are split into multiple components that are based on performance and availability requirements that are defined in the VM storage profile. These components are distributed across multiple hosts in a cluster to tolerate simultaneous failures and meet performance requirements. VSAN uses a distributed RAID architecture to distribute data across the cluster. Components are distributed with the use of the following main techniques:

- Striping (RAID 0): number of stripes per object
- Mirroring (RAID 1): number of failures to tolerate

For more information about VMware Horizon virtual desktop types, objects, and components, see the VMware Virtual SAN Design and Sizing Guide for Horizon Virtual Desktop Infrastructures, which is available at this website: vmware.com/files/pdf/products/vsan/vmw-tmd-virt-san-dsn-szing-guid-horizon-view.pdf

3.4.1 Virtual SAN Storage Policies

Virtual SAN uses the Storage Policy-based Management (SPBM) function in vSphere to enable policy-driven virtual machine provisioning, and it uses vSphere APIs for Storage Awareness (VASA) to expose VSAN's storage capabilities to vCenter. This approach means that storage resources are dynamically provisioned based on the requested policy and not pre-allocated as with many traditional storage solutions. Storage services are precisely aligned to VM boundaries; change the policy, and VSAN implements the changes for the selected VMs.

VMware Horizon has predefined storage policies and default values for linked clones and full clones. Table 2 lists the VMware Horizon default storage policy values for linked clones.

Table 2: VMware Horizon default storage policy values for linked clones

Number of disk stripes per object
  Definition: the number of HDDs across which each replica of a storage object is distributed.
  Min: 1, Max: 12
  Persistent (dedicated pool): VM_HOME 1, Replica 1, OS disk 1, Persistent disk 1
  Stateless (floating pool): VM_HOME 1, Replica 1, OS disk 1

Flash-memory read cache reservation
  Definition: the flash memory capacity reserved as the read cache for the storage object.
  Min: 0%, Max: 100%
  Persistent (dedicated pool): VM_HOME 0%, Replica 10%, OS disk 0%, Persistent disk 0%
  Stateless (floating pool): VM_HOME 0%, Replica 10%, OS disk 0%

Number of failures to tolerate (FTT)
  Definition: the number of server, disk, and network failures that a storage object can tolerate. For n failures tolerated, n + 1 copies of the object are created, and 2n + 1 hosts of contributing storage are required.
  Min: 0, Max: 3
  Persistent (dedicated pool): VM_HOME 1, Replica 1, OS disk 1, Persistent disk 1
  Stateless (floating pool): VM_HOME 1, Replica 1, OS disk 0

Force provisioning
  Definition: determines whether the object is provisioned even when available resources do not meet the VM storage policy requirements.
  Min: No, Max: Yes
  Persistent (dedicated pool): VM_HOME No, Replica No, OS disk No, Persistent disk No
  Stateless (floating pool): VM_HOME No, Replica No, OS disk No

Object-space reservation
  Definition: the percentage of the logical size of the storage object that must be reserved (thick provisioned) upon VM provisioning (the remainder of the storage object is thin provisioned).
  Min: 0%, Max: 100%
  Persistent (dedicated pool): VM_HOME 0%, Replica 0%, OS disk 0%, Persistent disk 100%
  Stateless (floating pool): VM_HOME 0%, Replica 0%, OS disk 0%
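The FTT rule drives both the copy count and the minimum number of hosts that must contribute storage. The following minimal Python sketch illustrates the arithmetic from the definition above, using a hypothetical 30 GB object as the example.

```python
def vsan_ftt_requirements(ftt: int, logical_gb: float) -> dict:
    """Illustrate the Virtual SAN mirroring arithmetic for one storage object.

    For n failures to tolerate (FTT), VSAN creates n + 1 copies of the
    object and requires 2n + 1 hosts of contributing storage.
    """
    copies = ftt + 1
    min_hosts = 2 * ftt + 1
    raw_gb = logical_gb * copies  # raw capacity consumed by the mirrored copies
    return {"copies": copies, "min_hosts": min_hosts, "raw_gb": raw_gb}

# Example: a (hypothetical) 30 GB OS disk with the default FTT = 1 policy.
print(vsan_ftt_requirements(ftt=1, logical_gb=30.0))
# {'copies': 2, 'min_hosts': 3, 'raw_gb': 60.0}
```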

4 Operational model

This section describes the options for mapping the logical components of a client virtualization solution onto hardware and software. The Operational model scenarios section gives an overview of the available mappings and has pointers into the other sections for the related hardware. Each subsection contains performance data, has recommendations on how to size for that particular hardware, and a pointer to the BOM configurations that are described in section 5 on page 49. The last part of this section contains some deployment models for example customer scenarios.

4.1 Operational model scenarios

Figure 4 shows the following operational models (solutions) in Lenovo Client Virtualization: enterprise, small-medium business (SMB), and hyper-converged.

Enterprise (>600 users), traditional:
  Compute/management servers: x3550, x3650, nx360
  Hypervisor: ESXi
  Graphics acceleration: NVIDIA GRID K1 or K2
  Shared storage: IBM Storwize V7000, IBM FlashSystem 840
  Networking: 10 GbE, 10 GbE FCoE, 8 or 16 Gb FC

Enterprise (>600 users), converged:
  Flex System chassis with Flex System x240 compute/management servers
  Hypervisor: ESXi
  Shared storage: IBM Storwize V7000, IBM FlashSystem 840
  Networking: 10 GbE, 10 GbE FCoE, 8 or 16 Gb FC

SMB (<600 users), traditional:
  Compute/management servers: x3550, x3650, nx360
  Hypervisor: ESXi
  Graphics acceleration: NVIDIA GRID K1 or K2
  Shared storage: IBM Storwize V3700
  Networking: 10 GbE

SMB (<600 users), converged: not recommended

Hyper-converged (enterprise and SMB):
  Compute/management servers: x3650
  Hypervisor: ESXi
  Graphics acceleration: NVIDIA GRID K1 or K2
  Shared storage: not applicable
  Networking: 10 GbE

Figure 4: Operational model scenarios

The vertical axis is split into two halves: greater than 600 users is termed Enterprise and less than 600 is termed SMB. The 600 user split is not exact and provides rough guidance between Enterprise and SMB. The last column in Figure 4 (labelled "hyper-converged") spans both halves because a hyper-converged solution can be deployed in a linear fashion from a small number of users (100) up to a large number of users (>4000).

The horizontal axis is split into three columns. The left-most column represents traditional rack-based systems with top-of-rack (TOR) switches and shared storage. The middle column represents converged systems where the compute, networking, and sometimes storage are converged into a chassis, such as the Flex System.

The right-most column represents hyper-converged systems and the software that is used in these systems. For the purposes of this reference architecture, the traditional and converged columns are merged for enterprise solutions; the only significant differences are the networking, form factor, and capabilities of the compute servers.

Converged systems are not generally recommended for the SMB space because the converged hardware chassis can be more overhead when only a few compute nodes are needed. Other compute nodes in the converged chassis can be used for other workloads to make this hardware architecture more cost-effective.

The VMware ESXi 6.0 hypervisor is recommended for all operational models. Similar performance results were also achieved with the ESXi 5.5 U2 hypervisor. The ESXi hypervisor is convenient because it can boot from a USB flash drive or boot from SAN and does not require any extra local storage.

4.1.1 Enterprise operational model

For the enterprise operational model, see the following sections for more information about each component, its performance, and sizing guidance:

- 4.2 Hypervisor support
- 4.3 Compute servers for virtual desktops
- 4.4 Compute servers for hosted desktops
- 4.6 Graphics Acceleration
- 4.7 Management servers
- 4.8 Systems management
- 4.9 Shared storage
- 4.10 Networking
- 4.11 Racks
- 4.12 Proxy server

To show the enterprise operational model for different sized customer environments, four different sizing models are provided for supporting 600, 1500, 4500, and 10000 users. The SMB model is the same as the Enterprise model for traditional systems.

4.1.2 Hyper-converged operational model

For the hyper-converged operational model, see the following sections for more information about each component, its performance, and sizing guidance:

- 4.2 Hypervisor support
- 4.5 Compute servers for hyper-converged systems
- 4.6 Graphics Acceleration
- 4.7 Management servers
- 4.8 Systems management
- 4.10.1 10 GbE networking
- 4.11 Racks
- 4.12 Proxy server

To show the hyper-converged operational model for different sized customer environments, four different sizing models are provided for supporting 300, 600, 1500, and 3000 users. The management server VMs for a hyper-converged cluster can either be in a separate hyper-converged cluster or on traditional shared storage.

4.2 Hypervisor support

The VMware Horizon reference architecture was tested with the VMware ESXi 6.0 hypervisor. The hypervisor is convenient because it can start from a USB flash drive and does not require any extra local storage. Alternatively, ESXi can be booted from SAN shared storage. For the VMware ESXi hypervisor, the number of vCenter clusters depends on the number of compute servers.

For stateless desktops, local SSDs can be used to store the VMware replicas and linked clones for improved performance. Two replicas must be stored for each master image. Each stateless virtual desktop requires a linked clone, which tends to grow over time until it is refreshed at log out. Two enterprise high-speed 200 GB SSDs in a RAID 0 configuration should be sufficient for most user scenarios; however, 400 GB (or even 800 GB) SSDs might be needed. Because of the stateless nature of the architecture, there is little added value in configuring reliable SSDs in more redundant configurations.

To use the latest version of ESXi, download the image from the following website and upgrade by using vCenter: ibm.com/systems/x/os/vmware/.

4.3 Compute servers for virtual desktops

This section describes stateless and dedicated virtual desktop models. Stateless desktops that allow live migration of a VM from one physical server to another are considered the same as dedicated desktops because they both require shared storage. In some customer environments, stateless and dedicated desktop models might be required, which requires a hybrid implementation.

Compute servers are servers that run a hypervisor and host virtual desktops. There are several considerations for the performance of the compute server, including the processor family and clock speed, the number of processors, the speed and size of main memory, and local storage options.

The use of the Aero theme in Microsoft Windows 7 or other intensive workloads has an effect on the maximum number of virtual desktops that can be supported on each compute server. Windows 8 also requires more processor resources than Windows 7, whereas little difference was observed between 32-bit and 64-bit Windows 7. Although a slower processor can be used and still not exhaust the processor power, it is a good policy to have excess capacity.

Another important consideration for compute servers is system memory. For stateless users, the typical range of memory that is required for each desktop is 2 GB - 4 GB. For dedicated users, the range of memory for each desktop is 2 GB - 6 GB. Designers and engineers that require graphics acceleration might need 8 GB - 16 GB of RAM per desktop. In general, power users that require larger memory sizes also require more virtual processors. This reference architecture standardizes on 2 GB per desktop as the minimum requirement of a Windows 7 desktop. The virtual desktop memory should be large enough so that swapping is not needed and vswap can be disabled.

For more information, see the BOM for enterprise and SMB compute servers section on page 49.
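As a rough check on memory-bound density, the following minimal Python sketch (with a hypothetical per-host memory reserve) estimates how many desktops of a given memory size fit in a server's RAM; the supported count is then the lower of this memory limit and the processor-based limits shown in the tables that follow.

```python
def desktops_by_memory(host_memory_gb: int, vm_memory_gb: int,
                       reserved_gb: int = 16) -> int:
    """Estimate the memory-bound desktop count for one compute server.

    reserved_gb is a hypothetical allowance for the hypervisor and per-VM
    overheads; a proof of concept should confirm the real value.
    """
    return (host_memory_gb - reserved_gb) // vm_memory_gb

# Examples: 384 GB host with 2 GB desktops, 768 GB host with 4 GB desktops.
print(desktops_by_memory(384, 2))  # 184 -> the processor limit of 150-180 governs
print(desktops_by_memory(768, 4))  # 188 -> again the processor limit governs
```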

4.3.1 Intel Xeon E5-2600 v3 processor family servers

This section shows the performance results and sizing guidance for Lenovo compute servers based on the Intel Xeon E5-2600 v3 processors (Haswell).

ESXi 6.0 performance results

Table 3 lists the Login VSI performance of E5-2600 v3 processors from Intel that use the Login VSI 4.1 office worker workload with ESXi 6.0. Similar performance results were also achieved with ESXi 5.5 U2.

Table 3: Performance with office worker workload

Processor with office worker workload    Stateless    Dedicated
Two E5-2650 v3 2.30 GHz, 10C 105W        188 users    197 users
Two E5-2670 v3 2.30 GHz, 12C 120W        232 users    234 users
Two E5-2680 v3 2.50 GHz, 12C 120W        239 users    244 users
Two E5-2690 v3 2.60 GHz, 12C 135W        243 users    246 users

Table 4 lists the results for the Login VSI 4.1 knowledge worker workload.

Table 4: Performance with knowledge worker workload

Processor with knowledge worker workload    Stateless    Dedicated
Two E5-2680 v3 2.50 GHz, 12C 120W           189 users    190 users
Two E5-2690 v3 2.60 GHz, 12C 135W           191 users    200 users

Table 5 lists the results for the Login VSI 4.1 power worker workload.

Table 5: Performance with power worker workload

Processor with power worker workload    Stateless    Dedicated
Two E5-2680 v3 2.50 GHz, 12C 120W       178 users    174 users

These results indicate the comparative processor performance. The following conclusions can be drawn:

- The performance for stateless and dedicated virtual desktops is similar.
- The Xeon E5-2650v3 processor has performance that is similar to the previously recommended Xeon E5-2690v2 processor (Ivy Bridge), but uses less power and is less expensive.
- The Xeon E5-2690v3 processor does not have significantly better performance than the Xeon E5-2680v3 processor; therefore, the E5-2680v3 is preferred because of the lower cost.

Between the Xeon E5-2650v3 (2.30 GHz, 10C 105W) and the Xeon E5-2680v3 (2.50 GHz, 12C 120W) series processors are the Xeon E5-2660v3 (2.6 GHz, 10C 105W) and the Xeon E5-2670v3 (2.3 GHz, 12C 120W) series processors. The cost per user increases with each processor but with a corresponding increase in user density. The Xeon E5-2680v3 processor has good user density, but the significant increase in cost might outweigh this advantage. Also, many configurations are bound by memory; therefore, a faster processor might not provide any added value.

Some users require the fastest processor, and for those users, the Xeon E5-2680v3 processor is the best choice. However, the Xeon E5-2650v3 processor is recommended for an average configuration. The Xeon E5-2680v3 processor is recommended for power workers.

Previous reference architectures used the Login VSI 3.7 medium and heavy workloads. Table 6 gives a comparison with the newer Login VSI 4.1 office worker and knowledge worker workloads. The table shows that Login VSI 3.7 is on average 20% to 30% higher than Login VSI 4.1.

Table 6: Comparison of Login VSI 3.7 and 4.1 workloads

Processor                            Workload                Stateless    Dedicated
Two E5-2650 v3 2.30 GHz, 10C 105W    4.1 Office worker       188 users    197 users
Two E5-2650 v3 2.30 GHz, 10C 105W    3.7 Medium              254 users    260 users
Two E5-2690 v3 2.60 GHz, 12C 135W    4.1 Office worker       243 users    246 users
Two E5-2690 v3 2.60 GHz, 12C 135W    3.7 Medium              316 users    313 users
Two E5-2690 v3 2.60 GHz, 12C 135W    4.1 Knowledge worker    191 users    200 users
Two E5-2690 v3 2.60 GHz, 12C 135W    3.7 Heavy               275 users    277 users

Table 7 compares the E5-2600 v3 processors with the previous generation E5-2600 v2 processors by using the Login VSI 3.7 workloads to show the relative performance improvement. On average, the E5-2600 v3 processors are 25% - 30% faster than the previous generation with the equivalent processor names.

Table 7: Comparison of E5-2600 v2 and E5-2600 v3 processors

Processor                            Workload      Stateless    Dedicated
Two E5-2650 v2 2.60 GHz, 8C 85W      3.7 Medium    202 users    205 users
Two E5-2650 v3 2.30 GHz, 10C 105W    3.7 Medium    254 users    260 users
Two E5-2690 v2 3.0 GHz, 10C 130W     3.7 Medium    240 users    260 users
Two E5-2690 v3 2.60 GHz, 12C 135W    3.7 Medium    316 users    313 users
Two E5-2690 v2 3.0 GHz, 10C 130W     3.7 Heavy     208 users    220 users
Two E5-2690 v3 2.60 GHz, 12C 135W    3.7 Heavy     275 users    277 users

Compute server recommendation for Office and Knowledge workers

The default recommendation for this processor family is the Xeon E5-2650v3 processor and 384 GB of system memory because this configuration provides the best coverage for a range of users. For users who need VMs that are larger than 3 GB, Lenovo recommends the use of up to 768 GB and the Xeon E5-2680v3 processor.

Lenovo testing shows that 150 users per server is a good baseline and has an average of 76% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 180 users per server have an average of 89% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.
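The failover densities used throughout this section follow from the recommended 5:1 failover ratio: when one server in a group of six fails, its users are spread over the five that remain, so the per-server density rises by a factor of 6/5. A minimal Python sketch of that relationship:

```python
def failover_density(normal_users_per_server: int, failover_ratio: int = 5) -> int:
    """Users per server after one of (failover_ratio + 1) servers fails.

    With a 5:1 failover ratio, six servers' worth of users must run on
    five servers, so the density rises by a factor of 6/5.
    """
    return normal_users_per_server * (failover_ratio + 1) // failover_ratio

print(failover_density(150))  # 180, the office/knowledge worker failover density
print(failover_density(125))  # 150
print(failover_density(105))  # 126, the power worker sizing in Table 12
```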

Table 8 lists the processor usage with ESXi for the recommended user counts for normal mode and failover mode.

Table 8: Processor usage

Processor         Workload            Users per Server     Stateless Utilization    Dedicated Utilization
Two E5-2650 v3    Office worker       150 normal mode      79%                      78%
Two E5-2650 v3    Office worker       180 failover mode    86%                      86%
Two E5-2680 v3    Knowledge worker    150 normal mode      76%                      74%
Two E5-2680 v3    Knowledge worker    180 failover mode    92%                      90%

Table 9 lists the recommended number of virtual desktops per server for different VM memory sizes. The number of users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 9: Recommended number of virtual desktops per server

Processor                              E5-2650v3         E5-2650v3    E5-2680v3
VM memory size                         2 GB (default)    3 GB         4 GB
System memory                          384 GB            512 GB       768 GB
Desktops per server (normal mode)      150               140          150
Desktops per server (failover mode)    180               168          180

Table 10 lists the approximate number of compute servers that are needed for different numbers of users and VM sizes.

Table 10: Compute servers needed for different numbers of users and VM sizes

Desktop memory size (2 GB or 4 GB)       600 users    1500 users    4500 users    10000 users
Compute servers @150 users (normal)      5            10            30            68
Compute servers @180 users (failover)    4            8             25            56
Failover ratio                           4:1          4:1           5:1           5:1

Desktop memory size (3 GB)               600 users    1500 users    4500 users    10000 users
Compute servers @140 users (normal)      5            11            33            72
Compute servers @168 users (failover)    4            9             27            60
Failover ratio                           4:1          4.5:1         4.5:1         5:1

Compute server recommendation for power workers

For power workers, the default recommendation for this processor family is the Xeon E5-2680v3 processor and 384 GB of system memory. For users who need VMs that are larger than 3 GB, Lenovo recommends up to 768 GB of system memory.

Lenovo testing shows that 125 users per server is a good baseline and has an average of 86% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 150 users per server have an average of 92% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.

Table 11 lists the processor usage with ESXi for the recommended user counts for normal mode and failover mode.

Table 11: Processor usage

Processor         Workload        Users per Server     Stateless Utilization    Dedicated Utilization
Two E5-2680 v3    Power worker    125 normal mode      87%                      85%
Two E5-2680 v3    Power worker    150 failover mode    93%                      92%

Table 12 lists the recommended number of virtual desktops per server for different VM memory sizes. The number of power users is reduced to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 12: Recommended number of virtual desktops per server

Processor                              E5-2680v3         E5-2680v3    E5-2680v3
VM memory size                         3 GB (default)    4 GB         6 GB
System memory                          384 GB            512 GB       768 GB
Desktops per server (normal mode)      105               105          105
Desktops per server (failover mode)    126               126          126

Table 13 lists the approximate number of compute servers that are needed for different numbers of power users and VM sizes.

Table 13: Compute servers needed for different numbers of power users and VM sizes

Desktop memory size                      600 users    1500 users    4500 users    10000 users
Compute servers @105 users (normal)      6            15            43            96
Compute servers @126 users (failover)    5            12            36            80
Failover ratio                           5:1          4:1           5:1           5:1

4.3.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX

Atlantis USX provides storage optimization by using a 100% software solution. There is a cost in processor and memory usage in return for decreased storage usage and increased input/output operations per second (IOPS). This section contains performance measurements for the processor and memory utilization of USX simple hybrid volumes and gives an indication of the storage usage and performance.

For persistent desktops, USX simple hybrid volumes provide acceleration to shared storage. For environments that are not using Atlantis USX, it is recommended to use linked clones to conserve shared storage space.

However, with Atlantis USX hybrid volumes, it is recommended to use full clones for persistent desktops because they de-duplicate more efficiently than linked clones and can support more desktops per server.

For stateless desktops, USX simple hybrid volumes provide storage acceleration for local SSDs. USX simple in-memory volumes are not considered for stateless desktops because they require a large amount of memory.

Table 14 lists the Login VSI performance of USX simple hybrid volumes that use two E5-2680 v3 processors and ESXi 6.0.

Table 14: ESXi 6.0 performance with USX hybrid volume

Login VSI 4.1 Workload    Hypervisor    Stateless    Dedicated
Office worker             ESXi 6.0      199 users    203 users
Knowledge worker          ESXi 6.0      179 users    179 users
Power worker              ESXi 6.0      158 users    152 users

A comparison of performance with and without the Atlantis VM shows an increase of 20% to 30% in processor usage with the Atlantis VM. This result is to be expected, and it is recommended that higher-end processors (such as the E5-2680v3) are used to maximize density.

For office workers and knowledge workers, Lenovo testing shows that 125 users per server is a good baseline and has an average of 80% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 150 users per server have an average of 92% usage of the processor.

For power workers, Lenovo testing shows that 100 users per server is a good baseline and has an average of 78% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 120 users per server have an average of 89% usage of the processor.

It is important to keep this 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.

Table 15 lists the processor usage with ESXi for the recommended user counts for normal mode and failover mode.

Table 15: Processor usage

Processor         Workload            Users per Server     Stateless Utilization    Dedicated Utilization
Two E5-2680 v3    Knowledge worker    125 normal mode      80%                      80%
Two E5-2680 v3    Knowledge worker    150 failover mode    91%                      93%
Two E5-2680 v3    Power worker        100 normal mode      75%                      81%
Two E5-2680 v3    Power worker        120 failover mode    87%                      90%

It is still a best practice to separate the user folder and any other shared folders into separate storage. This configuration leaves all of the other possible changes that might occur in a full clone to be stored in the simple hybrid volume. This configuration is highly dependent on the environment. Testing by Atlantis Computing suggests that 3.5 GB of unique data per persistent desktop is sufficient.

Atlantis USX uses de-duplication and compression to reduce the storage required for VMs. Experience shows that an 85% - 95% reduction is possible, depending on the VMs. Assuming each VM is 30 GB, the uncompressed storage requirement for 150 full clones is 4500 GB. The simple hybrid volume needs to be only 10% of this requirement, which is 450 GB. To allow some room for growth and the Atlantis VM, a 600 GB volume should be allocated.

Atlantis also has some requirements on memory. The Atlantis VM requires 5 GB, and 5% of the storage volume is used for in-memory metadata. In the example of a 450 GB volume, this size is 22.5 GB. The default size for the RAM cache is 15%, but it is recommended to increase this size to 25% for the write-heavy VDI workload. In the example of a 450 GB volume, this size is 112.5 GB. Assuming 4 GB for the hypervisor, 144 GB (22.5 + 112.5 + 5 + 4) of system memory should be reserved. It is recommended that at least 384 GB of server memory is used for USX simple hybrid VDI deployments. Proof of concept (POC) testing can help determine the actual amount of RAM.
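The volume and memory reservation arithmetic above can be reproduced with a short sketch. This is a minimal Python illustration of the worked example (150 full clones of 30 GB, a 10% de-duplicated footprint, a 25% RAM cache, and the stated 4 GB hypervisor allowance), not an Atlantis sizing tool.

```python
def usx_simple_hybrid_sizing(desktops: int, vm_size_gb: float,
                             dedupe_footprint: float = 0.10,
                             metadata_pct: float = 0.05,
                             ram_cache_pct: float = 0.25,
                             atlantis_vm_gb: float = 5.0,
                             hypervisor_gb: float = 4.0) -> dict:
    """Reproduce the USX simple hybrid sizing example from the text."""
    uncompressed_gb = desktops * vm_size_gb            # 150 x 30 GB = 4500 GB
    volume_gb = uncompressed_gb * dedupe_footprint     # ~10% after de-duplication
    reserved_memory_gb = (volume_gb * metadata_pct     # in-memory metadata
                          + volume_gb * ram_cache_pct  # RAM cache (25% recommended)
                          + atlantis_vm_gb             # the Atlantis VM itself
                          + hypervisor_gb)             # hypervisor allowance
    return {"volume_gb": volume_gb, "reserved_memory_gb": reserved_memory_gb}

print(usx_simple_hybrid_sizing(desktops=150, vm_size_gb=30.0))
# {'volume_gb': 450.0, 'reserved_memory_gb': 144.0}
# The text then rounds the 450 GB volume up to 600 GB to allow room for growth.
```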

Atlantis USX uses de-duplication and compression to reduce the storage required for VMs. Experience shows that an 85% - 95% reduction is possible, depending on the VMs. Assuming each VM is 30 GB, the uncompressed storage requirement for 150 full clones is 4500 GB. The simple hybrid volume needs to be only 10% of this requirement, which is 450 GB. To allow some room for growth and the Atlantis VM, a 600 GB volume should be allocated. Atlantis has some requirements on memory. The VM requires 5 GB, and 5% of the storage volume is used for in memory metadata. In the example of a 450 GB volume, this size is 22.5 GB. The default size for the RAM cache is 15%, but it is recommended to increase this size to 25% for the write-heavy VDI workload. In the example of a 450 GB volume, this size is 112.5 GB. Assuming 4 GB for the hypervisor, 144 GB (22.5 + 112.5 + 5 + 4) of system memory should be reserved. It is recommended that at least 384 GB of server memory is used for USX simple hybrid VDI deployments. Proof of concept (POC) testing can help determine the actual amount of RAM. Table 16 lists the recommended number of virtual desktops per server for different VM memory sizes. Table 16: Recommended number of virtual desktops per server with USX simple hybrid volumes Processor E5-2680v3 E5-2680v3 E5-2680v3 VM memory size 2 GB (default) 3 GB 4 GB Total system memory 384 GB 512 GB 768 GB Reserved system memory 144 GB 144 GB 144 GB System memory for desktop VMs 240 GB 368 GB 624 GB Desktops per server (normal mode) 100 100 100 Desktops per server (failover mode) 120 120 120 Table 17 lists the number of compute servers that are needed for different numbers of users and VM sizes. A server with 384 GB system memory is used for 2 GB VMs, 512 GB system memory is used for 3 GB VMs, and 768 GB system memory is used for 4 GB VMs. Table 17: Compute servers needed for different numbers of users with USX simple hybrid volumes 600 users 1500 users 4500 users 10000 users Compute servers for 100 users (normal) 6 15 45 100 Compute servers for 120 users (failover) 5 13 38 84 Failover ratio 5:1 6.5:1 5:1 5:1 As a result of the use of Atlantis simple hybrid volumes, the only read operations are to fill the cache for the first time. For all practical purposes, the remaining reads are few and at most 1 IOPS per VM. Writes to persistent storage are still needed for starting, logging in, remaining in steady state, and logging off, but the overall IOPS count is substantially reduced. Assuming the use of a fast, low-latency shared storage device, a single VM boot can take 20-25 seconds to get past the display of the logon window and get all of the other services fully loaded. This process takes this 18 Reference Architecture: Lenovo Client Virtualization with VMware Horizon

This process takes this time because boot operations are mainly read operations, although the actual boot time can vary depending on the VM.

Login time for a single desktop varies depending on the VM image, but it can be extremely quick. In some cases, the login takes less than 6 seconds. Scale-out testing across a cluster of servers shows that one new login every 6 seconds can be supported over a long period. Therefore, at any one instant, there can be multiple logins underway, and the main bottleneck is the processor.

4.4 Compute servers for hosted desktops

This section describes compute servers for hosted desktops, which is a new feature in VMware Horizon 6.x. Hosted desktops are more suited to task workers that require little in the way of desktop customization.

As the name implies, multiple desktops share a single VM; however, because of this sharing, the compute resources often are exhausted before memory. Lenovo testing showed that 128 GB of memory is sufficient for servers with two processors. Other testing showed that the performance differences between four, six, or eight VMs are minimal; therefore, four VMs are recommended to reduce the license costs for Windows Server 2012 R2.

For more information, see the BOM for hosted desktops section on page 54.

4.4.1 Intel Xeon E5-2600 v3 processor family servers

Table 18 lists the processor performance results for different size workloads that use four Windows Server 2012 R2 VMs with the Xeon E5-2600 v3 series processors and the ESXi 6.0 hypervisor.

Table 18: Performance results for hosted desktops using the E5-2600 v3 processors

Processor                            Workload            Hosted Desktops
Two E5-2650 v3 2.30 GHz, 10C 105W    Office Worker       222 users
Two E5-2690 v3 2.60 GHz, 12C 135W    Office Worker       298 users
Two E5-2690 v3 2.60 GHz, 12C 135W    Knowledge Worker    244 users

Lenovo testing shows that 170 hosted desktops per server is a good baseline. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo recommends 204 hosted desktops per server. It is important to keep 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.

Table 19 lists the processor usage for the recommended number of users.

Table 19: Processor usage

Processor         Workload            Users per Server     Utilization
Two E5-2650 v3    Office worker       170 normal mode      73%
Two E5-2650 v3    Office worker       204 failover mode    81%
Two E5-2690 v3    Knowledge worker    170 normal mode      64%
Two E5-2690 v3    Knowledge worker    204 failover mode    79%
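Because the hosted desktops on a server are spread across the four recommended Windows Server 2012 R2 VMs, the per-VM figures follow directly from the host-level numbers. The minimal Python sketch below shows that arithmetic; the hypervisor memory allowance is a hypothetical value, since the text only states that 128 GB per two-socket server is sufficient.

```python
import math

def hosted_desktop_split(users_per_server: int, rdsh_vms: int = 4,
                         host_memory_gb: int = 128,
                         hypervisor_gb: int = 8) -> dict:
    """Split host-level hosted-desktop sizing across the RDSH VMs.

    hypervisor_gb is a hypothetical allowance for the ESXi hypervisor.
    """
    sessions_per_vm = math.ceil(users_per_server / rdsh_vms)
    memory_per_vm_gb = (host_memory_gb - hypervisor_gb) / rdsh_vms
    return {"sessions_per_vm": sessions_per_vm,
            "memory_per_vm_gb": memory_per_vm_gb}

print(hosted_desktop_split(170))  # {'sessions_per_vm': 43, 'memory_per_vm_gb': 30.0}
print(hosted_desktop_split(204))  # failover case: 51 sessions per VM
```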

Table 20 lists the number of compute servers that are needed for different numbers of users. Each compute server has 128 GB of system memory for the four VMs.

Table 20: Compute servers needed for different numbers of users

                                           600 users    1500 users    4500 users    10000 users
Compute servers for 170 users (normal)     4            10            27            59
Compute servers for 204 users (failover)   3            8             22            49
Failover ratio                             3:1          4:1           4.5:1         5:1

4.5 Compute servers for hyper-converged systems

This section presents the compute servers for different hyper-converged systems, including VMware VSAN and Atlantis USX. Additional processing and memory are often required to support storing data locally, as well as additional SSDs or HDDs. Typically, HDDs are used for data capacity, and SSDs and memory are used to provide performance. As the price per GB for flash memory continues to fall, there is a trend to also use all SSDs to provide the overall best performance for hyper-converged systems.

For more information, see the BOM for hyper-converged compute servers on page 58.

4.5.1 Intel Xeon E5-2600 v3 processor family servers with VMware VSAN

VMware VSAN was tested by using the office worker and knowledge worker workloads of Login VSI 4.1. Four Lenovo x3650 M5 servers with E5-2680v3 processors were networked together by using a 10 GbE TOR switch with two 10 GbE connections per server.

Server Performance and Sizing Recommendations

Each server was configured with two disk groups (a single disk group does not provide the necessary resiliency). Disk groups of two different sizes were tested: each disk group had one 400 GB SSD and either three or six HDDs. Both disk group sizes provided more than enough capacity for the linked clone VMs.

Table 21 lists the Login VSI results for stateless desktops by using linked clones and the VMware default storage policy of number of failures to tolerate (FTT) of 0 and a stripe of 1.

Table 21: Performance results for stateless desktops

Workload            Storage Policy (OS Disk)    Stateless desktops tested    Used Capacity    VSImax, 1 SSD + 6 HDDs    VSImax, 1 SSD + 3 HDDs
Office worker       FTT = 0 and Stripe = 1      1000                         5.26 TB          888                       886
Knowledge worker    FTT = 0 and Stripe = 1      800                          4.45 TB          696                       674
Power worker        FTT = 0 and Stripe = 1      800                          4.45 TB          N/A                       590
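Dividing the used capacity in Table 21 by the number of desktops tested gives a rough per-desktop datastore footprint for the stateless linked clones, which is useful background when choosing HDD capacities. The following minimal Python sketch shows the arithmetic (using 1 TB = 1024 GB); for the dedicated desktop results that follow (FTT = 1), passing copies=2 gives a similar logical footprint per desktop.

```python
def per_desktop_footprint_gb(used_capacity_tb: float, desktops: int,
                             copies: int = 1) -> float:
    """Average logical datastore footprint per desktop in GB.

    copies = FTT + 1; the stateless tests used FTT = 0, so one copy.
    """
    return used_capacity_tb * 1024 / desktops / copies

print(round(per_desktop_footprint_gb(5.26, 1000), 1))  # ~5.4 GB per office worker desktop
print(round(per_desktop_footprint_gb(4.45, 800), 1))   # ~5.7 GB per knowledge/power desktop
```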

Table 22: Performance results for dedicated desktops
Test scenario | Storage policy (OS disk) | Storage policy (persistent disk) | Dedicated desktops tested | Used capacity | VSI max (2 disk groups of 1 SSD and 6 HDDs) | VSI max (2 disk groups of 1 SSD and 3 HDDs)
Office worker | FTT = 1 and Stripe = 1 | FTT = 1 and Stripe = 1 | 1000 | 10.86 TB | 905 | 895
Knowledge worker | FTT = 1 and Stripe = 1 | FTT = 1 and Stripe = 1 | 800 | 9.28 TB | 700 | 668
Power worker | FTT = 1 and Stripe = 1 | FTT = 1 and Stripe = 1 | 800 | 9.28 TB | N/A | 590

These results show that there is no significant performance difference between disk groups with three HDDs and disk groups with six HDDs, and that there are enough IOPS for the disk writes. Persistent desktops might need more hard drive capacity or larger SSDs to improve performance for full clones or to provide space for growth of linked clones.

The Lenovo M5210 RAID controller for the x3650 M5 can be used in two modes: integrated MegaRAID (iMR) mode without the flash cache module, or MegaRAID (MR) mode with the flash cache module and at least 1 GB of battery-backed flash memory. In both modes, RAID 0 virtual drives were configured for use by the disk groups. For more information, see this website: lenovopress.com/tips1069.html.

Table 23 lists the measured queue depth for the M5210 RAID controller in iMR and MR modes. Lenovo recommends using the M5210 RAID controller with the flash cache module because it has a much better queue depth and better IOPS performance.

Table 23: RAID controller queue depth
RAID controller | iMR mode queue depth | MR mode queue depth | Drive queue depth
M5210 | 234 | 895 | 128

Compute server recommendation for Office workers and Knowledge workers

Lenovo testing shows that 125 users per server is a good baseline and has an average of 77% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 150 users per server have an average usage rate of 89%. It is important to keep 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.
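Before moving on to processor utilization, the used-capacity figures in Tables 21 and 22 can be approximated with a simple model: every object is stored FTT+1 times. The per-clone footprint below is an assumed input inferred from the table data, not a value stated in this document.

```python
def vsan_used_capacity_tb(desktops, per_vm_gb, ftt):
    """Rough used-capacity estimate for linked-clone desktops on VSAN.

    per_vm_gb - average on-disk footprint of one linked clone plus its share
                of the replica and VM overhead (an assumed input)
    ftt       - failures to tolerate; each object is stored ftt + 1 times
    """
    return desktops * per_vm_gb * (ftt + 1) / 1024

# Working backwards from Table 22 (1000 dedicated desktops, FTT=1, 10.86 TB),
# the implied footprint is roughly 5.5 GB per clone:
print(vsan_used_capacity_tb(1000, 5.5, ftt=1))   # ~10.7 TB, close to Table 22
print(vsan_used_capacity_tb(1000, 5.5, ftt=0))   # ~5.4 TB, close to Table 21
```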

Table 24 lists the processor usage for the recommended number of users.

Table 24: Processor usage
Processor | Workload | Users per server | Stateless utilization | Dedicated utilization
Two E5-2680 v3 | Office worker | 125 (normal mode) | 51% | 50%
Two E5-2680 v3 | Office worker | 150 (failover mode) | 62% | 59%
Two E5-2680 v3 | Knowledge worker | 125 (normal mode) | 62% | 61%
Two E5-2680 v3 | Knowledge worker | 150 (failover mode) | 81% | 78%

Table 25 lists the recommended number of virtual desktops per server for different VM memory sizes. The number of users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 25: Recommended number of virtual desktops per server for VSAN
Processor | E5-2680 v3 | E5-2680 v3 | E5-2680 v3
VM memory size | 2 GB (default) | 3 GB | 4 GB
System memory | 384 GB | 512 GB | 768 GB
VSAN overhead | 32 GB | 32 GB | 32 GB
Desktops per server (normal mode) | 125 | 125 | 125
Desktops per server (failover mode) | 150 | 150 | 150

Table 26 shows the number of servers that is needed for different numbers of users. By using the target of 125 users per server, the maximum number of users is 4000. The minimum number of servers that is required for VSAN is three; this requirement is reflected in the extra capacity for the 300 user case because that configuration can actually support up to 450 users.

Table 26: Compute servers needed for different numbers of users for VSAN
| 300 users | 600 users | 1500 users | 3000 users
Compute servers for 125 users (normal) | 4 | 5 | 12 | 24
Compute servers for 150 users (failover) | 3 | 4 | 10 | 20
Failover ratio | 3:1 | 4:1 | 5:1 | 5:1

Compute server recommendation for Power workers

Lenovo testing shows that 100 power users per server is a good baseline and has an average of 66% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 120 users per server have an average usage rate of 79%. It is important to keep 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.
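A hedged sketch of the memory check behind Table 25 (and the similar Table 28 that follows): the per-server desktop count is the processor-derived limit unless the memory left over after the VSAN overhead runs out first. Ignoring hypervisor memory overhead is my simplification, not a statement from this document.

```python
import math

def desktops_per_server(cpu_limit, system_mem_gb, vsan_overhead_gb, vm_mem_gb):
    """Per-server desktop count: the CPU-derived limit, capped by the memory
    that remains after the VSAN overhead is set aside."""
    mem_limit = math.floor((system_mem_gb - vsan_overhead_gb) / vm_mem_gb)
    return min(cpu_limit, mem_limit)

# Failover-mode check for the configurations in Table 25 (150-user CPU limit):
for mem, vm in ((384, 2), (512, 3), (768, 4)):
    print(mem, vm, desktops_per_server(150, mem, 32, vm))
# -> 150 in each case, so memory is sized to keep the processor as the limit.
```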

Table 27 lists the processor usage for the recommended number of users.

Table 27: Processor usage
Processor | Workload | Users per server | Stateless utilization | Dedicated utilization
Two E5-2680 v3 | Power worker | 100 (normal mode) | 69% | 63%
Two E5-2680 v3 | Power worker | 120 (failover mode) | 81% | 78%

Table 28 lists the recommended number of virtual desktops per server for different VM memory sizes. The number of users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 28: Recommended number of virtual desktops per server for VSAN
Processor | E5-2680 v3 | E5-2680 v3 | E5-2680 v3
VM memory size | 3 GB | 4 GB | 6 GB
System memory | 384 GB | 512 GB | 768 GB
VSAN overhead | 24 GB | 32 GB | 32 GB
Desktops per server (normal mode) | 100 | 100 | 100
Desktops per server (failover mode) | 120 | 120 | 120

Table 29 shows the number of servers that is needed for different numbers of users. By using the target of 100 users per server, the maximum number of power users is 3200. The minimum number of servers that is required for VSAN is three; this requirement is reflected in the extra capacity for the 300 user case because that configuration can actually support up to 360 users.

Table 29: Compute servers needed for different numbers of users for VSAN
| 300 users | 600 users | 1500 users | 3000 users
Compute servers for 100 users (normal) | 4 | 6 | 15 | 30
Compute servers for 120 users (failover) | 3 | 5 | 13 | 25
Failover ratio | 3:1 | 5:1 | 6:1 | 5:1

VSAN processor and I/O usage graphs

The processor and I/O usage graphs from esxtop are helpful for understanding the performance characteristics of VSAN. The graphs are for 150 users per server and three servers to show the worst-case load in a failover scenario.

Figure 5 shows the processor usage with three curves (one for each server). The Y axis is percentage usage from 0% to 100%. The curves have the classic Login VSI shape, with a gradual increase of processor usage and then a flat section during the steady state period. The curves then drop close to 0 as the logoffs are completed.

Figure 5: VSAN processor usage for 450 virtual desktops (left panel: stateless 450 users on 3 servers; right panel: dedicated 450 users on 3 servers)

Figure 6 shows the SSD reads and writes with six curves, one for each SSD. The Y axis is 0-10,000 IOPS for the reads and 0-3,000 IOPS for the writes. The read curves generally show a gradual increase of reads until the steady state and then drop off again for the logoff phase. This pattern is better defined for the second set of curves, which show the SSD writes.

Figure 6: VSAN SSD reads and writes for 450 virtual desktops (left panel: stateless 450 users on 3 servers; right panel: dedicated 450 users on 3 servers)

Figure 7 shows the HDD reads and writes with 36 curves, one for each HDD. The Y axis is 0-2,000 IOPS for the reads and 0-1,000 IOPS for the writes. The number of read IOPS has an average peak of 200 IOPS and

many of the drives are idle much of the time. The number of write IOPS has a peak of 500 IOPS; however, as can be seen, the writes occur in batches as data is destaged from the SSD cache onto the higher capacity HDDs. The first group of write peaks occurs during the logon period and the last group of write peaks corresponds to the logoff period.

Figure 7: VSAN HDD reads and writes for 450 virtual desktops (left panel: stateless 450 users on 3 servers; right panel: dedicated 450 users on 3 servers)

VSAN Resiliency Tests

An important part of a hyper-converged system is its resiliency to failures when a compute server is unavailable. System performance was measured for the following use cases, each with 450 users:
Enter maintenance mode by using the VSAN migration mode of ensure accessibility (VSAN default)
Enter maintenance mode by using the VSAN migration mode of no data migration
Server power off by using the VSAN migration mode of ensure accessibility (VSAN default)

For each use case, Login VSI was run and then the compute server was removed. This process was done during the login phase, as new virtual desktops are logged in, and during the steady state phase. For the steady state phase, 114-120 VMs must be migrated from the failed server to the other three servers, with each server gaining 38-40 VMs.

Table 30 lists the completion time or downtime for the VSAN system with the three different use cases.

Table 30: VSAN resiliency testing and recovery time
Use case | VSAN migration mode | Login phase completion time (seconds) | Login phase downtime (seconds) | Steady state phase completion time (seconds) | Steady state phase downtime (seconds)
Maintenance mode | Ensure accessibility | 605 | 0 | 598 | 0
Maintenance mode | No data migration | 408 | 0 | 430 | 0
Server power off | Ensure accessibility | N/A | 226 | N/A | 316

For the two maintenance mode cases, all of the VMs migrated smoothly to the other servers and there was no significant interruption in the Login VSI test. For the power off use case, there is a significant period for the system to readjust. During the login phase of a Login VSI test, the following process was observed:
1. All logged-in users were logged out from the failed node.
2. Logins failed for all new users logging in to desktops running on the failed node.
3. Desktop status changed to "Agent Unreachable" for all desktops on the failed node.
4. All desktops were migrated to other nodes.
5. Desktop status changed to "Available".
6. Logins continued successfully for all new users.

In a production system, users with persistent desktops that were running on the failed server must log in again after their VM is successfully migrated to another server. Stateless users can continue working almost immediately, assuming that the system is not at full capacity and other stateless VMs are ready to be used.

Figure 8 shows the processor usage for the four servers during the login phase and steady state phase by using Login VSI with the knowledge worker workload when one of the servers is powered off. The processor spike for the three remaining servers is apparent.

Figure 8: VSAN processor utilization, server power off (left panel: login phase; right panel: steady state phase)

There is an impact on performance and a time lag if a hyper-converged server suffers a catastrophic failure, yet VSAN can recover quite quickly. However, this situation is best avoided, and it is important to build in redundancy at multiple levels for all mission-critical systems.

4.5.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX

Atlantis USX is tested by using the knowledge worker workload of Login VSI 4.1. Four Lenovo x3650 M5 servers with E5-2680 v3 processors were networked together by using a 10 GbE TOR switch. Atlantis USX was installed, and four 400 GB SSDs per server were used to create an all-flash hyper-converged volume across the four servers, which were running ESXi 5.5 U2. This configuration was tested with 500 dedicated virtual desktops on four servers and then on three servers to see the difference if one server is unavailable. Table 31 lists the processor usage for the recommended number of users.

Table 31: Processor usage for Atlantis USX
Processor | Workload | Servers | Users per server | Utilization
Two E5-2680 v3 | Knowledge worker | 4 | 125 (normal mode) | 66%
Two E5-2680 v3 | Knowledge worker | 3 | 167 (failover mode) | 89%

From these measurements, Lenovo recommends 125 users per server in normal mode and 150 users per server in failover mode. Lenovo recommends a general failover ratio of 5:1.

Table 32 lists the recommended number of virtual desktops per server for different VM memory sizes.

Table 32: Recommended number of virtual desktops per server for Atlantis USX
Processor | E5-2680 v3 | E5-2680 v3 | E5-2680 v3
VM memory size | 2 GB (default) | 3 GB | 4 GB
System memory | 384 GB | 512 GB | 768 GB
Memory for ESXi and Atlantis USX | 63 GB | 63 GB | 63 GB
Memory for VMs | 321 GB | 449 GB | 705 GB
Desktops per server (normal mode) | 125 | 125 | 125
Desktops per server (failover mode) | 150 | 150 | 150

Table 33 lists the approximate number of compute servers that are needed for different numbers of users and VM sizes.

Table 33: Compute servers needed for different numbers of users for Atlantis USX
Desktop memory size | 300 users | 600 users | 1500 users | 3000 users
Compute servers for 125 users (normal) | 4 | 5 | 12 | 24
Compute servers for 150 users (failover) | 3 | 4 | 10 | 20
Failover ratio | 3:1 | 4:1 | 5:1 | 5:1
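The failover utilization row in Table 31 follows directly from redistributing the 500 test desktops over the three surviving hosts. A minimal sketch of that arithmetic (the helper name is mine):

```python
import math

def users_per_surviving_server(total_users, servers, failed):
    """Users each remaining server must host after 'failed' hosts go down."""
    return math.ceil(total_users / (servers - failed))

# The Atlantis USX test ran 500 dedicated desktops on four hosts:
print(users_per_surviving_server(500, 4, 0))  # 125 per server, normal mode
print(users_per_surviving_server(500, 4, 1))  # 167 per server, matching Table 31
```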

An important part of a hyper-converged system is its resiliency to failures when a compute server is unavailable. Login VSI was run and then the compute server was powered off. This process was done during the steady state phase. For the steady state phase, 114-120 VMs were migrated from the failed server to the other three servers, with each server gaining 38-40 VMs.

Figure 9 shows the processor usage for the four servers during the steady state phase when one of the servers is powered off. The processor spike for the three remaining servers is noticeable.

Figure 9: Atlantis USX processor usage: server power off

There is an impact on performance and a time lag if a hyper-converged server suffers a catastrophic failure, yet the system can recover quite quickly. However, this situation is best avoided, and it is important to build in redundancy at multiple levels for all mission-critical systems.

4.6 Graphics Acceleration

The VMware ESXi 6.0 hypervisor supports the following options for graphics acceleration:
Dedicated GPU with one GPU per user, which is called virtual dedicated graphics acceleration (vdga) mode.
Shared GPU with users sharing a GPU, which is called virtual shared graphics acceleration (vsga) mode and is not recommended because of user contention for shared use of the GPU.
GPU hardware virtualization (vgpu) that partitions each GPU for 1-8 users. This option requires Horizon 6.1.

VMware also provides software emulation of a GPU, which can be processor-intensive and disruptive to other users, who get a choppy experience because of the reduced processor performance. Software emulation is not recommended for any user who requires graphics acceleration.

The performance of graphics acceleration was tested on the NVIDIA GRID K1 and GRID K2 adapters by using the Lenovo System x3650 M5 server and the Lenovo NeXtScale nx360 M5 server. Each of these servers supports up to two GRID adapters. No significant performance differences were found between these two servers when used for graphics acceleration, and the results apply to both.

Because the vdga option offers a low user density (8 for GRID K1 and 4 for GRID K2), it is recommended that this configuration is used only for power users, designers, engineers, or scientists that require powerful graphics acceleration. Horizon 6.1 is needed to support higher user densities of up to 64 users per server with two GRID K1 adapters by using the hardware virtualization of the GPU (vgpu mode). Lenovo recommends that a high powered CPU, such as the E5-2680v3, is used for vdga and vgpu because accelerated graphics tends to put an extra load on the processor. For the vdga option, with only four or eight users per server, 128 GB of server memory should be sufficient even for the high end GRID K2 users who might need 16 GB or even 24 GB per VM. The Heaven benchmark is used to measure the per user frame rate for different GPUs, resolutions, and image quality. This benchmark is graphics-heavy and is fairly realistic for designers and engineers. Power users or knowledge workers usually have less intense graphics workloads and can achieve higher frame rates. Table 34 lists the results of the Heaven benchmark as frames per second (FPS) that are available to each user with the GRID K1 adapter by using vdga mode with DirectX 11. Table 34: Performance of GRID K1 vdga mode by using DirectX 11 Quality Tessellation Anti-Aliasing Resolution FPS High Normal 0 1024x768 15.8 High Normal 0 1280x768 13.1 High Normal 0 1280x1024 11.1 Table 35 lists the results of the Heaven benchmark as FPS that is available to each user with the GRID K2 adapter by using vdga mode with DirectX 11. Table 35: Performance of GRID K2 vdga mode by using DirectX 11 Quality Tessellation Anti-Aliasing Resolution FPS Ultra Extreme 8 1680x1050 28.4 Ultra Extreme 8 1920x1080 24.9 Ultra Extreme 8 1920x1200 22.9 Ultra Extreme 8 2560x1600 13.8 Table 36 lists the results of the Heaven benchmark as FPS that are available to each user with the GRID K1 adapter by using vgpu mode with DirectX 11. The K180Q profile has worse performance than the K1 pass-through mode, which is a known problem. For more information, see #31 in the driver release notes from NVidia. 29 Reference Architecture: Lenovo Client Virtualization with VMware Horizon

Table 36: Performance of GRID K1 vgpu modes by using DirectX 11 Quality Tessellation Anti-Aliasing Resolution K180Q K160Q K140Q High Normal 0 1024x768 15.6 6.6 3.5 High Normal 0 1280x1024 10.8 5.5 N/A High Normal 0 1680x1050 8.3 N/A N/A High Normal 0 1920x1200 6.7 N/A N/A Table 37 lists the results of the Heaven benchmark as FPS that are available to each user with the GRID K2 adapter by using vgpu mode with DirectX 11. The K280Q profile has worse performance than the K2 pass-through mode, which is a known problem. For more information, see #31 in the driver release notes from NVidia. Table 37: Performance of GRID K2 vgpu modes by using DirectX 11 Quality Tessellation Anti-Aliasing Resolution K280Q K260Q K240Q High Normal 0 1680x1050 23.7 20.9 11.2 High Normal 0 1920x1200 N/A 17.3 8.9 Ultra Extreme 8 1680x1050 20.2 13.8 N/A Ultra Extreme 8 1920x1080 18.1 N/A N/A Ultra Extreme 8 1920x1200 18.4 10.8 N/A Ultra Extreme 8 2560x1600 9.0 N/A N/A The GRID K2 GPU has more than twice the performance of the GRID K1 GPU, even with the high quality, tessellation, and anti-aliasing options. This result is expected because of the relative performance characteristics of the GRID K1 and GRID K2 GPUs. The frame rate decreases as the display resolution increases. Because there are many variables when graphics acceleration is used, Lenovo recommends that testing is done in the customer environment to verify the performance for the required user workloads. For more information about the bill of materials (BOM) for GRID K1 and K2 GPUs for Lenovo System x3650 M5 and NeXtScale nx360 M5 servers, see the following corresponding BOMs: BOM for enterprise and SMB compute servers section on page 49. BOM for hyper-converged compute servers on page 58. 30 Reference Architecture: Lenovo Client Virtualization with VMware Horizon
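A small sketch of the user-density arithmetic behind these recommendations. The GPUs-per-adapter values (four for GRID K1, two for GRID K2) are NVIDIA board specifications assumed here rather than taken from this document; users per GPU is 1 for vdga pass-through and up to 8 for vgpu, depending on the profile.

```python
def gpu_users_per_server(adapters, gpus_per_adapter, users_per_gpu):
    """Users per server for GPU-backed desktops.

    gpus_per_adapter: 4 for GRID K1 and 2 for GRID K2 (assumed board specs).
    users_per_gpu: 1 for vdga; 1-8 for vgpu, set by the chosen profile.
    """
    return adapters * gpus_per_adapter * users_per_gpu

print(gpu_users_per_server(2, 4, 1))   # 8  - vdga with two GRID K1 adapters
print(gpu_users_per_server(2, 2, 1))   # 4  - vdga with two GRID K2 adapters
print(gpu_users_per_server(2, 4, 8))   # 64 - vgpu with two GRID K1 adapters
```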

4.7 Management servers

Management servers should have the same hardware specification as compute servers so that they can be used interchangeably in a worst-case scenario. The VMware Horizon management servers also use the same ESXi hypervisor, but have management VMs instead of user desktops. Table 38 lists the VM requirements and performance characteristics of each management service.

Table 38: Characteristics of VMware Horizon management services
Management service | Virtual processors | System memory | Storage | Windows OS | HA needed | Performance characteristic
vCenter Server | 8 | 12 GB | 60 GB | 2012 R2 | No | Up to 2000 VMs.
vCenter SQL Server | 4 | 8 GB | 200 GB | 2012 R2 | Yes | Double the virtual processors and memory for more than 2500 users.
View Connection Server | 4 | 16 GB | 60 GB | 2012 R2 | Yes | Up to 2000 connections.

Table 39 lists the number of management VMs for each size of users following the high-availability and performance characteristics. The number of vCenter servers is half of the number of vCenter clusters because each vCenter server can handle two clusters of up to 1000 desktops.

Table 39: Management VMs needed
Horizon management service VM | 600 users | 1500 users | 4500 users | 10000 users
vCenter servers | 1 | 1 | 3 | 7
vCenter SQL servers | 2 (1+1) | 2 (1+1) | 2 (1+1) | 2 (1+1)
View Connection Server | 2 (1+1) | 2 (1+1) | 4 (3+1) | 7 (5+2)

Each management VM requires a certain amount of virtual processors, memory, and disk. There is enough capacity in the management servers for all of these VMs. Table 40 lists an example mapping of the management VMs to the four physical management servers for 4500 users.

Table 40: Management server VM mapping (4500 users)
Management service for 4500 stateless users | Management server 1 | Management server 2 | Management server 3 | Management server 4
vCenter servers (3) | 1 | 1 | 1 |
vCenter database (2) | 1 | 1 | |
View Connection Server (4) | 1 | 1 | 1 | 1

It is assumed that common services, such as Microsoft Active Directory, Dynamic Host Configuration Protocol (DHCP), domain name server (DNS), and Microsoft licensing servers, exist in the customer environment.

For shared storage systems that support block data transfers only, it is also necessary to provide some file I/O servers that support CIFS or NFS shares and translate file requests to the block storage system. For high availability, two or more Windows storage servers are clustered.

Based on the number and type of desktops, Table 41 lists the recommended number of physical management servers. In all cases, there is redundancy in the physical management servers and the management VMs.

Table 41: Management servers needed
Management servers | 600 users | 1500 users | 4500 users | 10000 users
Stateless desktop model | 2 | 2 | 4 | 7
Dedicated desktop model | 2 | 2 | 2 | 4
Windows Storage Server 2012 | 2 | 2 | 3 | 4

For more information, see BOM for enterprise and SMB management servers on page 61.

4.8 Systems management

Lenovo XClarity Administrator is a centralized resource management solution that reduces complexity, speeds up response, and enhances the availability of Lenovo server systems and solutions. The Lenovo XClarity Administrator provides agent-free hardware management for Lenovo's System x rack servers and Flex System compute nodes and components, including the Chassis Management Module (CMM) and Flex System I/O modules.

Figure 10 shows the Lenovo XClarity Administrator interface, where Flex System components and rack servers are managed and are seen on the dashboard. Lenovo XClarity Administrator is a virtual appliance that is quickly imported into a virtualized environment server configuration.

Figure 10: XClarity Administrator interface 4.9 Shared storage VDI workloads, such as virtual desktop provisioning, VM loading across the network, and access to user profiles and data files place huge demands on network shared storage. Experimentation with VDI infrastructures shows that the input/output operation per second (IOPS) performance takes precedence over storage capacity. This precedence means that more slower speed drives are needed to match the performance of fewer higher speed drives. Even with the fastest HDDs available today (15k rpm), there can still be excess capacity in the storage system because extra spindles are needed to provide the IOPS performance. From experience, this extra storage is more than sufficient for the other types of data such as SQL databases and transaction logs. The large rate of IOPS, and therefore, large number of drives needed for dedicated virtual desktops can be ameliorated to some extent by caching data in flash memory or SSD drives. The storage configurations are based on the peak performance requirement, which usually occurs during the so-called logon storm. This is when all workers at a company arrive in the morning and try to start their virtual desktops, all at the same time. 33 Reference Architecture: Lenovo Client Virtualization with VMware Horizon

It is always recommended that user data files (shared folders) and user profile data are stored separately from the user image. By default, this must be done for stateless virtual desktops and should also be done for dedicated virtual desktops. It is assumed that 100% of the users at peak load times require concurrent access to user data and profiles.

In View 5.1, VMware introduced the View Storage Accelerator (VSA) feature, which is based on the ESXi Content-Based Read Cache (CBRC). VSA provides a per-host RAM-based read cache for VMs, which considerably reduces the read I/O requests that are issued to the shared storage. Performance measurements by Lenovo show that VSA has a negligible effect on the number of virtual desktops that can be used on a compute server while it reduces the read requests to storage by one-fifth.

Stateless virtual desktops can use SSDs for the linked clones and the replicas. Table 42 lists the peak IOPS and disk space requirements for stateless virtual desktops on a per-user basis.

Table 42: Stateless virtual desktop shared storage performance requirements
Stateless virtual desktops | Protocol | Size | IOPS | Write %
vswap (recommended to be disabled) | NFS or Block | 0 | 0 | 0
User data files | CIFS/NFS | 5 GB | 1 | 75%
User profile (through MSRP) | CIFS | 100 MB | 0.8 | 75%

Table 43 summarizes the peak IOPS and disk space requirements for dedicated or shared stateless virtual desktops on a per-user basis. Persistent virtual desktops require a high number of IOPS and a large amount of disk space for the VMware linked clones; note that the linked clones also can grow in size over time. Stateless users that require mobility and have no local SSDs also fall into this category. The last three rows of Table 43 are the same as in Table 42 for stateless desktops.

Table 43: Dedicated or shared stateless virtual desktop shared storage performance requirements
Dedicated virtual desktops | Protocol | Size | IOPS | Write %
Replica | Block/NFS | 30 GB | |
Linked clones and user AppData folder | Block/NFS | 10 GB | 18 | 85%
vswap (recommended to be disabled) | NFS or Block | 0 | 0 | 0
User data files | CIFS/NFS | 5 GB | 1 | 75%
User profile (through MSRP) | CIFS | 100 MB | 0.8 | 75%

The sizes and IOPS for user data files and user profiles that are listed in Table 42 and Table 43 can vary, depending on the customer environment. For example, power users might require 10 GB and five IOPS for user files because of the applications they use. It is assumed that 100% of the users at peak load times require concurrent access to user data files and profiles.

Many customers need a hybrid environment of stateless and dedicated desktops for their users. The IOPS for dedicated users outweigh those for stateless users; therefore, it is best to bias towards dedicated users in any storage controller configuration.
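A hedged sketch that aggregates the per-user figures in Table 43 into a total load for shared storage; the dictionary layout and function name are mine, and per-image replicas and linked-clone growth are deliberately left out.

```python
# Per-user figures taken from Table 43 (dedicated desktops); the function and
# the 100%-concurrency assumption mirror the sizing approach described above.
PER_USER = {
    # name: (size_gb, iops)
    "linked clone": (10, 18),
    "user data files": (5, 1),
    "user profile": (0.1, 0.8),
}

def shared_storage_load(users):
    """Aggregate peak IOPS and capacity for dedicated desktops."""
    iops = sum(users * v[1] for v in PER_USER.values())
    capacity_tb = sum(users * v[0] for v in PER_USER.values()) / 1024
    return iops, capacity_tb

print(shared_storage_load(1500))   # ~29,700 IOPS and ~22 TB, before replicas and growth
```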

The storage configurations that are presented in this section include conservative assumptions about the VM size, changes to the VM, and user data sizes to ensure that the configurations can cope with the most demanding user scenarios. This reference architecture describes the following different shared storage solutions: Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Fibre Channel (FC) Block I/O to IBM Storwize V7000 / Storwize V3700 storage using FC over Ethernet (FCoE) Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Internet Small Computer System Interface (iscsi) Block I/O to IBM FlashSystem 840 with Atlantis USX storage acceleration 4.9.1 IBM Storwize V7000 and IBM Storwize V3700 storage The IBM Storwize V7000 generation 2 storage system supports up to 504 drives by using up to 20 expansion enclosures. Up to four controller enclosures can be clustered for a maximum of 1056 drives (44 expansion enclosures). The Storwize V7000 generation 2 storage system also has a 64 GB cache, which is expandable to 128 GB. The IBM Storwize V3700 storage system is somewhat similar to the Storwize V7000 storage, but is restricted to a maximum of five expansion enclosures for a total of 120 drives. The maximum size of the cache for the Storwize V3700 is 8 GB. The Storwize cache acts as a read cache and a write-through cache and is useful to cache commonly used data for VDI workloads. The read and write cache are managed separately. The write cache is divided up across the storage pools that are defined for the Storwize storage system. In addition, Storwize storage offers the IBM Easy Tier function, which allows commonly used data blocks to be transparently stored on SSDs. There is a noticeable improvement in performance when Easy Tier is used, which tends to tail off as SSDs are added. It is recommended that approximately 10% of the storage space is used for SSDs to give the best balance between price and performance. The tiered storage support of Storwize storage also allows a mixture of different disk drives. Slower drives can be used for shared folders and profiles; faster drives and SSDs can be used for persistent virtual desktops and desktop images. To support file I/O (CIFS and NFS) into Storwize storage, Windows storage servers must be added, as described in Management servers on page 31. The fastest HDDs that are available for Storwize storage are 15k rpm drives in a RAID 10 array. Storage performance can be significantly improved with the use of Easy Tier. If this performance is insufficient, SSDs or alternatives (such as a flash storage system) are required. For this reference architecture, it is assumed that each user has 5 GB for shared folders and profile data and uses an average of 2 IOPS to access those files. Investigation into the performance shows that 600 GB 10k rpm drives in a RAID 10 array give the best ratio of input/output operation performance to disk space. If users need more than 5 GB for shared folders and profile data then 900 GB (or even 1.2 TB), 10k rpm drives can be used instead of 600 GB. If less capacity is needed, the 300 GB 15k rpm drives can be used for shared folders and profile data. 35 Reference Architecture: Lenovo Client Virtualization with VMware Horizon
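As a rough cross-check of the drive selection described above, the sketch below sizes a RAID 10 pool for user folders and profiles from the 5 GB and 2 IOPS per-user assumptions. The 150 IOPS per 10k rpm drive is my assumption, not a figure from this document; the validated counts in Table 45 fall between the capacity-driven floor and this raw-IOPS figure because the Storwize cache and Easy Tier absorb part of the write load.

```python
import math

def raid10_user_folder_drives(users, gb_per_user=5, iops_per_user=2,
                              write_fraction=0.75, drive_gb=600, drive_iops=150):
    """Capacity-driven and IOPS-driven drive counts for a RAID 10 pool.

    drive_iops is an assumed figure for a 10k rpm SAS drive; RAID 10 halves
    usable capacity and doubles the back-end cost of each write.
    """
    capacity_drives = math.ceil(users * gb_per_user / (drive_gb / 2))
    host_iops = users * iops_per_user
    backend_iops = host_iops * (1 - write_fraction) + host_iops * write_fraction * 2
    iops_drives = math.ceil(backend_iops / drive_iops)
    return capacity_drives, iops_drives

print(raid10_user_folder_drives(1500))   # (25, 35); Table 45 specifies 28 drives
```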

Persistent virtual desktops require both a high number of IOPS and a large amount of disk space for the linked clones, and the linked clones can grow in size over time. For persistent desktops, 300 GB 15k rpm drives configured as RAID 10 were not sufficient, and extra drives were required to achieve the necessary performance. Therefore, it is recommended to use a mixture of both speeds of drives for persistent desktops and for shared folders and profile data.

Depending on the number of master images, one or more RAID 1 arrays of SSDs can be used to store the VM master images. This configuration helps with the performance of provisioning virtual desktops; that is, a boot storm. Each master image requires at least double its space. The actual number of SSDs in the array depends on the number and size of images. In general, more users require more images.

Table 44: VM images and SSDs
| 600 users | 1500 users | 4500 users | 10000 users
Image size | 30 GB | 30 GB | 30 GB | 30 GB
Number of master images | 2 | 4 | 8 | 16
Required disk space (doubled) | 120 GB | 240 GB | 480 GB | 960 GB
400 GB SSD configuration | RAID 1 (2) | RAID 1 (2) | Two RAID 1 arrays (4) | Four RAID 1 arrays (8)

Table 45 lists the Storwize storage configuration that is needed for each of the stateless user counts. Only one Storwize control enclosure is needed for the full range of user counts. Based on the assumptions in Table 45, the IBM Storwize V3700 storage system can support only up to 7000 users.

Table 45: Storwize storage configuration for stateless users
Stateless storage | 600 users | 1500 users | 4500 users | 10000 users
400 GB SSDs in RAID 1 for master images | 2 | 2 | 4 | 8
Hot spare SSDs | 2 | 2 | 4 | 4
600 GB 10k rpm in RAID 10 for users | 12 | 28 | 80 | 168
Hot spare 600 GB drives | 2 | 2 | 4 | 12
Storwize control enclosures | 1 | 1 | 1 | 1
Storwize expansion enclosures | 0 | 1 | 3 | 7

Table 46 lists the Storwize storage configuration that is needed for each of the dedicated or shared stateless user counts. The top four rows of Table 46 are the same as for stateless desktops. Lenovo recommends clustering the IBM Storwize V7000 storage system and the use of a separate control enclosure for every 2500 or so dedicated virtual desktops. For the 4500 and 10000 user solutions, the drives are divided equally across all of the controllers. Based on the assumptions in Table 46, the IBM Storwize V3700 storage system can support up to 1200 users.

Table 46: Storwize storage configuration for dedicated or shared stateless users
Dedicated or shared stateless storage | 600 users | 1500 users | 4500 users | 10000 users
400 GB SSDs in RAID 1 for master images | 2 | 2 | 4 | 8
Hot spare SSDs | 2 | 2 | 4 | 8
600 GB 10k rpm in RAID 10 for users | 12 | 28 | 80 | 168
Hot spare 600 GB 10k rpm drives | 2 | 2 | 4 | 12
300 GB 15k rpm in RAID 10 for persistent desktops | 40 | 104 | 304 | 672
Hot spare 300 GB 15k rpm drives | 2 | 4 | 4 | 12
400 GB SSDs for Easy Tier | 4 | 12 | 32 | 64
Storwize control enclosures | 1 | 1 | 2 | 4
Storwize expansion enclosures | 2 | 6 | 16 (2 x 8) | 36 (4 x 9)

Refer to the BOM for shared storage on page 65 for more details.

4.9.2 IBM FlashSystem 840 with Atlantis USX storage acceleration

The IBM FlashSystem 840 storage has low latencies and supports a high number of IOPS. It can be used for VDI solutions; however, on its own it is not cost-effective. If the Atlantis simple hybrid VM is used to provide storage optimization (capacity reduction and IOPS reduction), it becomes a much more cost-effective solution.

Each FlashSystem 840 storage device supports up to 20 TB or 40 TB of storage, depending on the size of the flash modules. To maintain the integrity and redundancy of the storage, it is recommended to use RAID 5. It is not recommended to use this device for smaller user counts because it is not cost-efficient.

Persistent virtual desktops require the most storage space and are the best candidate for this storage device. The device also can be used for user folders, snap clones, and image management, although these items can be placed on other, slower shared storage. The amount of required storage for persistent virtual desktops varies and depends on the environment; Table 47 is provided for guidance purposes only.

Table 47: FlashSystem 840 storage configuration for dedicated users with Atlantis USX
Dedicated storage | 1000 users | 3000 users | 5000 users | 10000 users
IBM FlashSystem 840 storage | 1 | 1 | 1 | 1
2 TB flash module | 4 | 8 | 12 | 0
4 TB flash module | 0 | 0 | 0 | 12
Capacity | 4 TB | 12 TB | 20 TB | 40 TB

Refer to the BOM for OEM storage hardware on page 69 for more details.
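The capacity row of Table 47 can be reproduced with the following sketch; the assumption that RAID 5 consumes one module's worth of parity plus one spare module is my reading of the layout, not a statement from this document.

```python
def flashsystem_840_usable_tb(modules, module_tb):
    """Usable RAID 5 capacity consistent with Table 47, assuming one module of
    parity and one spare module (an assumed layout)."""
    return (modules - 2) * module_tb

for modules, size in ((4, 2), (8, 2), (12, 2), (12, 4)):
    print(modules, size, flashsystem_840_usable_tb(modules, size))
# -> 4, 12, 20 and 40 TB, matching the capacities listed in Table 47
```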

4.10 Networking

The main driver for the type of networking that is needed for VDI is the connection to shared storage. If the shared storage is block-based (such as the IBM Storwize V7000), it is likely that a SAN based on an 8 or 16 Gbps FC, 10 GbE FCoE, or 10 GbE iSCSI connection is needed. Other types of storage can be network-attached by using 1 Gb or 10 Gb Ethernet. Also, user and management virtual local area networks (VLANs) are needed, which require 1 Gb or 10 Gb Ethernet, as described in the Lenovo Client Virtualization reference architecture, which is available at this website: lenovopress.com/tips1275.

Automated failover and redundancy of the entire network infrastructure and shared storage is important. This failover and redundancy is achieved by having at least two of everything and ensuring that there are dual paths between the compute servers, management servers, and shared storage.

If only a single Flex System Enterprise Chassis is used, the chassis switches are sufficient and no other TOR switch is needed. For rack servers or more than one Flex System Enterprise Chassis, TOR switches are required.

For more information, see BOM for racks on page 67.

4.10.1 10 GbE networking

For 10 GbE networking with CIFS, NFS, or iSCSI, the Lenovo RackSwitch G8124E and G8264R TOR switches are recommended because they support VLANs by using Virtual Fabric. Redundancy and automated failover are available by using link aggregation, such as Link Aggregation Control Protocol (LACP), and two of everything. For the Flex System chassis, pairs of the EN4093R switch should be used and connected to a G8124 or G8264 TOR switch. The TOR 10 GbE switches are needed for multiple Flex System chassis or for external connectivity. iSCSI also requires converged network adapters (CNAs) that have the LOM extension.

Table 48 lists the TOR 10 GbE network switches for each user size.

Table 48: TOR 10 GbE network switches needed
10 GbE TOR network switch | 600 users | 1500 users | 4500 users | 10000 users
G8124E 24-port switch | 2 | 2 | 0 | 0
G8264R 64-port switch | 0 | 0 | 2 | 2

4.10.2 10 GbE FCoE networking

FCoE on a 10 GbE network requires converged networking switches, such as pairs of the CN4093 switch for the Flex System chassis and the G8264CS TOR converged switch. The TOR converged switches are needed for multiple Flex System chassis or for clustering of multiple IBM Storwize V7000 storage systems. FCoE also requires CNAs that have the LOM extension. Table 49 summarizes the TOR converged network switches for each user size.

Table 49: TOR 10 GbE converged network switches needed
10 GbE TOR network switch | 600 users | 1500 users | 4500 users | 10000 users
G8264CS 64-port switch (including up to 12 Fibre Channel ports) | 2 | 2 | 2 | 2

4.10.3 Fibre Channel networking

Fibre Channel (FC) networking for shared storage requires SAN switches. Pairs of the FC3171 8 Gbps FC or FC5022 16 Gbps FC SAN switch are needed for the Flex System chassis. For top of rack, the Lenovo 3873 16 Gbps FC switches should be used. The TOR SAN switches are needed for multiple Flex System chassis or for clustering of multiple IBM Storwize V7000 storage systems. Table 50 lists the TOR FC SAN network switches for each user size.

Table 50: TOR FC network switches needed
Fibre Channel TOR network switch | 600 users | 1500 users | 4500 users | 10000 users
Lenovo 3873 AR2 24-port switch | 2 | 2 | |
Lenovo 3873 BR1 48-port switch | | | 2 | 2

4.10.4 1 GbE administration networking

A 1 GbE network should be used to administer all of the other devices in the system. Separate 1 GbE switches are used for the IT administration network. Lenovo recommends that redundancy is also built into this network at the switch level. At a minimum, a second switch should be available in case a switch goes down. Table 51 lists the number of 1 GbE switches for each user size.

Table 51: TOR 1 GbE network switches needed
1 GbE TOR network switch | 600 users | 1500 users | 4500 users | 10000 users
G8052 48-port switch | 2 | 2 | 2 | 2

Table 52 shows the number of 1 GbE connections that are needed for the administration network and switches for each type of device. The total number of connections is the sum, across device types, of the device count multiplied by the number of connections for that device.

Table 52: 1 GbE connections needed
Device | Number of 1 GbE connections for administration
System x rack server | 1
Flex System Enterprise Chassis CMM | 2
Flex System Enterprise Chassis switches | 1 per switch (optional)
IBM Storwize V7000 storage controller | 4
TOR switches | 1
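A small sketch of the connection-count rule just described, using the per-device figures from Table 52; the example inventory is hypothetical and not one of the deployments in this document.

```python
# Connections-per-device figures come from Table 52; the device counts below
# are a made-up example configuration.
CONNECTIONS = {
    "System x rack server": 1,
    "Flex System Enterprise Chassis CMM": 2,
    "Flex System Enterprise Chassis switch": 1,   # optional
    "IBM Storwize V7000 storage controller": 4,
    "TOR switch": 1,
}

def admin_ports_needed(inventory):
    """Total 1 GbE administration ports: sum of count x connections per device."""
    return sum(count * CONNECTIONS[device] for device, count in inventory.items())

example = {"System x rack server": 28, "IBM Storwize V7000 storage controller": 1,
           "TOR switch": 4}
print(admin_ports_needed(example))   # 28 + 4 + 4 = 36 ports
```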

4.11 Racks

The number of racks and chassis for Flex System compute nodes depends on the precise configuration that is supported and the total height of all of the component parts: servers, storage, networking switches, and Flex System Enterprise Chassis (if applicable). The number of racks for System x servers also depends on the total height of all of the components. For more information, see the BOM for racks section on page 68.

4.12 Proxy server

As shown in Figure 1 on page 2, there is a proxy server behind the firewall. This proxy server performs several important tasks, including user authorization and secure access, traffic management, and providing high availability to the VMware connection servers. An example is the BIG-IP system from F5.

The F5 BIG-IP Access Policy Manager (APM) provides user authorization and secure access. Other options, including more advanced traffic management options, a single namespace, and username persistence, are available when BIG-IP Local Traffic Manager (LTM) is added to APM. APM and LTM also provide various logging and reporting facilities for the system administrator and a web-based configuration utility that is called iApp.

Figure 11 shows the BIG-IP APM in the demilitarized zone (DMZ) to protect access to the rest of the VDI infrastructure, including the Active Directory servers. An Internet user presents security credentials by using a secure HTTP connection (TCP 443), which APM verifies against Active Directory.

Figure 11: Traffic Flow for BIG-IP Access Policy Manager

The PCoIP connection (UDP 4172) is then natively proxied by APM in a reliable and secure manner, passing it internally to any available VMware connection server within the View pod, which then interprets the connection as a normal internal PCoIP session. This process provides the scalability benefits of a BIG-IP appliance and gives APM and LTM visibility into the PCoIP traffic, which enables more advanced access management decisions. This process also removes the need for VMware secure connection servers. Untrusted internal

users can also be secured by directing all traffic through APM. Alternatively, trusted internal users can directly use VMware connection servers. Various deployment models are described in the F5 BIG-IP deployment guide. For more information, see Deploying F5 with VMware View and Horizon, which is available from this website: f5.com/pdf/deployment-guides/vmware-view5-iapp-dg.pdf

For this reference architecture, the BIG-IP APM was tested to determine if it introduced any performance degradation because of the added functionality of authentication, high availability, and proxy serving. The BIG-IP APM also includes facilities to improve the performance of the PCoIP protocol. External clients often are connected over a relatively slow wide area network (WAN). To reduce the effects of a slow network connection, the external clients were connected by using a 10 GbE local area network (LAN).

Table 53 shows the results with and without the F5 BIG-IP APM by using Login VSI against a single compute server. The results show that APM can slightly increase the throughput. Testing was not done to determine the performance with many thousands of simultaneous users because this scenario is highly dependent on a customer's environment and network configuration.

Table 53: Performance comparison of using F5 BIG-IP APM
| Without F5 BIG-IP | With F5 BIG-IP
Processor with medium workload (Dedicated) | 208 users | 218 users

4.13 Deployment models

This section describes the following examples of different deployment models:
Flex Solution with single Flex System chassis
Flex System with 4500 stateless users
System x server with Storwize V7000 and FCoE

4.13.1 Deployment example 1: Flex Solution with single Flex System chassis

As shown in Table 54, this example is for 1250 stateless users who are using a single Flex System chassis. There are 10 compute nodes supporting 125 users in normal mode and 156 users in the failover case of up to two nodes not being available. The IBM Storwize V7000 storage is connected by using FC directly to the Flex System chassis.

Table 54: Deployment configuration for 1250 stateless users with Flex System x240 compute nodes
Stateless virtual desktop | 1250 users
x240 compute servers | 10
x240 management servers | 2
x240 Windows storage servers (WSS) | 2
Total 300 GB 15k rpm drives | 40
Hot spare 300 GB 15k drives | 2
Total 400 GB SSDs | 6
Storwize V7000 controller enclosure | 1
Storwize V7000 expansion enclosures | 1
Flex System EN4093R switches | 2
Flex System FC5022 switches | 2
Flex System Enterprise Chassis | 1
Total height | 14U
Number of Flex System racks | 1

4.13.2 Deployment example 2: Flex System with 4500 stateless users

As shown in Table 55, this example is for 4500 stateless users who are using a Flex System based chassis, with each of the 36 compute nodes supporting 125 users in normal mode and 150 in the failover case.

Table 55: Deployment configuration for 4500 stateless users
Stateless virtual desktop | 4500 users
Compute servers | 36
Management servers | 4
V7000 Storwize controller | 1
V7000 Storwize expansion | 3
Flex System EN4093R switches | 6
Flex System FC3171 switches | 6
Flex System Enterprise Chassis | 3
10 GbE network switches | 2 x G8264R
1 GbE network switches | 2 x G8052
SAN network switches | 2 x SAN24B-5
Total height | 44U
Number of racks | 2

Figure 12 shows the deployment diagram for this configuration. The first rack contains the compute and management servers and the second rack contains the shared storage.

Figure 12: Deployment diagram for 4500 stateless users using Storwize V7000 shared storage (each compute server hosts 125 user VMs; the vCenter, vCenter SQL, and View Connection Server VMs are spread across the four management servers)

Figure 13 shows the 10 GbE and Fibre Channel networking that is required to connect the three Flex System Enterprise Chassis to the Storwize V7000 shared storage. The detail is shown for one chassis in the middle and abbreviated for the other two chassis. The 1 GbE management infrastructure network is not shown for the purpose of clarity.

Redundant 10 GbE networking is provided at the chassis level with two EN4093R switches and at the rack level by using two G8264R TOR switches. Redundant SAN networking is also used with two FC3171 switches and two top of rack SAN24B-5 switches. The two controllers in the Storwize V7000 are redundantly connected to each of the SAN24B-5 switches.

Figure 13: Network diagram for 4500 stateless users using Storwize V7000 shared storage 44 Reference Architecture: Lenovo Client Virtualization with VMware Horizon

4.13.3 Deployment example 3: System x server with Storwize V7000 and FCoE

This deployment example is derived from an actual customer deployment with 3000 users, 90% of which are stateless and need a 2 GB VM. The remaining 10% (300 users) need a dedicated VM of 3 GB. Therefore, the average VM size is 2.1 GB.

Assuming 125 users per server in the normal case and 150 users in the failover case, 3000 users need 24 compute servers. A maximum of four compute servers can be down for a 5:1 failover ratio. Each compute server needs at least 315 GB of RAM (150 x 2.1 GB), not including the hypervisor. This figure is rounded up to 384 GB, which should be more than enough and can cope with up to 125 users, all with 3 GB VMs.

Each compute server is a System x3550 server with two Xeon E5-2650 v2 series processors, 24x 16 GB 1866 MHz DIMMs (384 GB of RAM), an embedded dual-port 10 GbE virtual fabric adapter (A4MC), and a license for FCoE/iSCSI (A2TE). For interchangeability between the servers, all of them have a RAID controller with the 1 GB flash upgrade and two S3700 400 GB MLC enterprise SSDs that are configured as RAID 0 for the stateless VMs.

In addition, there are three management servers. For interchangeability in case of server failure, these extra servers are configured in the same way as the compute servers. All of the servers have a USB key with ESXi 5.5.

There also are two Windows storage servers that are configured differently, with HDDs in a RAID 1 array for the operating system. Some spare, preloaded drives are kept to quickly deploy a replacement Windows storage server if one should fail. The replacement server can be one of the compute servers. The idea is to quickly get a replacement online before the second one also fails. Although there is a low likelihood of this situation occurring, it reduces the window of failure for the two critical Windows storage servers.

All of the servers communicate with the Storwize V7000 shared storage by using FCoE through two TOR RackSwitch G8264CS 10 GbE converged switches. All 10 GbE and FC connections are configured to be fully redundant. As an alternative, iSCSI with G8264 10 GbE switches can be used.

For 300 persistent users and 2700 stateless users, a mixture of disk configurations is needed. All of the users require space for user folders and profile data. Stateless users need space for master images, and persistent users need space for the virtual clones. Stateless users have local SSDs to cache everything else, which substantially decreases the amount of shared storage. For stateless servers with SSDs, a server must be taken offline and have maintenance performed on it only after all of the users are logged off, rather than being able to use vMotion. If a server crashes, this issue is immaterial.

It is estimated that this configuration requires the following IBM Storwize V7000 drives:
Two 400 GB SSDs in RAID 1 for master images
Thirty 300 GB 15k rpm drives in RAID 10 for persistent images
Four 400 GB SSDs for Easy Tier for persistent images
Sixty 600 GB 10k rpm drives in RAID 10 for user folders

This configuration requires 96 drives, which fit into one Storwize V7000 control enclosure and three expansion enclosures. Figure 14 shows the deployment configuration for this example in a single rack. Because the rack has 36 items, it should have the capability for six power distribution units for 1+1 power redundancy, where each PDU has 12 C13 sockets.

Figure 14: Deployment configuration for 3000 stateless users with System x servers 46 Reference Architecture: Lenovo Client Virtualization with VMware Horizon
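The compute sizing for deployment example 3 can be reproduced with a few lines of arithmetic; this is only a restatement of the numbers above, with the variable names being mine.

```python
import math

# Deployment example 3: 3000 users, 90% stateless at 2 GB and 10% dedicated at 3 GB.
users, frac_dedicated = 3000, 0.10
avg_vm_gb = round((1 - frac_dedicated) * 2 + frac_dedicated * 3, 2)   # 2.1 GB
servers = math.ceil(users / 125)                                      # 24 servers in normal mode
ram_failover_gb = round(150 * avg_vm_gb)                              # 315 GB per server in failover
print(avg_vm_gb, servers, ram_failover_gb)   # rounded up to 384 GB of installed memory
```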

4.13.4 Deployment example 4: Hyper-converged using System x servers

This deployment example supports 750 dedicated virtual desktops with VMware VSAN in hybrid mode (SSDs and HDDs). Each VM needs 3 GB of RAM. Assuming 125 users per server, six servers are needed, with a minimum of five servers in case of failures (a ratio of 5:1).

Each virtual desktop is 40 GB. The total required capacity is approximately 86 TB, assuming full clones of the 750 VMs, one complete mirror of the data (FTT=1), space for swapping, and 30% extra capacity. However, the use of linked clones for VSAN means that a substantially smaller capacity is required, and 43 TB is more than sufficient.

This example uses six System x3550 M5 1U servers and two G8124E RackSwitch TOR switches. Each server has two E5-2680 v3 CPUs, 512 GB of memory, two network cards, two 400 GB SSDs, and six 1.2 TB HDDs. Two disk groups of one SSD and three HDDs are used for VSAN. Figure 15 shows the deployment configuration for this example.

Figure 15: Deployment configuration for 750 dedicated users using hyper-converged System x servers

4.13.5 Deployment example 5: Graphics acceleration for 120 CAD users

This example is for medium-level computer-aided design (CAD) users. Each user requires a persistent virtual desktop with 12 GB of memory and 80 GB of disk space. Two CAD users share a K2 GPU by using virtual GPU support. Therefore, a maximum of eight users can use a single NeXtScale nx360 M5 server with two GRID K2 graphics adapters. Each server needs 128 GB of system memory, using eight 16 GB DDR4 DIMMs.

For 120 CAD users, 15 servers are required, and three extra servers are used for failover in case of a hardware problem. In total, three NeXtScale n1200 chassis are needed for these 18 servers in 18U of rack space.

Storwize V7000 is used for shared storage for the virtual desktops and the CAD data store. This example uses a total of six Storwize enclosures, each with 22 HDDs and two SSDs. Each HDD is a 600 GB 15k rpm drive and each SSD is 400 GB. Assuming RAID 10 for the HDDs, the six enclosures can store about 36 TB of data. The total of 12 400 GB SSDs is more than the recommended 10% of the data capacity and is used for the Easy Tier function to improve read and write performance. The 120 VMs for the CAD users can use as much as 10 TB, but that amount still leaves a reasonable 26 TB of storage for CAD data.

In this example, it is assumed that two redundant x3550 M5 servers are used to manage the CAD data store; that is, all accesses for data from the CAD virtual desktop are routed into these servers and data is downloaded and uploaded from the data store into the virtual desktops. The total rack space for the compute servers, storage, and TOR switches for 1 GbE management network, 10 GbE user network and 8 Gbps FC SAN is 39U, as shown in Figure 16. Figure 16: Deployment configuration for 120 CAD users using NeXtScale servers with GPUs. 48 Reference Architecture: Lenovo Client Virtualization with VMware Horizon
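A hedged recap of the sizing arithmetic in deployment examples 4 and 5. The capacity formula below (VM disk plus swap, doubled for FTT=1, plus 30% growth) lands close to the roughly 86 TB quoted for example 4; the exact figure depends on rounding and on how swap and overhead are counted.

```python
import math

# Deployment example 4: rough VSAN capacity check for 750 full-clone desktops.
vms, vm_gb, swap_gb, ftt, growth = 750, 40, 3, 1, 0.30
capacity_tb = vms * (vm_gb + swap_gb) * (ftt + 1) * (1 + growth) / 1000
print(round(capacity_tb, 1))   # ~83.9 TB, close to the ~86 TB quoted above

# Deployment example 5: CAD user density and server count.
users_per_server = 2 * 2 * 2        # 2 GRID K2 boards x 2 GPUs x 2 users per GPU
servers = math.ceil(120 / users_per_server)
print(users_per_server, servers, servers + 3)   # 8 users/server, 15 servers + 3 for failover
```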

5 Appendix: Bill of materials This appendix contains the bill of materials (BOMs) for different configurations of hardware for VMware Horizon deployments. There are sections for user servers, management servers, storage, networking switches, chassis, and racks that are orderable from Lenovo. The last section is for hardware orderable from an OEM. The BOM lists in this appendix are not meant to be exhaustive and must always be double-checked with the configuration tools. Any discussion of pricing, support, and maintenance options is outside the scope of this document. For connections between TOR switches and devices (servers, storage, and chassis), the connector cables are configured with the device. The TOR switch configuration includes only transceivers or other cabling that is needed for failover or redundancy. 5.1 BOM for enterprise and SMB compute servers This section contains the bill of materials for enterprise and SMB compute servers. Flex System x240 Code Description Quantity 9532AC1 Flex System node x240 M5 Base Model 1 A5SX Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1 A5TE Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1 A5RM Flex System x240 M5 Compute Node 1 A5S0 System Documentation and Software-US English 1 A5SG Flex System x240 M5 2.5" HDD Backplane 1 A5RP Flex System CN4052 2-port 10Gb Virtual Fabric Adapter 1 Select extra network connectivity for FCoE or iscsi, 8Gb FC, or 16 Gb FC A5RV Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD) 1 A1BM Flex System FC3172 2-port 8Gb FC Adapter 1 A1BP Flex System FC5022 2-port 16Gb FC Adapter 1 Select SSD storage for stateless virtual desktops 5978 Select Storage devices - Lenovo-configured RAID 1 7860 Integrated Solid State Striping 1 A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 2 Select amount of system memory A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24 A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC417000 CL15 2133MHz LP LRDIMM 16 A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC417000 CL15 2133MHz LP LRDIMM 24 Select flash memory for ESXi hypervisor A5TJ Lenovo SD Media Adapter for System x 1 ASCH RAID Adapter for SD Media w/ VMware ESXi 5.5 U2 (1 SD Media) 1 49 Reference Architecture: Lenovo Client Virtualization with VMware Horizon

System x3550 M5 Code Description Quantity 5463AC1 System x3550 M5 1 A5BL Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1 A5C0 Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W 1 A58X System x3550 M5 8x 2.5" Base Chassis 1 A59V System x3550 M5 Planar 1 A5AG System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0) 1 A5AF System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0) 1 A5AX System x 550W High Efficiency Platinum AC Power Supply 2 6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2 A1ML Lenovo Integrated Management Module Advanced Upgrade 1 A5AB System x Advanced LCD Light path Kit 1 A5AK System x3550 M5 Slide Kit G4 1 A59F System Documentation and Software-US English 1 A59W System x3550 M5 4x 2.5" HS HDD Kit 1 A3YZ ServeRAID M5210 SAS/SATA Controller for System x 1 A3Z2 ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade 1 A4TR 300GB 15K 6Gbps SAS 2.5" G3HS HDD 2 A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x 1 Select extra network connectivity for FCoE or iscsi, 8Gb FC, or 16 Gb FC A5UV Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD) 1 3591 Brocade 8Gb FC Dual-port HBA for System x 1 7595 2U Bracket for Brocade 8GB FC Dual-port HBA for System x 1 88Y6854 5m LC-LC fiber cable (networking) 2 A2XV Brocade 16Gb FC Dual-port HBA for System x 1 88Y6854 5m LC-LC fiber cable (networking) 2 Select SSD storage for stateless virtual desktops 5978 Select Storage devices - Lenovo-configured RAID 1 2302 RAID configuration 1 A2K6 Primary Array - RAID 0 (2 drives required) 1 2499 Enable selection of Solid State Drives for Primary Array 1 A4U4 S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x 2 Select amount of system memory A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 24 A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC417000 CL15 2133MHz LP LRDIMM 16 A5B9 32GB TruDDR4 Memory (4Rx4, 1.2V) PC417000 CL15 2133MHz LP LRDIMM 24 Select flash memory for ESXi hypervisor A5R7 32GB Enterprise Value USB Memory Key 1 50 Reference Architecture: Lenovo Client Virtualization with VMware Horizon

System x3650 M5
Code  Description  Quantity
5462AC1  System x3650 M5  1
A5GW  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5EP  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5FD  System x3650 M5 2.5" Base without Power Supply  1
A5EA  System x3650 M5 Planar  1
A5FN  System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5R5  System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5AX  System x 550W High Efficiency Platinum AC Power Supply  2
6311  2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5FY  System x3650 M5 2.5" ODD/LCD Light Path Bay  1
A5G3  System x3650 M5 2.5" ODD Bezel with LCD Light Path  1
A4VH  Lightpath LCD Op Panel  1
A5EY  System Documentation and Software-US English  1
A5G6  x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)  1
A3YZ  ServeRAID M5210 SAS/SATA Controller for System x  1
A3Z2  ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade  1
A4TR  300GB 15K 6Gbps SAS 2.5" G3HS HDD  2
A5UT  Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x  1
9297  2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV  Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)  1
3591  Brocade 8Gb FC Dual-port HBA for System x  1
7595  2U Bracket for Brocade 8GB FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
A2XV  Brocade 16Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
Select SSD storage for stateless virtual desktops
5978  Select Storage devices - Lenovo-configured RAID  1
2302  RAID configuration  1
A2K6  Primary Array - RAID 0 (2 drives required)  1
2499  Enable selection of Solid State Drives for Primary Array  1
A4U4  S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x  2
Select amount of system memory
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  24
A5B9  32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM  16
A5B9  32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM  24
Select flash memory for ESXi hypervisor
A5R7  32GB Enterprise Value USB Memory Key  1

NeXtScale nx360 M5
Code  Description  Quantity
5465AC1  nx360 M5 Compute Node Base Model  1
A5HH  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5J0  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5JU  nx360 M5 Compute Node  1
A5JX  nx360 M5 IMM Management Interposer  1
A1MK  Lenovo Integrated Management Module Standard Upgrade  1
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5KD  nx360 M5 PCI bracket KIT  1
A5JF  System Documentation and Software-US English  1
A5JZ  nx360 M5 RAID Riser  1
A5V2  nx360 M5 2.5" Rear Drive Cage  1
A5K3  nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)  1
A40Q  Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x  1
A5UX  nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+  1
A5JV  nx360 M5 ML2 Riser  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ  Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)  1
3591  Brocade 8Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
A2XV  Brocade 16Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
Select SSD storage for stateless virtual desktops
AS7E  400GB 12G SAS 2.5" MLC G3HS Enterprise SSD  2
Select amount of system memory
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  24
A5B9  32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM  16
Select flash memory for ESXi hypervisor
A5TJ  Lenovo SD Media Adapter for System x  1
NeXtScale n1200 chassis
Code  Description  Quantity
5456HC1  n1200 Enclosure Chassis Base Model  1
A41D  n1200 Enclosure Chassis  1
A4MM  CFF 1300W Power Supply  6
6201  1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  6
A42S  System Documentation and Software - US English  1
A4AK  KVM Dongle Cable  1

NeXtScale nx360 M5 with graphics acceleration
Code  Description  Quantity
5465AC1  nx360 M5 Compute Node Base Model  1
A5HH  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5J0  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5JU  nx360 M5 Compute Node  1
A4MB  NeXtScale PCIe Native Expansion Tray  1
A5JX  nx360 M5 IMM Management Interposer  1
A1MK  Lenovo Integrated Management Module Standard Upgrade  1
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5KD  nx360 M5 PCI bracket KIT  1
A5JF  System Documentation and Software-US English  1
A5JZ  nx360 M5 RAID Riser  1
A5V2  nx360 M5 2.5" Rear Drive Cage  1
A5K3  nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)  1
A40Q  Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x  1
A5UX  nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+  1
A5JV  nx360 M5 ML2 Riser  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ  Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)  1
3591  Brocade 8Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
A2XV  Brocade 16Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
Select SSD storage for stateless virtual desktops
AS7E  400GB 12G SAS 2.5" MLC G3HS Enterprise SSD  2
Select amount of system memory
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  24
A5B9  32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM  16
Select GRID K1 or GRID K2 for graphics acceleration
A3GM  NVIDIA GRID K1  2
A3GN  NVIDIA GRID K2  2
Select flash memory for ESXi hypervisor
A5TJ  Lenovo SD Media Adapter for System x  1

5.2 BOM for hosted desktops
Table 20 on page 20 lists the number of servers that are needed for the different numbers of users, and this section contains the corresponding bill of materials.
Flex System x240
Code  Description  Quantity
9532AC1  Flex System node x240 M5 Base Model  1
A5SX  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5TE  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5RM  Flex System x240 M5 Compute Node  1
A5S0  System Documentation and Software-US English  1
A5SG  Flex System x240 M5 2.5" HDD Backplane  1
A5RP  Flex System CN4052 2-port 10Gb Virtual Fabric Adapter  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5RV  Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD)  1
A1BM  Flex System FC3172 2-port 8Gb FC Adapter  1
A1BP  Flex System FC5022 2-port 16Gb FC Adapter  1
Select amount of system memory
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  8
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  16
Select flash memory for ESXi hypervisor
A5TJ  Lenovo SD Media Adapter for System x  1
ASCH  RAID Adapter for SD Media w/ VMware ESXi 5.5 U2 (1 SD Media)  1

System x3550 M5
Code  Description  Quantity
5463AC1  System x3550 M5  1
A5BL  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5C0  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A58X  System x3550 M5 8x 2.5" Base Chassis  1
A59V  System x3550 M5 Planar  1
A5AG  System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)  1
A5AF  System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0)  1
A5AX  System x 550W High Efficiency Platinum AC Power Supply  2
6311  2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5AB  System x Advanced LCD Light path Kit  1
A5AK  System x3550 M5 Slide Kit G4  1
A59F  System Documentation and Software-US English  1
A59W  System x3550 M5 4x 2.5" HS HDD Kit  1
A3YZ  ServeRAID M5210 SAS/SATA Controller for System x  1
A3Z2  ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade  1
A5UT  Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV  Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)  1
3591  Brocade 8Gb FC Dual-port HBA for System x  1
7595  2U Bracket for Brocade 8GB FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
A2XV  Brocade 16Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
Select amount of system memory
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  8
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  16
Select flash memory for ESXi hypervisor
A5R7  32GB Enterprise Value USB Memory Key  1

System x3650 M5
Code  Description  Quantity
5462AC1  System x3650 M5  1
A5GW  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5EP  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5FD  System x3650 M5 2.5" Base without Power Supply  1
A5EA  System x3650 M5 Planar  1
A5FN  System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5R5  System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5AX  System x 550W High Efficiency Platinum AC Power Supply  2
6311  2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5FY  System x3650 M5 2.5" ODD/LCD Light Path Bay  1
A5G3  System x3650 M5 2.5" ODD Bezel with LCD Light Path  1
A4VH  Lightpath LCD Op Panel  1
A5EY  System Documentation and Software-US English  1
A5G6  x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)  1
A3YZ  ServeRAID M5210 SAS/SATA Controller for System x  1
A3Z2  ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade  1
A5UT  Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x  1
9297  2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV  Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)  1
3591  Brocade 8Gb FC Dual-port HBA for System x  1
7595  2U Bracket for Brocade 8GB FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
A2XV  Brocade 16Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
Select amount of system memory
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  8
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  16
Select flash memory for ESXi hypervisor
A5R7  32GB Enterprise Value USB Memory Key  1

NeXtScale nx360 M5
Code  Description  Quantity
5465AC1  nx360 M5 Compute Node Base Model  1
A5HH  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5J0  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5JU  nx360 M5 Compute Node  1
A5JX  nx360 M5 IMM Management Interposer  1
A1MK  Lenovo Integrated Management Module Standard Upgrade  1
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5KD  nx360 M5 PCI bracket KIT  1
A5JF  System Documentation and Software-US English  1
A5JZ  nx360 M5 RAID Riser  1
A5V2  nx360 M5 2.5" Rear Drive Cage  1
A5K3  nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)  1
A40Q  Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x  1
A5UX  nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+  1
A5JV  nx360 M5 ML2 Riser  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ  Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)  1
3591  Brocade 8Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
A2XV  Brocade 16Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
Select amount of system memory
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  8
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  16
Select flash memory for ESXi hypervisor
A5TJ  Lenovo SD Media Adapter for System x  1
NeXtScale n1200 chassis
Code  Description  Quantity
5456HC1  n1200 Enclosure Chassis Base Model  1
A41D  n1200 Enclosure Chassis  1
A4MM  CFF 1300W Power Supply  6
6201  1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  6
A42S  System Documentation and Software - US English  1
A4AK  KVM Dongle Cable  1

5.3 BOM for hyper-converged compute servers
This section contains the bill of materials for hyper-converged compute servers.
System x3550 M5
Code  Description  Quantity
5463AC1  System x3550 M5  1
A5BL  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5C0  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A58X  System x3550 M5 8x 2.5" Base Chassis  1
A59V  System x3550 M5 Planar  1
A5AG  System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)  1
A5AF  System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0)  1
A5AX  System x 550W High Efficiency Platinum AC Power Supply  2
6311  2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5AB  System x Advanced LCD Light path Kit  1
A5AK  System x3550 M5 Slide Kit G4  1
A59F  System Documentation and Software-US English  1
A59W  System x3550 M5 4x 2.5" HS HDD Kit  1
A59X  System x3550 M5 4x 2.5" HS HDD Kit PLUS  1
A3YZ  ServeRAID M5210 SAS/SATA Controller for System x  1
A3Z0  ServeRAID M5200 Series 1GB Cache/RAID 5 Upgrade  1
5977  Select Storage devices - no Lenovo-configured RAID  1
A5UT  Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x  1
Select amount of system memory
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  24
A5B9  32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM  16
A5B9  32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM  24
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4  S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x  4
A4U4  S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x  2
A4TP  1.2TB 10K 6Gbps SAS 2.5" G3HS HDD  6
2498  Install largest capacity, faster drives starting in Array 1  1
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7  32GB Enterprise Value USB Memory Key  1
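For the two drive selections above (all flash, or SSD/HDD combination), the raw capacities can be worked out directly from the listed quantities and drive sizes. The short Python sketch below is illustrative only; it assumes that in the SSD/HDD combination the SSDs serve as the caching tier and the HDDs as the capacity tier, which is an assumption based on the drive split rather than something stated in the table, and it makes no attempt to calculate usable capacity under a particular storage policy.

```python
# Rough raw-capacity illustration for the x3550 M5 hyper-converged drive
# selections above; quantities and sizes come from the BOM rows. Raw capacity
# only -- usable capacity depends on the storage policy (for example, the
# number of failures to tolerate) and is not calculated here.
TB = 1000  # GB per TB, matching the drive marketing capacities

all_flash_gb = 4 * 400              # 4 x 400GB S3700 SSD
hybrid_cache_gb = 2 * 400           # 2 x 400GB S3700 SSD (assumed cache tier)
hybrid_capacity_gb = 6 * 1.2 * TB   # 6 x 1.2TB 10K SAS HDD (assumed capacity tier)

print(f"All-flash raw:         {all_flash_gb} GB")
print(f"Hybrid cache (SSD):    {hybrid_cache_gb} GB")
print(f"Hybrid capacity (HDD): {hybrid_capacity_gb:.0f} GB")
```

The x3650 M5 configuration that follows uses the same pattern with 14 HDDs instead of 6, so its raw HDD capacity is proportionally larger.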

System x3650 M5
Code  Description  Quantity
5462AC1  System x3650 M5  1
A5GW  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5EP  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5FD  System x3650 M5 2.5" Base without Power Supply  1
A5EA  System x3650 M5 Planar  1
A5FN  System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5R5  System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5EW  System x 900W High Efficiency Platinum AC Power Supply  2
6400  2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable  2
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5FY  System x3650 M5 2.5" ODD/LCD Light Path Bay  1
A5G3  System x3650 M5 2.5" ODD Bezel with LCD Light Path  1
A4VH  Lightpath LCD Op Panel  1
A5FV  System x Enterprise Slides Kit  1
A5EY  System Documentation and Software-US English  1
A5GG  x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID)  1
A3YZ  ServeRAID M5210 SAS/SATA Controller for System x  2
A3Z0  ServeRAID M5200 Series 1GB Cache/RAID 5 Upgrade  2
5977  Select Storage devices - no Lenovo-configured RAID  1
A5UT  Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x  1
9297  2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x  1
Select amount of system memory
A5B7  16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  24
A5B9  32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM  16
A5B9  32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM  24
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4  S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x  4
A4U4  S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x  2
A4TP  1.2TB 10K 6Gbps SAS 2.5" G3HS HDD  14
2498  Install largest capacity, faster drives starting in Array 1  1
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7  32GB Enterprise Value USB Memory Key  1

System x3650 M5 with graphics acceleration
Code  Description  Quantity
5462AC1  System x3650 M5  1
A5GW  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5EP  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5FD  System x3650 M5 2.5" Base without Power Supply  1
A5EA  System x3650 M5 Planar  1
A5FN  System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5R5  System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5EW  System x 900W High Efficiency Platinum AC Power Supply  2
6400  2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable  2
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5FY  System x3650 M5 2.5" ODD/LCD Light Path Bay  1
A5G3  System x3650 M5 2.5" ODD Bezel with LCD Light Path  1
A4VH  Lightpath LCD Op Panel  1
A5FV  System x Enterprise Slides Kit  1
A5EY  System Documentation and Software-US English  1
A5GG  x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID)  1
A3YZ  ServeRAID M5210 SAS/SATA Controller for System x  2
A3Z0  ServeRAID M5200 Series 1GB Cache/RAID 5 Upgrade  2
5977  Select Storage devices - no Lenovo-configured RAID  1
A5UT  Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x  1
9297  2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x  1
A5B9  32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM  12
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4  S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x  4
A4U4  S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x  2
A4TP  1.2TB 10K 6Gbps SAS 2.5" G3HS HDD  14
2498  Install largest capacity, faster drives starting in Array 1  1
Select GRID K1 or GRID K2 for graphics acceleration
AS3G  NVIDIA Grid K1 (Actively Cooled)  2
A470  NVIDIA Grid K2 (Actively Cooled)  2
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7  32GB Enterprise Value USB Memory Key  1

5.4 BOM for enterprise and SMB management servers
Table 41 on page 32 lists the number of management servers that are needed for the different numbers of users. To help with redundancy, the bill of materials for management servers must be the same as for the compute servers. For more information, see "BOM for enterprise and SMB compute servers" on page 49.
Because the Windows storage servers use a bare-metal operating system (OS) installation, they require much less memory and can have a reduced configuration, as listed below.
Flex System x240
Code  Description  Quantity
9532AC1  Flex System node x240 M5 Base Model  1
A5SX  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5TE  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5RM  Flex System x240 M5 Compute Node  1
A5S0  System Documentation and Software-US English  1
A5SG  Flex System x240 M5 2.5" HDD Backplane  1
5978  Select Storage devices - Lenovo-configured RAID  1
8039  Integrated SAS Mirroring - 2 identical HDDs required  1
A4TR  300GB 15K 6Gbps SAS 2.5" G3HS HDD  2
A5B8  8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  8
A5RP  Flex System CN4052 2-port 10Gb Virtual Fabric Adapter  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5RV  Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD)  1
A1BM  Flex System FC3172 2-port 8Gb FC Adapter  1
A1BP  Flex System FC5022 2-port 16Gb FC Adapter  1
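As a quick illustration of how much smaller the reduced Windows storage server configuration is, the following sketch compares the 8 x 8GB memory selection in the table above with the smallest compute server memory option from section 5.1. The figures come directly from the BOM quantities; the comparison itself is illustrative only.

```python
# Illustrative memory comparison (not an ordering tool): the reduced Windows
# storage server configuration above uses 8 x 8GB RDIMMs, compared with the
# 24 x 16GB (or larger) selections in the compute server BOMs.
storage_server_gb = 8 * 8     # A5B8 8GB RDIMM x 8
compute_min_gb = 24 * 16      # A5B7 16GB RDIMM x 24 (smallest compute option)

print(f"Windows storage server:   {storage_server_gb} GB")
print(f"Compute server (minimum): {compute_min_gb} GB")
print(f"Ratio: {compute_min_gb // storage_server_gb}x")
```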

System x3550 M5
Code  Description  Quantity
5463AC1  System x3550 M5  1
A5BL  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5C0  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A58X  System x3550 M5 8x 2.5" Base Chassis  1
A59V  System x3550 M5 Planar  1
A5AG  System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)  1
A5AF  System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0)  1
A5AX  System x 550W High Efficiency Platinum AC Power Supply  2
6311  2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5AB  System x Advanced LCD Light path Kit  1
A5AK  System x3550 M5 Slide Kit G4  1
A59F  System Documentation and Software-US English  1
A59W  System x3550 M5 4x 2.5" HS HDD Kit  1
A3YZ  ServeRAID M5210 SAS/SATA Controller for System x  1
A3Z2  ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade  1
5978  Select Storage devices - Lenovo-configured RAID  1
A2K7  Primary Array - RAID 1 (2 drives required)  1
A4TR  300GB 15K 6Gbps SAS 2.5" G3HS HDD  2
A5B8  8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  8
A5UT  Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV  Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)  1
3591  Brocade 8Gb FC Dual-port HBA for System x  1
7595  2U Bracket for Brocade 8GB FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
A2XV  Brocade 16Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2

System x3650 M5
Code  Description  Quantity
5462AC1  System x3650 M5  1
A5GW  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5EP  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5FD  System x3650 M5 2.5" Base without Power Supply  1
A5EA  System x3650 M5 Planar  1
A5FN  System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5R5  System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)  1
A5AX  System x 550W High Efficiency Platinum AC Power Supply  2
6311  2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5FY  System x3650 M5 2.5" ODD/LCD Light Path Bay  1
A5G3  System x3650 M5 2.5" ODD Bezel with LCD Light Path  1
A4VH  Lightpath LCD Op Panel  1
A5EY  System Documentation and Software-US English  1
A5G6  x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)  1
A3YZ  ServeRAID M5210 SAS/SATA Controller for System x  1
A3Z2  ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade  1
5978  Select Storage devices - Lenovo-configured RAID  1
A2K7  Primary Array - RAID 1 (2 drives required)  1
A4TR  300GB 15K 6Gbps SAS 2.5" G3HS HDD  2
A5B8  8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  8
A5UT  Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x  1
9297  2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5UV  Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)  1
3591  Brocade 8Gb FC Dual-port HBA for System x  1
7595  2U Bracket for Brocade 8GB FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
A2XV  Brocade 16Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2

NeXtScale nx360 M5
Code  Description  Quantity
5465AC1  nx360 M5 Compute Node Base Model  1
A5HH  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5J0  Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W  1
A5JU  nx360 M5 Compute Node  1
A5JX  nx360 M5 IMM Management Interposer  1
A1MK  Lenovo Integrated Management Module Standard Upgrade  1
A1ML  Lenovo Integrated Management Module Advanced Upgrade  1
A5KD  nx360 M5 PCI bracket KIT  1
A5JF  System Documentation and Software-US English  1
5978  Select Storage devices - Lenovo-configured RAID  1
A2K7  Primary Array - RAID 1 (2 drives required)  1
ASBZ  300GB 15K 12Gbps SAS 2.5" 512e HDD for NeXtScale System  2
A5JZ  nx360 M5 RAID Riser  1
A5V2  nx360 M5 2.5" Rear Drive Cage  1
A5K3  nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)  1
A5B8  8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM  8
A40Q  Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x  1
A5UX  nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+  1
A5JV  nx360 M5 ML2 Riser  1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ  Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)  1
3591  Brocade 8Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
A2XV  Brocade 16Gb FC Dual-port HBA for System x  1
88Y6854  5m LC-LC fiber cable (networking)  2
NeXtScale n1200 chassis
Code  Description  Quantity
5456HC1  n1200 Enclosure Chassis Base Model  1
A41D  n1200 Enclosure Chassis  1
A4MM  CFF 1300W Power Supply  6
6201  1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  6
A42S  System Documentation and Software - US English  1
A4AK  KVM Dongle Cable  1

5.5 BOM for shared storage
This section contains the bill of materials for shared storage.
IBM Storwize V7000 storage
Table 45 and Table 46 on page 36 list the number of Storwize V7000 storage controllers and expansion units that are needed for different user counts.
IBM Storwize V7000 expansion enclosure
Code  Description  Quantity
619524F  IBM Storwize V7000 Disk Expansion Enclosure  1
AHE1  300 GB 2.5-inch 15K RPM SAS HDD  20
AHE2  600 GB 2.5-inch 15K RPM SAS HDD  0
AHF9  600 GB 10K 2.5-inch HDD  0
AHH2  400 GB 2.5-inch SSD (E-MLC)  4
AS26  Power Cord - PDU connection  1
IBM Storwize V7000 control enclosure
Code  Description  Quantity
6195524  IBM Storwize V7000 Disk Control Enclosure  1
AHE1  300 GB 2.5-inch 15K RPM SAS HDD  20
AHE2  600 GB 2.5-inch 15K RPM SAS HDD  0
AHF9  600 GB 10K 2.5-inch HDD  0
AHH2  400 GB 2.5-inch SSD (E-MLC)  4
AHCB  64 GB to 128 GB Cache Upgrade  2
AS26  Power Cord - PDU connection  1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
AHB5  10Gb Ethernet 4 port Adapter Cards (Pair)  1
AHB1  8Gb 4 port FC Adapter Cards (Pair)  1

IBM Storwize V3700 storage
Table 45 and Table 46 on page 37 show the number of Storwize V3700 storage controllers and expansion units that are needed for different user counts.
IBM Storwize V3700 control enclosure
Code  Description  Quantity
609924C  IBM Storwize V3700 Disk Control Enclosure  1
ACLB  300 GB 2.5-inch 15K RPM SAS HDD  20
ACLC  600 GB 2.5-inch 15K RPM SAS HDD  0
ACLK  600 GB 10K 2.5-inch HDD  0
ACME  400 GB 2.5-inch SSD (E-MLC)  4
ACHB  Cache 8 GB  2
ACFA  Turbo Performance  1
ACFN  Easy Tier  1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
ACHM  10Gb iSCSI - FCoE 2 Port Host Interface Card  2
ACHK  8Gb FC 4 Port Host Interface Card  2
ACHS  8Gb FC SW SFP Transceivers (Pair)  2
IBM Storwize V3700 expansion enclosure
Code  Description  Quantity
609924E  IBM Storwize V3700 Disk Expansion Enclosure  1
ACLB  300 GB 2.5-inch 15K RPM SAS HDD  20
ACLC  600 GB 2.5-inch 15K RPM SAS HDD  0
ACLK  600 GB 10K 2.5-inch HDD  0
ACME  400 GB 2.5-inch SSD (E-MLC)  4

5.6 BOM for networking
For more information about the number and type of TOR network switches that are needed for different user counts, see the Networking section on page 38.
RackSwitch G8052
Code  Description  Quantity
7159G52  Lenovo System Networking RackSwitch G8052 (Rear to Front)  1
6201  1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
3802  1.5m Blue Cat5e Cable  3
A3KP  Lenovo System Networking Adjustable 19" 4 Post Rail Kit  1
RackSwitch G8124E
Code  Description  Quantity
7159BR6  Lenovo System Networking RackSwitch G8124E (Rear to Front)  1
6201  1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
3802  1.5m Blue Cat5e Cable  1
A1DK  Lenovo 19" Flexible 4 Post Rail Kit  1
RackSwitch G8264
Code  Description  Quantity
7159G64  Lenovo System Networking RackSwitch G8264 (Rear to Front)  1
6201  1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
A3KP  Lenovo System Networking Adjustable 19" 4 Post Rail Kit  1
5053  SFP+ SR Transceiver  2
A1DP  1m QSFP+-to-QSFP+ cable  1
A1DM  3m QSFP+ DAC Break Out Cable  0
RackSwitch G8264CS
Code  Description  Quantity
7159DRX  Lenovo System Networking RackSwitch G8264CS (Rear to Front)  1
6201  1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable  2
A2ME  Hot-Swappable, Rear-to-Front Fan Assembly Spare  2
A1DK  Lenovo 19" Flexible 4 Post Rail Kit  1
A1DP  1m QSFP+-to-QSFP+ cable  1
A1DM  3m QSFP+ DAC Break Out Cable  0
5075  8Gb SFP+ SW Optical Transceiver  12

5.7 BOM for racks
The rack count depends on the deployment model. The number of PDUs assumes a fully populated rack.
Flex System rack
Code  Description  Quantity
9363RC4, A3GR  IBM PureFlex System 42U Rack  1
5897  IBM 1U 9 C19/3 C13 Switched and Monitored 60A 3 Phase PDU  6
System x rack
Code  Description  Quantity
9363RC4, A1RC  IBM 42U 1100mm Enterprise V2 Dynamic Rack  1
6012  DPI Single-phase 30A/208V C13 Enterprise PDU (US)  6
5.8 BOM for Flex System chassis
The number of Flex System chassis that are needed for different numbers of users depends on the deployment model.
Code  Description  Quantity
8721HC1  IBM Flex System Enterprise Chassis Base Model  1
A0TA  IBM Flex System Enterprise Chassis  1
A0UC  IBM Flex System Enterprise Chassis 2500W Power Module Standard  2
6252  2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable  2
A1PH  IBM Flex System Enterprise Chassis 2500W Power Module  4
3803  2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable  4
A0UA  IBM Flex System Enterprise Chassis 80mm Fan Module  4
A0UE  IBM Flex System Chassis Management Module  1
3803  3 m Blue Cat5e Cable  2
5053  IBM SFP+ SR Transceiver  2
A1DP  1 m IBM QSFP+ to QSFP+ Cable  2
A1PJ  3 m IBM Passive DAC SFP+ Cable  4
A1NF  IBM Flex System Console Breakout Cable  1
5075  BladeCenter Chassis Configuration  4
6756ND0  Rack Installation >1U Component  1
675686H  IBM Fabric Manager Manufacturing Instruction  1
Select network connectivity for 10 GbE, 10 GbE FCoE or iSCSI, 8 Gb FC, or 16 Gb FC
A3J6  IBM Flex System Fabric EN4093R 10Gb Scalable Switch  2
A3HH  IBM Flex System Fabric CN4093 10Gb Scalable Switch  2
5075  IBM 8Gb SFP+ SW Optical Transceiver  4
5605  Fiber Cable, 5 meter multimode LC-LC  4
A0UD  IBM Flex System FC3171 8Gb SAN Switch  2
5075  IBM 8Gb SFP+ SW Optical Transceiver  4
5605  Fiber Cable, 5 meter multimode LC-LC  4
A3DP  IBM Flex System FC5022 16Gb SAN Scalable Switch  2
A22R  Brocade 16Gb SFP+ SW Optical Transceiver  4
5605  Fiber Cable, 5 meter multimode LC-LC  4

5.9 BOM for OEM storage hardware
This section contains the bill of materials for OEM shared storage.
IBM FlashSystem 840 storage
Table 47 on page 37 shows how much FlashSystem storage is needed for different user counts.
Code  Description  Quantity
9840-AE1  IBM FlashSystem 840  1
AF11  4TB eMLC Flash Module  12
AF10  2TB eMLC Flash Module  0
AF1B  1TB eMLC Flash Module  0
AF14  Encryption Enablement Pack  1
Select network connectivity for 10 GbE iSCSI, 10 GbE FCoE, 8 Gb FC, or 16 Gb FC
AF17  iSCSI Host Interface Card  2
AF1D  10 Gb iSCSI 8 Port Host Optics  2
AF15  FC/FCoE Host Interface Card  2
AF1D  10 Gb iSCSI 8 Port Host Optics  2
AF15  FC/FCoE Host Interface Card  2
AF18  8 Gb FC 8 Port Host Optics  2
3701  5 m Fiber Cable (LC-LC)  8
AF15  FC/FCoE Host Interface Card  2
AF19  16 Gb FC 4 Port Host Optics  2
3701  5 m Fiber Cable (LC-LC)  4
5.10 BOM for OEM networking hardware
For more information about the number and type of TOR network switches that are needed for different user counts, see the Networking section on page 38.
Lenovo 3873 AR2
Code  Description  Quantity
3873HC2  Brocade 6505 FC SAN Switch  1
00MT457  Brocade 6505 12 Port Software License Pack  1
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416  Brocade 8Gb SFP+ Optical Transceiver  24
88Y6393  Brocade 16Gb SFP+ Optical Transceiver  24
Lenovo 3873 BR1
Code  Description  Quantity
3873HC3  Brocade 6510 FC SAN Switch  1
00MT459  Brocade 6510 12 Port Software License Pack  2
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416  Brocade 8Gb SFP+ Optical Transceiver  48
88Y6393  Brocade 16Gb SFP+ Optical Transceiver  48

Resources
For more information, see the following resources:
Lenovo Client Virtualization reference architecture: lenovopress.com/tips1275
VMware vSphere: vmware.com/products/datacenter-virtualization/vsphere
VMware Horizon (with View): vmware.com/products/horizon-view
VMware VSAN: vmware.com/products/virtual-san
VMware VSAN design and sizing guide: vmware.com/files/pdf/products/vsan/vsan_design_and_sizing_guide.pdf
Atlantis Computing USX: atlantiscomputing.com/products
Flex System Interoperability Guide: ibm.com/redbooks/abstracts/redpfsig.html
IBM System Storage Interoperation Center (SSIC): ibm.com/systems/support/storage/ssic/interoperability.wss
View Architecture Planning, VMware Horizon 6.0: pubs.vmware.com/horizon-view-60/topic/com.vmware.icbase/pdf/horizon-view-60-architectureplanning.pdf
F5 BIG-IP: f5.com/products/big-ip
Deploying F5 with VMware View and Horizon: f5.com/pdf/deployment-guides/vmware-view5-iapp-dg.pdf

Document history
Version 1.0 (30 Jan 2015)
Conversion to Lenovo format. Added Lenovo thin clients. Added VMware VSAN for storage.
Version 1.1 (5 May 2015)
Added performance measurements and recommendations for the Intel E5-2600 v3 processor family with ESXi 6.0 and added BOMs for Lenovo M5 series servers. Added more performance measurements and resiliency testing for the VMware VSAN hyper-converged solution. Added the Atlantis USX hyper-converged solution. Added graphics acceleration performance measurements for VMware ESXi vDGA mode.
Version 1.2 (2 July 2015)
Added more performance measurements and recommendations for the Intel E5-2600 v3 processor family, including power workers and Atlantis USX simple hybrid storage acceleration. Added performance measurements for ESXi vGPU modes. Added hyper-converged and vGPU deployment models.

Trademarks and special notices
Copyright Lenovo 2015.
References in this document to Lenovo products or services do not imply that Lenovo intends to make them available in every country.
Lenovo, the Lenovo logo, ThinkCentre, ThinkVision, ThinkVantage, ThinkPlus and Rescue and Recovery are trademarks of Lenovo.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers have used Lenovo products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.
Information concerning non-Lenovo products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. Lenovo has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-Lenovo products. Questions on the capability of non-Lenovo products should be addressed to the suppliers of those products.
All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function, or delivery schedules with respect to any future products. Such commitments are only made in Lenovo product announcements. The information is presented here to communicate Lenovo's current investment and development activities as a good-faith effort to help with our customers' future planning.
Performance is based on measurements and projections using standard Lenovo benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-Lenovo websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this Lenovo product, and use of those websites is at your own risk.