EMC VSPEX END-USER COMPUTING




VSPEX Proven Infrastructure

EMC VSPEX END-USER COMPUTING
VMware View 5.1 and VMware vSphere 5.1 for up to 250 Virtual Desktops
Enabled by EMC VNXe and EMC Next-Generation Backup

Abstract
This document describes the EMC VSPEX End-User Computing solution with VMware vSphere and EMC VNXe for up to 250 virtual desktops.

January 2013

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published January 2013.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC online support website.

EMC End-User Computing VSPEX Proven Infrastructure
Part Number H11331.1

Contents Chapter 1 Executive Summary 13 Introduction... 14 Target audience... 14 Document purpose... 14 Business needs... 15 Chapter 2 Solution Overview 17 Solution overview... 18 Desktop broker... 18 Virtualization... 18 Compute... 18 Network... 19 Storage... 19 Chapter 3 Solution Technology Overview 21 The technology solution... 22 Summary of key components... 23 Desktop virtualization... 24 Overview... 24 VMware View 5.1... 24 View Composer 3.0... 24 View Persona Management... 25 View Storage Accelerator... 25 Virtualization... 25 VMware vsphere 5.1... 25 EMC Virtual Storage Integrator for VMware... 26 VNXe VMware vstorage API for Array Integration support... 26 VMware vcenter... 26 VMware vsphere High Availability... 26 3

Contents Compute... 27 Network... 29 Storage... 30 Overview... 30 EMC VNXe series... 31 Backup and recovery... 32 Overview... 32 EMC Avamar... 32 Security... 32 RSA SecurID two-factor authentication... 32 SecurID authentication in the VSPEX End-User Computing for VMware View environment... 33 Required components... 33 Compute, memory and storage resources... 34 Other sections... 35 VMware vshield Endpoint... 35 VMware vcenter Operations Manager for View... 35 Chapter 4 Solution Stack Architectural Overview 37 Solution Overview... 38 Solution architecture... 38 Overview... 38 Architecture for up to 250 virtual desktops... 38 Key components... 39 Hardware resources... 41 Software resources... 42 Sizing for validated configuration... 43 Server configuration guidelines... 44 Overview... 44 VMware vsphere memory virtualization for VSPEX... 45 Memory configuration guidelines... 47 Network configuration guidelines... 47 Overview... 47 VLAN... 48 Enable jumbo frames... 49 Link aggregation... 49 Storage configuration guidelines... 49 Overview... 49 VMware vsphere storage virtualization for VSPEX... 49 Storage layout for 250 virtual desktops... 50 4

Contents High availability and failover... 52 Introduction... 52 Virtualization layer... 52 Compute layer... 53 Network layer... 53 Storage layer... 54 Validation test profile... 55 Profile characteristics... 55 Antivirus and antimalware platform profile... 55 Platform characteristics... 55 vshield architecture... 56 vcenter Operations Manager for View platform profile... 56 Platform characteristics... 56 vcenter Operations Manager for View architecture... 57 Backup and Recovery configuration guidelines... 57 Backup characteristics... 57 Backup layout... 57 Sizing guidelines... 58 Reference workload... 58 Defining the reference workload... 58 Applying the reference workload... 59 Concurrency... 59 Heavier desktop workloads... 59 Implementing the reference architectures... 59 Overview... 59 Resource types... 59 CPU resources... 59 Memory resources... 60 Network resources... 60 Storage resources... 61 Backup resources... 61 Implementation summary... 62 Quick assessment... 62 Overview... 62 CPU requirements... 62 Memory requirements... 62 Storage performance requirements... 63 Storage capacity requirements... 63 Determining equivalent reference virtual desktops... 63 5

Contents Fine tuning hardware resources... 65 Chapter 5 VSPEX Configuration Guidelines 69 Configuration overview... 70 Pre-deployment tasks... 71 Overview... 71 Deployment prerequisites... 71 Customer configuration data... 74 Prepare switches, connect network, and configure switches... 74 Overview... 74 Configure infrastructure network... 74 Configure VLANs... 75 Complete network cabling... 76 Prepare and configure storage array... 76 Overview... 76 Prepare VNXe... 76 Set up the initial VNXe configuration... 76 Setup VNXe networking... 77 Provision storage for NFS datastores... 77 Provision optional storage for user data... 78 Provision optional storage for infrastructure virtual machines... 78 Install and configure vsphere hosts... 78 Overview... 78 Install vsphere... 79 Configure vsphere networking... 79 Jumbo frames... 80 Connect VMware datastores... 80 Plan virtual machine memory allocations... 80 Install and configure SQL server database... 83 Overview... 83 Create a virtual machine for Microsoft SQL server... 84 Install Microsoft Windows on the virtual machine... 84 Install SQL server... 84 Configure database for VMware vcenter... 84 Configure database for VMware Update Manager... 85 Configure database for VMware View Composer... 85 Configure database for VMware View Manager... 85 Configure the VMware View and View Composer database permissions... 85 VMware vcenter server deployment... 85 Overview... 85 6

Contents Create the vcenter host virtual machine... 86 Install vcenter guest OS... 87 Create vcenter ODBC connections... 87 Install vcenter Server... 87 Apply vsphere license keys... 87 Deploy the vstorage APIs for Array Integration (VAAI) plug-in... 87 Install the EMC VSI plug-in... 87 Set up VMware View Connection Server... 88 Overview... 88 Install the VMware View Connection Server... 89 Configure the View Event Log database connection... 89 Add a Second View Connection Server... 89 Configure the View Composer ODBC connection... 89 Install View Composer... 89 Link VMware View to vcenter and View Composer... 89 Prepare master virtual machine... 89 Configure View Persona Management group policies... 90 Configure folder redirection group policies for Avamar... 90 Configure View PCoIP group policies... 90 Set up EMC Avamar... 90 Avamar configuration overview... 90 GPO modifications for EMC Avamar... 91 GPO additions for EMC Avamar... 92 Master image preparation for EMC Avamar... 96 Defining datasets... 97 Defining schedules... 102 Adjust maintenance Window schedule... 102 Defining retention policies... 103 Group and group policy creation... 104 EMC Avamar Enterprise Manager activate clients... 106 Set up VMware vshield Endpoint... 114 Overview... 114 Verify desktop vshield Endpoint driver installation... 115 Deploy vshield Manager appliance... 115 Install the vsphere vshield Endpoint service... 115 Deploy an antivirus solution management server... 115 Deploy vsphere Security Virtual Machines... 115 Verify vshield Endpoint functionality... 115 Set up VMware vcenter Operations Manager for View... 116 Overview... 116 7

Contents Create vsphere IP Pool for vc Ops... 117 Deploy vcenter Operations Manager vapp... 117 Specify the vcenter server to monitor... 117 Update virtual desktop settings... 117 Create the virtual machine for the vc Ops for View Adapter server... 117 Install the vc Ops for View Adapter software... 118 Import the vc Ops for View PAKFile... 118 Verify vc Ops for View functionality... 118 Summary... 118 Chapter 6 Validating the Solution 119 Overview... 120 Post-install checklist... 121 Deploy and test a single virtual desktop... 121 Verify the redundancy of the solution components... 121 Provision remaining virtual desktops... 122 Appendix A Bills of Materials 125 Bill of material for 250 virtual desktops... 126 Appendix B Customer Configuration Data Sheet 127 Overview of customer configuration data sheets... 128 Appendix C References 131 References... 132 EMC documentation... 132 Other documentation... 133 Appendix D About VSPEX 135 About VSPEX... 136 8

Figures Figure 1. Solution components... 22 Figure 2. Compute Layer Flexibility... 28 Figure 3. Example of Highly-Available network design... 30 Figure 4. Authentication control flow for View access Figure 5. requests originating on an external network... 33 Logical architecture: VSPEX End-User Computing for VMware View with RSA... 34 Figure 6. Logical architecture for 250 virtual desktops... 39 Figure 7. Hypervisor memory consumption... 46 Figure 8. Required networks... 48 Figure 9. VMware Virtual Disk Types... 50 Figure 10. Core storage layout... 51 Figure 11. Optional storage layout... 51 Figure 12. High Availability at the Virtualization layer... 52 Figure 13. Redundant Power Supplies... 53 Figure 14. Network Layer High Availability... 54 Figure 15. VNXe series high availability... 54 Figure 16. Sample Ethernet network architecture... 75 Figure 17. Virtual Machine memory settings... 82 Figure 18. Persona Management modifications for Avamar... 92 Figure 19. Configuring Windows folder redirection... 93 Figure 20. Create a Windows network drive mapping for user files... 94 Figure 21. Configure drive mapping settings... 95 Figure 22. Configure drive mapping common settings... 95 Figure 23. Create a Windows network drive mapping for user profile data... 96 Figure 24. Avamar Tools menu... 97 Figure 25. Avamar Manage All Datasets window... 98 Figure 26. Avamar New Dataset window... 98 Figure 27. Configure Avamar Dataset settings... 99 Figure 28. User Profile data dataset... 99 Figure 29. User Profile data dataset Exclusion settings... 100 Figure 30. User Profile data dataset Options settings... 100 Figure 31. User Profile data dataset Advanced Options settings... 101 Figure 32. Avamar default backup/maintenance Windows schedule... 102 Figure 33. Avamar modified Backup/Maintenance Windows schedule... 103 Figure 34. Create new Avamar backup group... 104 Figure 35. New backup group settings... 105 Figure 36. Select backup group dataset... 105 Figure 37. Select backup group schedule... 106 9

Figures Figure 38. Select backup group retention policy... 106 Figure 39. Avamar Enterprise Manager... 107 Figure 40. Avamar Client Manager... 107 Figure 41. Avamar activate client window... 108 Figure 42. Avamar activate client menu... 108 Figure 43. Avamar Directory Service configuration... 109 Figure 44. Avamar Client Manager post configuration... 109 Figure 45. Avamar Client Manager Virtual desktop clients... 110 Figure 46. Select virtual desktop clients in Avamar Client Manager... 110 Figure 47. Select Avamar groups to add virtual desktops... 111 Figure 48. Activate Avamar clients... 111 Figure 49. Commit Avamar client activation... 112 Figure 50. The first informational prompt in Avamar client activation... 112 Figure 51. The second informational prompt in Avamar client activation... 113 Figure 52. Avamar Client Manager Activated clients... 113 Figure 53. View Composer Disks page... 122 10

Tables Table 1. VNXe customer benefits... 31 Table 2. Minimum hardware resources to support SecurID... 34 Table 3. Solution hardware... 41 Table 4. Solution software... 42 Table 5. Server hardware... 45 Table 6. Storage hardware... 49 Table 7. Validated environment profile... 55 Table 8. Platform characteristics... 55 Table 9. Platform characteristics... 56 Table 10. Profile characteristics... 57 Table 11. Virtual desktop characteristics... 58 Table 12. Blank worksheet row... 62 Table 13. Reference virtual desktop resources... 63 Table 14. Example worksheet row... 64 Table 15. Example applications... 64 Table 16. Server resource component totals... 65 Table 17. Blank customer worksheet... 67 Table 18. Deployment process overview... 70 Table 19. Tasks for pre-deployment... 71 Table 20. Deployment prerequisites checklist... 72 Table 21. Tasks for switch and network configuration... 74 Table 22. Tasks for storage configuration... 76 Table 23. Tasks for server installation... 79 Table 24. Tasks for SQL server database setup... 83 Table 25. Tasks for vcenter configuration... 85 Table 26. Tasks for VMware View Connection Server setup... 88 Table 27. Tasks for Avamar integration... 91 Table 28. Tasks required to install and configure vshield Endpoint... 114 Table 29. Tasks required to install and configure vc Ops... 116 Table 30. Tasks for testing the installation... 120 Table 31. Common server information... 128 Table 32. vsphere server information... 129 Table 33. Array information... 129 Table 34. Network infrastructure information... 129 Table 35. VLAN information... 129 Table 36. Service accounts... 130 11


Chapter 1 Executive Summary This chapter presents the following topics: Introduction... 14 Target audience... 14 Document purpose... 14 Business needs... 15 13

Executive Summary

Introduction

VSPEX validated and modular architectures are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, compute, and networking layers. VSPEX eliminates server virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.

This document is intended to be a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware of their choice that meets or exceeds the stated minimums.

Target audience

The readers of this document are expected to have the necessary training and background to install and configure an End-User Computing solution based on VMware View with VMware vSphere as the hypervisor, EMC VNXe series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and readers should be familiar with those documents. Readers are also expected to be familiar with the infrastructure and database security policies of the customer installation.

Individuals focused on selling and sizing a VSPEX End-User Computing for VMware View solution should pay particular attention to the first four chapters of this document. After the purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This document is an initial introduction to the VSPEX End-User Computing architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy the system.

The VSPEX End-User Computing architecture provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This solution runs on the VMware vSphere virtualization layer, backed by the highly available VNX storage family and the VMware View desktop broker. The compute and network components are vendor-definable, redundant, and sufficiently powerful to handle the processing and data needs of a large virtual desktop environment.

The 250-virtual-desktop environment discussed here is based on a defined desktop workload. While not every virtual desktop has the same requirements, this document contains methods and guidance to adjust your system to be cost effective when deployed. For larger environments, solutions for up to 2000 virtual desktops are

described in the document EMC VSPEX End-User Computing: VMware View 5.1 and VMware vSphere 5.1 for up to 2000 Virtual Desktops.

An End-User Computing or virtual desktop architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component is installed, validation tests ensure that your system is up and running properly. Following this document ensures an efficient and painless desktop deployment.

Business needs

VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, server, and networking layers. VSPEX solutions accelerate your IT transformation by enabling faster deployments, more choice, higher efficiency, and lower risk.

Business applications are moving into the consolidated compute, network, and storage environment. EMC VSPEX End-User Computing using VMware reduces the complexity of configuring every component of a traditional deployment model. The challenge of integration management is reduced while the application design and implementation options are maintained. Administration is unified, while process separation can still be adequately controlled and monitored.

The business needs for the VSPEX End-User Computing for VMware architectures are to:

Provide an end-to-end virtualization solution that utilizes the capabilities of the unified infrastructure components.

Provide a VSPEX End-User Computing for VMware View solution for efficiently virtualizing 250 virtual desktops for varied customer use cases.

Provide a reliable, flexible, and scalable reference design.


Chapter 2 Solution Overview This chapter presents the following topics: Solution overview... 18 Desktop broker... 18 Virtualization... 18 Compute... 18 Network... 19 Storage... 19 17

Solution overview

The EMC VSPEX End-User Computing for VMware View on VMware vSphere 5.1 solution provides a complete system architecture capable of supporting up to 250 virtual desktops with a redundant server/network topology and highly available storage. The core components of this solution are the desktop broker, virtualization, storage, server compute, and networking.

Desktop broker

View is the virtual desktop solution from VMware that allows virtual desktops to run on the VMware vSphere virtualization environment. It centralizes desktop management and provides increased control for IT organizations. View allows end users to connect to their desktops from multiple devices across a network connection.

Virtualization

VMware vSphere is the leading virtualization platform in the industry. For years, it has provided flexibility and cost savings to end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core VMware vSphere components are the VMware vSphere hypervisor and the VMware vCenter Server for system management. The VMware hypervisor runs on a dedicated server and allows multiple operating systems to execute on the system at one time as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration, and the clusters are then managed as a larger resource pool through vCenter, which allows dynamic allocation of CPU, memory, and storage across the cluster. Features such as vMotion, which allows a virtual machine to move between servers with no disruption to the operating system, and Distributed Resource Scheduler (DRS), which performs vMotion operations automatically to balance load, make vSphere a solid business choice. With the release of vSphere 5.1, a VMware virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual RAM.

Compute

VSPEX provides the flexibility to design and implement the vendor's choice of server components. The infrastructure must conform to the following attributes:

Sufficient CPU cores and RAM to support the required number and types of virtual desktops

Sufficient network connections to enable redundant connectivity to the system switches

Excess capacity to withstand a server failure and failover in the environment

Network

VSPEX allows the flexibility of designing and implementing the vendor's choice of network components. The infrastructure must conform to the following attributes:

Redundant network links for the hosts, switches, and storage

Support for link aggregation

Traffic isolation based on industry-accepted best practices

Storage

The EMC VNX storage family is the number one shared storage platform in the industry. Its ability to provide both file and block access with a broad feature set makes it an ideal choice for any End-User Computing implementation. The VNXe storage components include the following, which are sized for the stated reference architecture workload:

Host adapter ports: provide host connectivity via fabric into the array.

Storage processors (SPs): the compute components of the storage array, responsible for all aspects of data moving into, out of, and between arrays, and for protocol support.

Disk drives: the actual spindles that contain the host/application data, and their enclosures.

The 250-virtual-desktop solution discussed in this document is based on the VNXe3300 storage array. The VNXe3300 can host up to 150 drives. The EMC VNXe series supports a wide range of business-class features ideal for the End-User Computing environment, including:

Thin provisioning

Replication

Snapshots

File deduplication and compression

Quota management

and many more


Chapter 3 Solution Technology Overview This chapter presents the following topics: The technology solution... 22 Summary of key components... 23 Desktop virtualization... 24 Virtualization... 25 Compute... 27 Network... 29 Storage... 30 Backup and recovery... 32 Security... 32 Other sections... 35 21

The technology solution

This solution uses EMC VNXe3300 and VMware vSphere 5.1 to provide the storage and computing resources for a VMware View 5.1 environment of Microsoft Windows 7 virtual desktops provisioned by VMware View Composer 3.0.

Figure 1. Solution components

Planning and designing the storage infrastructure for a VMware View environment is a critical step, because the shared storage must be able to absorb the large bursts of input/output (I/O) that occur over the course of a workday. These bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users might adapt to slow performance, but unpredictable performance is frustrating and reduces efficiency. To provide predictable performance for End-User Computing, the storage system must be able to handle the peak I/O load from the clients while keeping response time to a minimum. Designing for this workload involves deploying many disks to handle brief periods of extreme I/O pressure, which is expensive to implement.

EMC Avamar enables protection of user data and end-user recoverability. This is accomplished by deploying the Avamar desktop client within the desktop image.
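Returning to the storage-sizing point above: to give a feel for why peak I/O, rather than raw capacity, often drives the number of disks behind a desktop pool, the short sketch below runs the arithmetic. Every number in it (per-desktop IOPS, burst multiplier, per-disk IOPS) is a hypothetical example chosen for illustration, not a figure from this solution; the validated configuration appears later in this document.

```python
import math

def disks_for_peak_iops(desktops, steady_iops_per_desktop, burst_multiplier,
                        iops_per_disk):
    """Estimate how many spindles are needed to absorb a peak I/O burst.

    All inputs are illustrative; real sizing must use measured workloads and
    the validated configuration described in this document.
    """
    peak_iops = desktops * steady_iops_per_desktop * burst_multiplier
    return peak_iops, math.ceil(peak_iops / iops_per_disk)

# 250 desktops at a hypothetical 10 IOPS each, with a 5x boot/login storm,
# served by disks assumed to deliver roughly 180 IOPS apiece.
peak, disks = disks_for_peak_iops(250, 10, 5, 180)
print(peak, disks)   # 12500 IOPS at peak -> about 70 disks if sized for the burst alone
```

This is why the solution relies on features such as caching and careful storage layout rather than simply adding spindles for the worst-case burst.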

Summary of key components

This section briefly describes the key components of this solution.

Desktop broker: The desktop virtualization broker manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software is critical to enable on-demand creation of desktop images, to allow maintenance of the image without impacting user productivity, and to prevent the environment from growing in an unconstrained way.

Virtualization: The virtualization layer allows the physical implementation of resources to be decoupled from the applications that use them. In other words, the application's view of the resources available to it is no longer directly tied to the hardware. This enables many key features in the End-User Computing concept.

Compute: The compute layer provides memory and processing resources for the virtualization layer software, as well as for the needs of the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resources required, but allows the customer to implement the solution using any server hardware that meets these requirements.

Network: The network layer connects the users of the environment to the resources they need and connects the storage layer to the compute layer. The VSPEX program defines the minimum amount of network layer resources required, but allows the customer to implement the solution using any network hardware that meets these requirements.

Storage: The storage layer is a critical resource for the implementation of the End-User Computing environment. Because of the way desktops are used, the storage layer must be able to absorb large bursts of activity as they occur without unduly affecting the user experience.

Backup and recovery: The optional backup and recovery components of the solution provide data protection in the event that the data in the primary system is deleted, damaged, or otherwise unusable.

Security: The optional security components of the solution, from RSA, provide consumers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system.

Other sections

There are additional, optional components that may improve the functionality of the solution, depending on the specifics of the environment. Solution architecture provides details on all the components that make up the reference architecture.

Desktop virtualization

Overview

Desktop virtualization is a technology that encapsulates and delivers desktops to remote client devices, which can be thin clients, zero clients, smartphones, or tablets. It allows users in different locations to access virtual desktops hosted on centralized computing resources at remote data centers. In this solution, VMware View is used to provision, manage, broker, and monitor desktop virtualization environments.

VMware View 5.1

VMware View 5.1 is a leading desktop virtualization solution that enables desktops to deliver cloud-computing services to users. VMware View 5.1 integrates effectively with vSphere 5.1 to provide:

Performance optimization and tiered storage support: View Composer 3.0 optimizes storage utilization and performance by reducing the footprint of virtual desktops. It also supports the use of different tiers of storage to maximize performance and reduce cost.

Thin provisioning support: VMware View 5.1 enables efficient allocation of storage resources when virtual desktops are provisioned. This results in better utilization of the storage infrastructure and reduced capital expenditure (CAPEX) and operating expenditure (OPEX).

This solution requires the VMware View 5.1 Premier edition. VMware View Premier includes access to all View features, including vSphere Desktop, vCenter Server, View Manager, View Composer, View Persona Management, vShield Endpoint, VMware ThinApp, and VMware View Client with Local Mode.

View Composer 3.0

View Composer 3.0 works directly with vCenter Server to deploy, customize, and maintain the state of the virtual desktops when using linked clones. Desktops provisioned as linked clones share a common base image within a desktop pool and therefore have a minimal storage footprint. View Composer 3.0 also enables the following capabilities:

Tiered storage support to enable the use of dedicated storage resources for the placement of both the read-only replica and linked clone disk images.

An optional standalone View Composer server, used to minimize the impact of virtual desktop provisioning and maintenance operations on the vCenter server.

This solution uses View Composer 3.0 to deploy 250 dedicated virtual desktops running Windows 7 as linked clones.
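The storage saving from linked clones can be made concrete with a rough back-of-the-envelope calculation. The sketch below is illustrative only; the 25 GB base image and 3 GB per-clone delta are assumed example values, not measurements from this solution.

```python
def pool_capacity_gb(desktops, base_image_gb, delta_gb_per_clone):
    """Compare full-clone capacity with linked-clone capacity for one pool.

    Linked clones share one read-only replica of the base image and each keep
    only a small delta disk, so the pool footprint is replica + sum of deltas.
    """
    full_clones = desktops * base_image_gb
    linked_clones = base_image_gb + desktops * delta_gb_per_clone
    return full_clones, linked_clones

full, linked = pool_capacity_gb(desktops=250, base_image_gb=25, delta_gb_per_clone=3)
print(full, linked)   # 6250 GB of full clones vs. 775 GB of linked clones
```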

View Persona Management

View Persona Management preserves user profiles and dynamically synchronizes them with a remote profile repository. View Persona Management does not require the configuration of Windows roaming profiles, eliminating the need to use Active Directory to manage View user profiles. View Persona Management provides the following benefits over traditional Windows roaming profiles:

With View Persona Management, a user's remote profile is dynamically downloaded when the user logs in to a View desktop. View downloads persona information only when the user needs it. During login, View downloads only the files that Windows requires, such as user registry files. Other files are copied to the local desktop when the user or an application opens them from the local profile folder.

View copies recent changes in the local profile to the remote repository at a configurable interval.

During logout, only files that were updated since the last replication are copied back to the remote repository.

You can configure View Persona Management to store user profiles in a secure, centralized repository.

View Storage Accelerator

View Storage Accelerator reduces the storage load associated with virtual desktops by caching the common blocks of desktop images in local vSphere host memory. The accelerator leverages a VMware vSphere 5.1 feature called Content Based Read Cache (CBRC), implemented inside the vSphere hypervisor. When it is enabled for the View virtual desktop pools, the host hypervisor scans the storage disk blocks to generate digests of the block contents. When these blocks are read into the hypervisor, they are cached in the host-based CBRC. Subsequent reads of blocks with the same digest are served directly from the in-memory cache. This significantly improves the performance of the virtual desktops, especially during boot storms, user login storms, or antivirus scanning storms, when a large number of blocks with identical content are read.
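The mechanism is conceptually simple. The sketch below is a minimal, illustrative model of a content-based read cache; it is not VMware code, and real CBRC operates on fixed-size disk blocks inside the hypervisor with a bounded memory budget.

```python
import hashlib

class ContentBasedReadCache:
    """Toy model of a content-addressed read cache (illustrative only)."""

    def __init__(self):
        self.cache = {}          # digest -> block contents
        self.hits = 0
        self.misses = 0

    def read_block(self, backing_store, block_id):
        data = backing_store[block_id]           # what the guest would read from disk
        digest = hashlib.sha1(data).hexdigest()  # digest of the block contents
        if digest in self.cache:                 # identical content already cached
            self.hits += 1
            return self.cache[digest]
        self.misses += 1                         # first time this content is seen
        self.cache[digest] = data
        return data

# Linked clones booting from the same replica read mostly identical blocks,
# so after the first desktop warms the cache, later reads are served from memory.
replica = {i: b"OS-block-%d" % (i % 64) for i in range(1024)}
cbrc = ContentBasedReadCache()
for _ in range(3):                # simulate repeated boot-storm style reads
    for block in replica:
        cbrc.read_block(replica, block)
print(cbrc.hits, cbrc.misses)     # the vast majority of reads hit the in-memory cache
```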

This solution leverages the VMware vSphere Desktop edition for deploying desktop virtualization. It provides the full range of features and functionality of the vSphere Enterprise Plus edition, allowing customers to achieve scalability, high availability, and optimal performance for all of their desktop workloads. vSphere Desktop also comes with unlimited vRAM entitlement. The vSphere Desktop edition is intended for customers who want to purchase only vSphere licenses to deploy desktop virtualization.

EMC Virtual Storage Integrator for VMware

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere client that provides a single management interface for managing EMC storage within the vSphere environment. Features can be added to and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed with the VSI Feature Manager. VSI provides a unified user experience, which allows new features to be introduced rapidly in response to changing customer requirements. The following features were used during the validation testing:

Storage Viewer (SV): extends the vSphere client to facilitate the discovery and identification of EMC VNXe storage devices that are allocated to VMware vSphere hosts and virtual machines. SV presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.

Unified Storage Management: simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision new Network File System (NFS) and Virtual Machine File System (VMFS) datastores, and Raw Device Mapping (RDM) volumes, seamlessly within the vSphere client.

Refer to the EMC VSI for VMware vSphere product guides on EMC Online Support for more information.

VNXe VMware vStorage API for Array Integration support

Hardware acceleration with the VMware vStorage API for Array Integration (VAAI) is a storage enhancement in vSphere that enables vSphere to offload specific storage operations to compatible storage hardware, such as the VNXe series platforms. With storage hardware assistance, vSphere performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.

VMware vCenter

VMware vCenter is a centralized management platform for the VMware virtual infrastructure. It provides administrators with a single interface for all aspects of monitoring, managing, and maintaining the virtual infrastructure, and it can be accessed from multiple devices. VMware vCenter is also responsible for managing some of the more advanced features of the VMware virtual infrastructure, such as VMware vSphere High Availability and Distributed Resource Scheduler (DRS), along with vMotion and Update Manager.

VMware vSphere High Availability

The VMware vSphere High Availability feature allows the virtualization layer to restart virtual machines automatically under various failure conditions.

Note: If the virtual machine operating system has an error, the virtual machine can be automatically restarted on the same hardware. If the physical hardware has an error, the impacted virtual machines can be automatically restarted on other servers in the cluster. To restart virtual machines on different hardware, those servers must have resources available. VMware vSphere High Availability allows you to configure policies that determine which machines are restarted automatically, and under what conditions these operations should be attempted.

Compute

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a given number of servers with a specific set of requirements, VSPEX documents the number of processor cores and the amount of RAM that must be achieved. This can be implemented with two servers or twenty and still be considered the same VSPEX solution.

For example, assume that the compute layer requirements for a given implementation are 25 processor cores and 200 GB of RAM. One customer might implement these using white-box servers containing 16 processor cores and 64 GB of RAM each, while a second customer chooses a higher-end server with 20 processor cores and 144 GB of RAM. Figure 2 on page 28 shows this example, and the sketch below illustrates the arithmetic.
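As a rough illustration of how the stated core and RAM minimums translate into server counts (the 25-core / 200 GB figures are the example values above; the two per-server specifications are the hypothetical customer choices), a short Python sketch:

```python
import math

def servers_needed(total_cores, total_ram_gb, cores_per_server, ram_gb_per_server,
                   ha_spare=1):
    """Return how many servers satisfy the VSPEX core and RAM minimums.

    Whichever resource (cores or RAM) needs more servers sets the count;
    ha_spare adds the extra server recommended for high availability.
    """
    by_cores = math.ceil(total_cores / cores_per_server)
    by_ram = math.ceil(total_ram_gb / ram_gb_per_server)
    return max(by_cores, by_ram) + ha_spare

# Example values from this section: 25 cores and 200 GB of RAM required.
print(servers_needed(25, 200, 16, 64))    # white-box servers  -> 4, plus 1 spare = 5
print(servers_needed(25, 200, 20, 144))   # higher-end servers -> 2, plus 1 spare = 3
```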

Figure 2. Compute Layer Flexibility

The first customer will need four of the servers they chose, while the second customer needs two.

Note: To enable high availability at the compute layer, each customer will need one additional server, so that if a server fails the system has enough capacity to maintain business operations.

The following best practices should be observed in the compute layer:

Use a number of identical, or at least compatible, servers. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

If you are implementing hypervisor-layer high availability, then the largest virtual machine you can create is constrained by the smallest physical server in the environment.

It is recommended to implement the high availability features available in the virtualization layer and to ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This allows you to implement minimal-downtime upgrades and to tolerate single-unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be very flexible to meet your specific needs. The key constraint is that you provide sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. It is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. An example of this kind of highly available network topology is depicted in Figure 3 on page 30.

Figure 3. Example of Highly-Available network design

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types and thereby improve throughput, manageability, application separation, high availability, and security.

EMC unified storage platforms provide network high availability and redundancy through link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, the Link Aggregation Control Protocol (LACP) is configured on the VNXe, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.

Storage

Overview

The storage layer is a key component of any cloud infrastructure solution that serves data generated by applications and operating systems in a datacenter storage processing system.

In this VSPEX solution, EMC VNXe series arrays are used to provide virtualization at the storage layer. This increases storage efficiency and management flexibility, and reduces total cost of ownership.

EMC VNXe series

The EMC VNX family is optimized for virtual applications, delivering industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises.

The VNXe series is powered by Intel Xeon processors for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. The VNXe series is a purpose-built platform for IT managers in smaller environments, while the VNX series is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises. Table 1 lists the VNXe customer benefits.

Table 1. VNXe customer benefits

Features:
- Next-generation unified storage, optimized for virtualized applications
- Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies
- High availability, designed to deliver five 9s availability
- Multiprotocol support for file and block
- Simplified management with EMC Unisphere, a single management interface for all NAS, SAN, and replication needs

Software suites available:
- Remote Protection Suite: protects data against localized failures, outages, and disasters.
- Application Protection Suite: automates application copies and proves compliance.
- Security and Compliance Suite: keeps data safe from changes, deletions, and malicious activity.

Software packs available:
- Total Value Pack: includes all the protection software suites and the Security and Compliance Suite.

Backup and recovery

Overview

Backup and recovery is another important component in this VSPEX solution. It provides data protection by backing up data files or volumes on a defined schedule, and by restoring data from backup if recovery is needed after a disaster. In this VSPEX solution, EMC Avamar is used for backup and recovery, supporting up to 250 virtual desktops.

EMC Avamar

EMC Avamar provides methods to back up virtual desktops using either image-level or guest-based operations. Avamar runs its deduplication engine at the virtual machine disk (VMDK) level for image backups and at the file level for guest-based backups.

Image-level protection enables backup clients to make a copy of all the virtual disks and configuration files associated with a particular virtual desktop, to be used in the event of hardware failure, corruption, or accidental deletion of the virtual desktop. Avamar significantly reduces the backup and recovery time of the virtual desktop by leveraging changed block tracking (CBT) on both backup and recovery.

Guest-based protection runs like traditional backup solutions. Guest-based backup can be used on any virtual machine running an operating system for which an Avamar backup client is available. It enables fine-grained control over the content, with inclusion and exclusion patterns, which can be leveraged to prevent data loss due to user errors such as accidental file deletion. Installing the desktop/laptop agent on the system to be protected allows end-user self-service recovery of data. This solution is tested with guest-based backups.

Security

RSA SecurID two-factor authentication

RSA SecurID two-factor authentication can provide enhanced security for the VSPEX End-User Computing environment by requiring the user to authenticate with two pieces of information, collectively called a passphrase, consisting of:

Something the user knows: a PIN, which is used like any other PIN or password.

Something the user has: a token code, provided by a physical or software token, which changes every 60 seconds.

The typical use case deploys SecurID to authenticate users accessing protected resources from an external or public network. Access requests originating from within a secure network are authenticated by traditional mechanisms involving Active Directory or LDAP. A configuration description for implementing SecurID is available for the VSPEX End-User Computing infrastructures.

SecurID functionality is managed through RSA Authentication Manager, which also controls administrative functions such as token assignment to users, user management, high availability, and so on.
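The passphrase model just described, a memorized PIN plus a token code that changes every 60 seconds, can be illustrated with a generic time-based one-time code. RSA's actual SecurID token algorithm is proprietary and differs from this; the sketch below is only a conceptual illustration in the style of RFC 6238 (TOTP), and the secret shown is made up.

```python
import hashlib
import hmac
import struct
import time

def time_based_code(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Derive a short numeric code from a shared secret and the current time window.

    The code changes once per interval, so a server holding the same secret can
    verify it by repeating the computation for the current (and adjacent) windows.
    """
    window = int(time.time()) // interval
    msg = struct.pack(">Q", window)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

# The full passphrase combines something the user knows (the PIN)
# with something the user has (the current token code).
secret = b"per-user-token-seed"        # hypothetical seed for illustration only
passphrase = "1234" + time_based_code(secret)
print(passphrase)
```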

SecurID authentication in the VSPEX End-User Computing for VMware View environment

SecurID support is built into VMware View, providing a simple activation process. Users accessing a SecurID-protected View environment are initially authenticated with a SecurID passphrase, followed by normal authentication against Active Directory. In a typical deployment, one or more View Connection servers are configured with SecurID for secure access from external or public networks, while other Connection servers accessed from within the local network retain Active Directory-only authentication. Figure 4 depicts the placement of the Authentication Manager server(s) in the View environment.

Figure 4. Authentication control flow for View access requests originating on an external network

Required components

Enablement of SecurID for the VSPEX VMware View End-User Computing architecture is described in Securing VSPEX VMware View 5.1 End-User Computing Solutions with RSA: Design Guide. The following components are required:

RSA SecurID Authentication Manager (version 7.1 SP4): used to configure and manage the SecurID environment and assign tokens to users.

Authentication Manager 7.1 SP4 is available as an appliance or as an installable on a Windows Server 2008 R2 instance. Future versions of Authentication Manager are available as physical or virtual appliances only.

SecurID tokens for all users: SecurID requires something the user knows (a PIN) combined with a constantly changing code from a token in the user's possession. SecurID tokens may be physical, displaying a new code every 60 seconds that the user must enter along with a PIN, or software-based, wherein the user supplies a PIN and the token code is supplied programmatically. Hardware and software tokens are registered with Authentication Manager through token records supplied on a CD or other media.

Compute, memory and storage resources

Figure 5 depicts the VSPEX End-User Computing for VMware View environment with two infrastructure virtual machines added to support Authentication Manager. Table 2 shows the server resources needed; the virtual machine requirements are minimal and are drawn from the overall infrastructure resource pool.

Figure 5. Logical architecture: VSPEX End-User Computing for VMware View with RSA

Table 2. Minimum hardware resources to support SecurID

RSA Authentication Manager: CPU 2 cores, Memory 2 GB, Disk 60 GB (Reference: RSA Authentication Manager 7.1 Performance and Scalability Guide)

Other sections

VMware vShield Endpoint

VMware vShield Endpoint offloads virtual desktop antivirus and antimalware scanning operations to a dedicated, secure virtual appliance delivered by VMware partners. Offloading scanning operations improves desktop consolidation ratios and performance by eliminating antivirus storms, while also streamlining antivirus and antimalware deployment and monitoring, and satisfying compliance and audit requirements through detailed logging of antivirus and antimalware activities.

VMware vCenter Operations Manager for View

VMware vCenter Operations Manager for View provides end-to-end visibility into the health, performance, and efficiency of virtual desktop infrastructure (VDI). It enables desktop administrators to proactively ensure the best end-user experience, avert incidents, and eliminate bottlenecks. Designed for VMware View, this optimized version of vCenter Operations Manager improves IT productivity and lowers the cost of owning and operating VDI environments.

Traditional operations-management tools and processes are inadequate for managing large View deployments, because:

The amount of monitoring data and the quantity of alerts overwhelm desktop and infrastructure administrators.

Traditional tools provide only a silo view and do not adapt to the behavior of specific environments.

End users are often the first to report incidents, and troubleshooting performance problems leads to fire drills among infrastructure teams, helpless help-desk administrators, and frustrated users.

Lack of end-to-end visibility into the performance and health of the entire stack, including servers, storage, and networking, stalls large VDI deployments.

IT productivity suffers from reactive management and the inability to ensure quality of service proactively.

VMware vCenter Operations Manager for View addresses these challenges and delivers higher team productivity, lower operating expenses, and improved infrastructure utilization. Key features include:

Patented self-learning analytics that adapt to your environment, continuously analyzing thousands of metrics for server, storage, networking, and end-user performance.

Comprehensive dashboards that simplify monitoring of health and performance, identify bottlenecks, and improve infrastructure efficiency across your entire View environment.

Dynamic thresholds and smart alerts that notify administrators earlier in the process and provide more specific information about impending performance issues.

Automated root-cause analysis, session lookup, and event correlation for faster troubleshooting of end-user problems.

Integrated approach to performance, capacity, and configuration management that supports holistic management of VDI operations.

Design and optimizations specifically for VMware View.

Availability as a virtual appliance for faster time to value.
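vCenter Operations Manager's self-learning analytics are proprietary and considerably more sophisticated, but the general idea behind a dynamic threshold can be illustrated with a simple statistical baseline: learn what is normal for a metric from its own history and alert only when the current value falls outside that learned band. The metric, values, and band width below are arbitrary examples.

```python
from statistics import mean, stdev

def dynamic_threshold(history, sigmas=3.0):
    """Return an (upper, lower) band learned from a metric's own history."""
    mu, sd = mean(history), stdev(history)
    return mu + sigmas * sd, mu - sigmas * sd

# Hypothetical desktop login-latency samples (seconds) observed over a week.
history = [8.2, 7.9, 8.5, 8.1, 8.4, 7.8, 8.3, 8.0, 8.6, 8.2]
upper, lower = dynamic_threshold(history)

current = 11.7
if current > upper:
    print("smart alert: login latency %.1fs exceeds learned band (%.1fs)" % (current, upper))
```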

Chapter 4 Solution Stack Architectural Overview This chapter presents the following topics: Solution Overview... 38 Solution architecture... 38 Server configuration guidelines... 44 Network configuration guidelines... 47 Storage configuration guidelines... 49 High availability and failover... 52 Validation test profile... 55 Antivirus and antimalware platform profile... 55 vcenter Operations Manager for View platform profile... 56 Backup and Recovery configuration guidelines... 57 Sizing guidelines... 58 Reference workload... 58 Applying the reference workload... 59 37

Solution Overview

VSPEX Proven Infrastructure solutions are built with proven best-of-breed technologies to create a complete virtualization solution that enables you to make an informed decision when choosing and sizing the hypervisor, compute, and networking layers. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC. VSPEX accelerates your IT transformation to cloud-based computing by enabling faster deployment, more choice, higher efficiency, and lower risk.

This section is intended to be a comprehensive guide to the major aspects of this solution. Server capacity is specified in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware of their choice that meets or exceeds the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your End-User Computing deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, which have been validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a pre-defined idea of what a virtual machine should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Solution architecture

Overview

Below is a detailed description of the VSPEX End-User Computing solution for up to 250 virtual desktops. The VSPEX Virtual Infrastructure solution for End-User Computing with EMC VNXe is validated at three different points of scale. These defined configurations form the basis for creating a custom solution. These points of scale are defined in terms of the reference workload later in this document.

Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual desktop in an existing environment may not be equal to one virtual desktop in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. A detailed process is described in Applying the reference workload.

Architecture for up to 250 virtual desktops

The architecture diagrams in this section show the layout of the major components comprising the solution. Figure 6 shows the overall logical architecture of the solution.

Solution Stack Architectural Overview Figure 6. Logical architecture for 250 virtual desktops Key components VMware View Manager Server 5.1 Provides virtual desktop delivery, authenticates users, manages the assembly of users' virtual desktop environments, and brokers connections between users and their virtual desktops. In this solution, VMware View Manager 5.1 is installed on Windows Server 2008 R2 and hosted as a virtual machine on a VMware vsphere 5.1 server. Two VMware View Manager Servers were used in this solution. Virtual desktops Two hundred and fifty persistent virtual desktops running Windows 7 are provisioned as VMware View Linked Clones. VMware vsphere 5.1 Provides a common virtualization layer to host a server environment that contains the virtual machines. The specifics of the validated environment are listed in Table 3. vsphere 5.1 provides highly available infrastructure through such features as: vmotion Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption. Storage vmotion Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption. vsphere High Availability Detects and provides rapid recovery for a failed virtual machine in a cluster. Distributed Resource Scheduler (DRS) Provides load balancing of computing capacity in a cluster. Storage Distributed Resource Scheduler (SDRS) Provides load balancing across multiple datastores, based on space use and I/O latency. 39

Solution Stack Architectural Overview VMware vcenter Server 5.1 Provides a scalable and extensible platform that forms the foundation for virtualization management for the VMware vsphere 5.1 cluster. All vsphere hosts and their virtual machines are managed via vcenter. VMware vshield Endpoint VMware vshield Endpoint offloads virtual desktop antivirus and antimalware scanning operations to a dedicated secure virtual appliance delivered by VMware partners. Offloading scanning operations improves desktop consolidation ratios and performance by eliminating antivirus storms. These operations also streamline antivirus and antimalware deployment, and monitor and satisfy compliance and audit requirements through detailed logging of antivirus and antimalware activities. VMware vcenter Operations Manager for View vc Ops for View monitors the virtual desktops and all of the supporting elements of the VMware View virtual infrastructure. VSI for VMware vsphere EMC VSI for VMware vsphere is a plugin to the vsphere client that provides storage management for EMC arrays directly from the client. VSI is highly customizable and helps provide a unified management interface. SQL Server VMware vcenter server requires a database service to store configuration and monitoring details. A Microsoft SQL 2008 R2 server is used for this purpose. DHCP server Centrally manages the IP address scheme for the virtual desktops. This service is hosted on the same virtual machine as the domain controller and DNS server. The Microsoft DHCP Service running on a Windows 2012 server is used for this purpose. DNS Server DNS services are required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows Server 2012 server is used for this purpose. Active Directory Server Active Directory services are required for the various solution components to function properly. The Microsoft AD Directory Service running on a Windows Server 2012 server is used for this purpose. Shared Infrastructure DNS and authentication/authorization services like Microsoft Active Directory can be provided via existing infrastructure or set up as part of the new virtual infrastructure. IP/Storage Networks All network traffic is carried by standard Ethernet network with redundant cabling and switching. User and management traffic is carried over a shared network while NFS storage traffic is carried over a private, non-routable subnet. EMC VNXe3300 series Provides storage by using IP (NFS) connections for virtual desktops, and infrastructure virtual machines such as VMware View Manager Servers, VMware vcenter servers, Microsoft SQL server databases, and other supporting services. Optionally, user profiles and home directories are redirected to CIFS network shares on VNXe3300. VNXe series storage arrays include the following components: 40

Solution Stack Architectural Overview Storage Processors (SPs) support block and file data with UltraFlex I/O technology that supports iscsi, CIFS and NFS protocols. The SPs provide access for all external hosts and for the file side of the VNXe array. Battery Backup Units are battery units within each storage processor and provide enough power to each storage processor to ensure that any data in flight is de-staged to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes are reconciled and persisted. Disk-array enclosures (DAE) house the drives used in the array. EMC Avamar Provides the platform for protection of virtual machines. This protection strategy leverages persistent virtual desktops. It can leverage both image-level and guest-based protection. Hardware resources Table 3 lists the hardware used in this solution. Table 3. Solution hardware Hardware Configuration Notes Servers for virtual desktops Memory: 2 GB RAM per desktop (500 GB RAM across all servers) CPU: 1 vcpu per desktop (eight desktops per core:32 cores across all servers) Network: Six 1 GbE NICs per server Additional CPU and RAM as needed for the VMware vshield Endpoint and Avamar AVE components. Note: To implement VMware vsphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have one additional server. Total server capacity required to host 250 virtual desktops Refer to vendor documentation for specific details concerning vshield Endpoint and Avamar AVE resource requirements NFS and CIFS network infrastructure EMC VNXe3300 Minimum switching capability: Six 1 GbE ports per vsphere Server Two 10 GbE ports per storage processor Two storage processors (active/active) Two 10 GbE interfaces per storage processor Twenty-two 300 GB, 15k rpm 3.5-inch SAS disks (three RAID 5 performance packs and one hot spare disk) Redundant LAN configuration For throughput requirement, 1 GbE may be sufficient for 250 basic desktops, while 10 GbE is preferred for applications/systems with higher IO needs. Thirteen 2 TB, 7,200 rpm 3.5-inch NL-SAS disks Seven 300 GB, 15k rpm 3.5-inch SAS disks (one RAID-5 performance pack) Optional for user data Optional for infrastructure storage 41

Solution Stack Architectural Overview Hardware Configuration Notes Seven 300 GB, 15k rpm 3.5-inch SAS disks (one RAID-5 performance pack) Optional for vcenter Operations Manager for View Servers for customer infrastructure Minimum number required: Two physical servers 20 GB RAM per server Four processor cores per server Two 1 GbE ports per server Additional CPU and RAM as needed for the VMware vshield Endpoint components. These servers and the roles they fulfill may already exist in the customer environment Refer to vendor documentation for specific details concerning vshield Endpoint resource requirements Software resources Table 4 lists the software used in this solution. Table 4. Solution software Software Configuration VNXe3300 (shared storage, file systems) Software version 2.3.1.19462 View Desktop Virtualization VMware View Manager Operating system for View Manager Microsoft SQL server 5.1.1 Premier Windows Server 2008 R2 Standard Edition Version 2008 R2 Standard Edition EMC Avamar next-generation backup Avamar Virtual Edition (2TB) Avamar Agent 6.1 SP1 6.1 SP1 VMware vsphere vsphere Server 5.1* vcenter server vshield Manager (includes vshield Endpoint Service) Operating system for vcenter server 5.1.0a 5.1 Windows Server 2008 R2 Standard Edition VMware vcenter Operations Manager for View VMware vcenter Operations Manager 5.0.1.0 vcenter Operations Manager for View plug-in 1.0 42

Software Configuration Solution Stack Architectural Overview Virtual desktops Note Aside from the base operating system, this software is used for solution validation and is not required. Base operating system Microsoft Windows 7 Standard (32-bit) SP1 Microsoft Office Office Enterprise 2007 Internet Explorer 8.0.7601.17514 Adobe Reader X (10.1.3) VMware vshield Endpoint (component of VMware Tools) 8.6.5 build-652272 Adobe Flash Player 11 Bullzip PDF Printer 7.2.0.1304 FreeMind 0.8.1 Login VSI (VDI workload generator) 3.6 Professional Edition * Patch ESXi510-201210001 needed to support View 5.1.1 Sizing for validated configuration When selecting servers for this solution, the processor core should meet or exceed the performance of the Intel Nehalem family at 2.66 GHz. As servers with greater processor speeds, performance, and higher core density become available, servers can be consolidated as long as the required total core and memory count is met and a sufficient number of servers are incorporated to support the necessary level of high availability. As with the selection of servers, network interface card (NIC) speed and quantity can also be consolidated, as long as the overall bandwidth requirement for this solution is met and sufficient redundancy to support high availability is maintained. The following represents a sample server configuration required to support this 250-desktop solution. Four servers, each with: Two 4-core processors (total eight cores) 128 GB of RAM This server configuration provides 32 cores and 512 GB of RAM. As shown in Table 3, a minimum of one core is required to support eight virtual desktops and a minimum of 2 GB of RAM is required for each. The correct balance of memory and cores for the expected number of virtual desktops to be supported by a server must also be taken into account. Additional CPU resources and RAM are required to support the VMware vshield Endpoint components. IP network switches used to implement this solution must have a minimum backplane capacity of 48 Gb/s non-blocking and support the following features: 43

Solution Stack Architectural Overview IEEE 802.3x Ethernet flow control. IEEE 802.1Q VLAN tagging. Ethernet link aggregation using IEEE 802.1AX (802.3ad) Link Aggregation Control Protocol. Simple Network Management Protocol (SNMP) management capability. Jumbo frames. The quantity and type of switches chosen should support high availability. A network vendor must be chosen based on the availability of parts, service, and support contracts. The network configuration should also include the following: A minimum of two switches to support redundancy. Redundant power supplies. A minimum of forty 1-GbE ports (distributed for high availability). Appropriate uplink ports for customer connectivity. Use of 10 GbE ports should align with the ports on the server and storage, while keeping in mind the overall network requirement for this solution and a level of redundancy to support high availability. Additional server NICs and storage connections should also be considered based on customer or specific implementation requirements. The management infrastructure (Active Directory, DNS, DHCP, and SQL server) can be supported on two servers similar to those previously defined, but will require a minimum of 20 GB RAM instead of 128 GB. Disk storage layout is explained in Storage configuration guidelines. Server configuration guidelines Overview When designing and ordering the compute/server layer of the VSPEX solution described below, several factors may alter the final purchase. From a virtualization perspective, if a system's workload is well understood, features like Memory Ballooning and Transparent Page Sharing can reduce the aggregate memory requirement. If the virtual machine/desktop pool does not have a high level of peak or concurrent usage, the number of vCPUs required can be reduced. Conversely, if the applications deployed are highly computational in nature, the number of CPUs and amount of memory purchased may need to be increased. Table 5 on page 45 identifies the server hardware and the configurations. 44

Solution Stack Architectural Overview Table 5. Server hardware Hardware Configuration Notes Servers for virtual desktops Memory: 2 GB RAM per desktop (500 GB RAM across all servers) CPU: 1 vcpu per desktop (eight desktops per core:32 cores across all servers) Network: Six 1 GbE NICs per server Additional CPU and RAM as needed for the VMware vshield Endpoint and Avamar AVE components. Note: To implement VMware vsphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have one additional server. Total server capacity required to host 250 virtual desktops Refer to vendor documentation for specific details concerning vshield Endpoint and Avamar AVE resource requirements VMware vsphere memory virtualization for VSPEX VMware vsphere 5.1 has a number of advanced features that help to maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section and Figure 7 describes some of these features and the items you need to consider when using them in the environment. In general, you can consider virtual machines on a single hypervisor consuming memory as a pool of resources: 45

Solution Stack Architectural Overview Figure 7. Hypervisor memory consumption This basic concept is enhanced by understanding the technologies presented in this section. Memory over-commitment Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vsphere host. Using sophisticated techniques, such as ballooning and transparent page sharing, vsphere is able to handle memory over-commitment without any performance degradation. However, if more memory is being actively used than is physically present on the server, vsphere might resort to swapping out portions of a virtual machine's memory. 46
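To make the over-commitment arithmetic concrete, the short Python sketch below computes the ratio of allocated guest memory to physical host memory. The host size and desktop count are hypothetical examples, not figures from the validated configuration (which, as noted later in this guide, was tested without memory over-commitment).

# Minimal sketch: memory over-commitment ratio on a single vSphere host.
# The host size and desktop count are hypothetical, not values from the
# validated configuration.

def overcommit_ratio(desktops: int, gb_per_desktop: float, host_physical_gb: float) -> float:
    """Ratio of memory allocated to guests versus physical memory in the host."""
    return (desktops * gb_per_desktop) / host_physical_gb

if __name__ == "__main__":
    ratio = overcommit_ratio(desktops=80, gb_per_desktop=2, host_physical_gb=128)
    # 160 GB allocated on a 128 GB host -> ratio 1.25; ballooning and page
    # sharing reconcile the difference, with swapping only under pressure.
    print(f"Over-commitment ratio: {ratio:.2f}")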

Non-Uniform Memory Access (NUMA) Solution Stack Architectural Overview vsphere uses a NUMA load-balancer to assign a home node to a virtual machine. Because memory for the virtual machine is allocated from the home node, memory access is local and provides the best performance possible. Applications that do not directly support NUMA benefit from this feature. Transparent page sharing Virtual machines running similar operating systems and applications typically have identical sets of memory content. Page sharing allows the hypervisor to reclaim the redundant copies and keep only one copy, which reduces total host memory consumption. If most of your virtual machines run the same operating system and application binaries, then total memory usage can be reduced to increase consolidation ratios. Memory ballooning By using a balloon driver loaded in the guest operating system, the hypervisor can reclaim host physical memory if memory resources are under contention. This is done with little to no impact on the performance of the application. Memory configuration guidelines This section provides guidelines for allocating memory to virtual machines. The guidelines outlined here take into account vsphere memory overhead and the virtual machine memory settings. vsphere memory overhead There is some associated overhead for the virtualization of memory resources. The memory space overhead has two components: The system overhead for the VMkernel Additional overhead for each virtual machine The overhead for the VMkernel is fixed, while the overhead for each virtual machine depends on the number of virtual CPUs and the amount of memory configured for the guest operating system. Allocating memory to virtual machines The proper sizing of memory for a virtual machine/desktop in VSPEX architectures is based on many factors. Given the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing it, and making adjustments, as discussed later in this paper. In this solution, each virtual desktop gets 2 GB of memory, as listed in Table 3 on page 41. Network configuration guidelines Overview This section provides guidelines for setting up a redundant, high-availability network configuration. The guidelines outlined here take into account Jumbo Frames, VLANs, and Link Aggregation Control Protocol (LACP) on EMC unified storage. For details on the network resource requirement, please refer to Table 3 on page 41. 47

Solution Stack Architectural Overview VLAN The best practice is to isolate network traffic so that the traffic between hosts and storage, hosts and clients, and management traffic all move over isolated networks. In some cases physical isolation may be required for regulatory or policy compliance reasons; but in many cases logical isolation using VLANs is sufficient. This solution calls for a minimum of three VLANs. Client access Storage Management These VLANs are illustrated in Figure 8: Figure 8. Required networks The client access network is for users of the system, or clients, to communicate with the infrastructure. The Storage Network is used for communication between the compute layer and the storage layer. The Management network is used for administrators to have a dedicated way to access the management connections on the storage array, network switches, and hosts. Note The diagram demonstrates the network connectivity requirements for a VNXe3300 using 1-GbE network connections. A similar topology should be created when using the VNXe3150 array, or 10 GbE network connections. 48
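As a planning aid, the following Python sketch records the three-VLAN minimum described above in a simple structure. The VLAN IDs are placeholders chosen for illustration; only the roles and the private, non-routable storage network come from this solution, and the routability of the management network is site-dependent.

# Planning sketch of the minimum VLAN layout for this solution.
# VLAN IDs are illustrative placeholders; substitute site-specific values.

from dataclasses import dataclass

@dataclass
class Vlan:
    vlan_id: int
    name: str
    purpose: str
    routable: bool  # False = private, non-routable subnet

VLAN_PLAN = [
    Vlan(100, "client-access", "Users/clients communicating with the infrastructure", routable=True),
    Vlan(200, "storage", "NFS traffic between the compute layer and the VNXe", routable=False),
    Vlan(300, "management", "Dedicated admin access to array, switches, and hosts", routable=True),  # routability is site-dependent
]

def validate(plan):
    """Basic checks: unique VLAN IDs and at least one private storage network."""
    ids = [v.vlan_id for v in plan]
    assert len(ids) == len(set(ids)), "VLAN IDs must be unique"
    assert any(not v.routable for v in plan), "storage traffic should use a private subnet"

if __name__ == "__main__":
    validate(VLAN_PLAN)
    for vlan in VLAN_PLAN:
        print(vlan)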

Solution Stack Architectural Overview Enable jumbo frames Link aggregation This solution for EMC VSPEX End-User Computing recommends MTU be set at 9,000 (jumbo frames) for efficient storage and migration traffic. A link aggregation resembles an Ethernet channel, but uses the Link Aggregation Control Protocol (LACP) IEEE 802.3ad standard. The IEEE 802.3ad standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, Link Aggregation Control Protocol (LACP) is configured on VNXe, combining multiple Ethernet ports into a single virtual device. If a link is lost in the Ethernet port, the link fails over to another port. All network traffic is distributed across the active links. Storage configuration guidelines Overview vsphere allows more than one method of utilizing storage when hosting virtual machines. The solutions in Table 6 were tested utilizing NFS and the storage layout described adheres to all current best practices. An educated customer or architect can make modifications based on their understanding of the systems usage and load if required. Table 6. Storage hardware Hardware Configuration Notes EMC VNXe3300 Two storage processors (active/active) Two 10 GbE interfaces per storage processor Twenty-two 300 GB, 15k rpm 3.5-inch SAS disks (three RAID 5 performance packs) For throughput requirement, 1 GbE may be sufficient for 250 basic desktops, while 10 GbE is preferred for applications/systems with higher IO needs. Thirteen 2 TB, 7,200 rpm 3.5-inch NL-SAS disks Seven 300 GB, 15k rpm 3.5-inch SAS disks (one RAID-5 performance pack) Seven 300 GB, 15k rpm 3.5-inch SAS disks (one RAID-5 performance pack) Optional for user data Optional for infrastructure storage Optional for vcenter Operations Manager for View This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance. VMware vsphere storage virtualization for VSPEX VMware ESXi provides host-level storage virtualization. It virtualizes the physical storage and presents the virtualized storage to virtual machine. A virtual machine stores its operating system and all other files related to the virtual machine activities in a virtual disk. Figure 9 illustrates VMware virtual disk types. The virtual disk itself is one or multiple files. VMware uses a virtual SCSI controller to present the virtual disk to guest operating systems running inside virtual machines. 49

Solution Stack Architectural Overview A virtual disk resides in a datastore. Depending on the type used, the datastore can be either a VMware Virtual Machine File System (VMFS) datastore or an NFS datastore. Figure 9. VMware Virtual Disk Types VMFS VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or networked storage. Raw Device Mapping In addition, VMware also provides a mechanism named Raw Device Mapping (RDM). RDM allows a virtual machine to access a volume on the physical storage directly, and can only be used with Fibre Channel or iSCSI. NFS VMware supports using NFS file systems from an external NAS storage system or device as a virtual machine datastore. Storage layout for 250 virtual desktops Core storage layout Figure 10 illustrates the layout of the disks that are required to store 250 virtual desktops. This layout does not include space for user profile data. Refer to VNXe Shared File Systems for more information. 50

Solution Stack Architectural Overview Figure 10. Core storage layout Core storage layout overview The following core configuration is used in the solution. Twenty-one SAS disks are allocated in three RAID 5 (6+1) groups to contain virtual desktop datastores. One SAS disk is a hot spare and is contained in the VNXe hot spare pool. Note Seven of the disks used (one RAID 5 (6+1) group) may contain VNXe system storage, which reduces user storage. VNXe provisioning wizards perform disk allocation and do not allow user selection. If more capacity is desired, larger drives may be substituted. To satisfy the load recommendations, the drives will all need to be 15k rpm and the same size. If differing sizes are utilized, storage layout algorithms may give sub-optimal results. Optional user data storage layout In solution validation testing, storage space for user data is allocated on the VNXe array as shown in Figure 11. This storage is in addition to the core storage shown in Figure 10. If storage for user data exists elsewhere in the production environment, this storage is not required. Figure 11. Optional storage layout 51

Solution Stack Architectural Overview Optional storage layout overview The following optional configuration is used in the solution. Twelve NL-SAS disks are allocated in two RAID 6 (4+2) groups to store user data and profiles. One NL-SAS disk is a hot spare. Seven SAS disks configured as a RAID 5 (6+1) group are used to store the infrastructure virtual machines. Seven SAS disks configured as a RAID 5 (6+1) group are used to store the vcenter Operations Manager for View virtual desktops and databases. Remaining disks are unbound, or drive bays may be empty, as no additional drives were used for testing this solution. Note The actual disk selection is done by the VNXe provisioning wizards and may not match the allocation. VNXe Shared File Systems The virtual desktops use two shared file systems: one for user profiles, and the other for redirected user storage that resides in home directories. In general, redirecting user data out of the base image and onto VNXe file storage enables centralized administration, backup, and recovery, and makes the desktops more stateless. Each file system is exported to the environment through a CIFS share. High availability and failover Introduction Virtualization layer This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive most single-unit failures with minimal to no impact to business operations. As indicated earlier, it is recommended to configure high availability in the virtualization layer and to allow the hypervisor to automatically restart any virtual machines that fail. Figure 12 illustrates the hypervisor layer responding to a failure in the compute layer: Figure 12. High Availability at the Virtualization layer Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure will attempt to keep as many services running as possible. 52
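The Python sketch below illustrates the failover headroom behind this recommendation and the compute-layer discussion that follows, using the sample server described earlier (two 4-core processors and 128 GB of RAM per server) and the solution ratios of eight desktops per core and 2 GB of RAM per desktop. It is a sizing aid, not part of the validated procedure.

# N+1 capacity check: can the remaining hosts still run all desktops if one
# server fails? Uses the sample server from this guide (2 x 4-core CPUs,
# 128 GB RAM) and the solution ratios (8 desktops per core, 2 GB RAM per desktop).

import math

CORES_PER_SERVER = 8
RAM_GB_PER_SERVER = 128
DESKTOPS_PER_CORE = 8
RAM_GB_PER_DESKTOP = 2

def desktops_per_server() -> int:
    by_cpu = CORES_PER_SERVER * DESKTOPS_PER_CORE        # 64 by CPU
    by_ram = RAM_GB_PER_SERVER // RAM_GB_PER_DESKTOP     # 64 by RAM
    return min(by_cpu, by_ram)

def servers_needed(desktops: int, ha_spares: int = 1) -> int:
    base = math.ceil(desktops / desktops_per_server())
    return base + ha_spares  # one additional server so a failure leaves enough capacity

if __name__ == "__main__":
    print("Desktops per server:", desktops_per_server())              # 64
    print("Servers for 250 desktops with N+1:", servers_needed(250))  # 4 + 1 = 5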

Solution Stack Architectural Overview Compute layer While the choice of servers to implement in the compute layer is flexible, it is recommended to use enterprise-class servers designed for the datacenter. This type of server has redundant power supplies, which should be connected to separate power distribution units (PDUs) in accordance with your server vendor's best practices. Figure 13. Redundant Power Supplies It is also recommended to configure high availability in the virtualization layer. This means configuring the compute layer with enough resources so that the total available resources meet the needs of the environment even with a server failure, as shown in Figure 12 on page 52. Network layer The advanced networking features of the VNX family provide protection against network connection failures at the array. Each vsphere host has multiple connections to user and storage Ethernet networks to guard against link failures. These connections should be spread across multiple Ethernet switches to guard against component failure in the network. These connections are also illustrated in Figure 14 on page 54. 53

Solution Stack Architectural Overview Figure 14. Network Layer High Availability By ensuring that there are no single points of failure in the network layer, you can ensure that the compute layer is able to access storage, and communicate with users even if a component fails. Storage layer The VNX family is designed for five 9s availability by using redundant components throughout the array as shown in Figure 15. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss due to individual disk failures, and the available hot spare drives are dynamically allocated to replace a failing disk. Figure 15. VNXe series high availability EMC Storage arrays are designed to be highly available by default. When configured according to the directions in their installation guides there are no single unit failures that result in data loss or unavailability. 54
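For context on the availability target mentioned above, the following short calculation shows the annual downtime implied by five 9s (99.999 percent) availability.

# Annual downtime budget implied by five 9s (99.999 percent) availability.
minutes_per_year = 365.25 * 24 * 60
downtime_minutes = minutes_per_year * (1 - 0.99999)
print(f"Allowed downtime: about {downtime_minutes:.1f} minutes per year")  # ~5.3 minutes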

Solution Stack Architectural Overview Validation test profile Profile characteristics Table 7 shows the solution has a validated environment profile. Table 7. Validated environment profile Profile characteristic Value Number of virtual desktops 250 Virtual desktop operating system CPU per virtual desktop Windows 7 Enterprise (32-bit) SP1 1 vcpu Number of virtual desktops per CPU core 8 RAM per virtual desktop Desktop provisioning method Average storage available for each virtual desktop Average IOPS per virtual desktop at steady state Average peak IOPS per virtual desktop during boot storm 2 GB Linked clone 18 GB (vmdk and vswap) 9.3 IOPS 40 IOPS Number of datastores to store virtual desktops 2 Number of virtual desktops per datastore 125 Disk and RAID type for datastores Disk and RAID type for CIFS shares to host user profiles and home directories RAID 5, 300 GB, 15k rpm, 3.5 inch SAS disks RAID 6, 2 TB, 7,200 rpm, 3.5 inch NL-SAS disks Antivirus and antimalware platform profile Platform characteristics The solution is sized based on the vshield Endpoint platform requirements, as shown in Table 8. Table 8. Platform characteristics Platform Component VMware vshield Manager appliance VMware vshield Endpoint service VMware Tools vshield Technical Information Manages the vshield Endpoint service installed on each vsphere host. 1 vcpu, 3 GB RAM, and 8 GB hard disk space. Installed on each desktop vsphere host. The service uses up to 512 MB of RAM on the vsphere host. A component of the VMware tools suite that enables 55

Solution Stack Architectural Overview Platform Component Endpoint component Technical Information integration with the vsphere host vshield Endpoint service. Installed as an optional component of the VMware tools software package and should be installed on the master virtual desktop image. vshield Endpoint thirdparty security plug-in Requirements vary based on individual vendor specifications. Note A third-party plugin and associated components are required to complete the vshield Endpoint solution. vshield architecture The individual components of the VMware vshield Endpoint platform and the vshield partner security plug-ins each have specific CPU, RAM, and disk space requirements. The resource requirements vary based on a number of factors, such as the number of events logged, log retention needs, the number of desktops being monitored, and the number of desktops present on each vsphere host. vcenter Operations Manager for View platform profile Platform characteristics Table 9 shows the solution was sized based on the vcenter Operations Manager for View platform requirements. Table 9. Platform characteristics Platform Component VMware vcenter Operations Manager vapp VMware vc Ops for View Adapter Technical Information The vapp consists of a user interface (UI) virtual appliance and an Analytics virtual appliance. UI appliance requirements: 2 vcpu, 5 GB RAM, and 25 GB hard disk space Analytics appliance requirements: 2 vcpu, 7 GB RAM, and 150 GB hard disk space The vc Ops for View Adapter enables integration between vcenter Operations Manager and VMware View and requires a server running Microsoft Windows 2008 R2. The adapter gathers View related status information and statistical data. Server requirements: 2 vcpu, 6 GB RAM, and 30 GB hard disk space. 56
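As a rough aid for estimating the per-host overhead implied by Table 8, the sketch below adds the documented vshield Endpoint service memory (up to 512 MB per vsphere host) to a placeholder figure for the partner security virtual appliance; the appliance size is purely an assumption, since, as noted above, requirements vary by vendor.

# Rough per-host memory overhead for offloaded antivirus/antimalware scanning.
# 512 MB for the vShield Endpoint service comes from Table 8; the partner
# security appliance size below is an assumed placeholder -- check vendor docs.

ENDPOINT_SERVICE_MB = 512        # per vSphere host (Table 8)
PARTNER_APPLIANCE_MB = 2048      # assumption: per-host security virtual machine

def per_host_overhead_mb() -> int:
    return ENDPOINT_SERVICE_MB + PARTNER_APPLIANCE_MB

def cluster_overhead_gb(hosts: int) -> float:
    return hosts * per_host_overhead_mb() / 1024

if __name__ == "__main__":
    print(f"Per host: {per_host_overhead_mb()} MB")
    print(f"Across 5 desktop hosts: {cluster_overhead_gb(5):.1f} GB")  # 12.5 GB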

Solution Stack Architectural Overview vcenter Operations Manager for View architecture The individual components of vcenter Operations Manager for View have specific CPU, RAM, and disk space requirements. The resource requirements vary based on the number of desktops being monitored. The numbers provided in Table 9 assume that 250 desktops are monitored. Backup and Recovery configuration guidelines Backup characteristics Table 10 shows the solution sizing with the application environment profile. Table 10. Profile characteristics Profile characteristic Value Number of virtual desktops 250 User data 2.5 TB (10.0 GB per desktop) Daily change rate for user data User data 2% Retention per data types # daily 30 daily # weekly 4 weekly # monthly 1 monthly EMC Avamar AVE requirements 0.5 TB AVE 1.0 TB AVE 2.0 TB AVE 6 GB dedicated RAM and 850 GB disk space 8 GB dedicated RAM and 1,600 GB disk space 16 GB dedicated RAM and 3,100 GB disk space Two dedicated 2 GHz processors and one 1 GbE connection Backup layout EMC Avamar provides various deployment options depending on the specific use case and recovery requirements. In this case, the solution is deployed with two 2 TB Avamar Virtual Edition machines. This enables the unstructured user data to be backed up directly to the Avamar system for simple file-level recovery. The solution also enables customers to unify their backup process with industry-leading deduplication backup software, and achieve the highest levels of performance and efficiency. 57
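The profile in Table 10 can be turned into a first-order estimate of protected data and daily change with a few lines of Python. This naive sum ignores Avamar deduplication, which in practice reduces the data stored and transferred substantially, so treat the result as an upper bound rather than a sizing answer.

# First-order backup estimate from the Table 10 profile, before deduplication.
# Real Avamar sizing must account for dedup, compression, and commonality.

DESKTOPS = 250
USER_DATA_GB_PER_DESKTOP = 10.0   # 2.5 TB of user data in total
DAILY_CHANGE_RATE = 0.02          # 2 percent daily change

total_user_data_gb = DESKTOPS * USER_DATA_GB_PER_DESKTOP
daily_change_gb = total_user_data_gb * DAILY_CHANGE_RATE

print(f"Protected user data: {total_user_data_gb / 1000:.1f} TB")         # 2.5 TB
print(f"Raw daily change (pre-dedup): {daily_change_gb:.0f} GB per day")  # 50 GB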

Solution Stack Architectural Overview Sizing guidelines Reference workload In the following sections, the readers will find definitions of the reference workload used to size and implement the VSPEX. Guidance is provided on how to correlate those reference workloads to actual customer workloads and how that may change the end delivery from the server and network perspective. Modification to the storage definition is made by adding drives for greater capacity and performance. The disk layouts are created to provide support for the appropriate number of virtual desktops at the defined performance level. Decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per desktop and a reduced user experience due to higher response times. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual desktops that have been validated by EMC. In practice, each virtual desktop has its own set of requirements that rarely fit a pre-defined idea of what a virtual desktop should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. Defining the reference workload To simplify the discussion, we have defined a representative customer reference workload. By comparing the actual customer usage to this reference workload, you can extrapolate which solution to choose. For the VSPEX end-user computing solutions, the reference workload is defined as a single virtual desktop. Table 11 shows the characteristics of this virtual desktop: Table 11. Virtual desktop characteristics Characteristic Virtual desktop operating system Value Microsoft Windows 7 Enterprise Edition (32-bit) SP1 Virtual processors per virtual desktop 1 RAM per virtual desktop Available storage capacity per virtual desktop Average IOPS per virtual desktop at steady state 2 GB 18 GB (vmdk and vswap) 10 This desktop definition is based on user data that resides on shared storage. The I/O profile is defined by using a test framework that runs all desktops concurrently, with a steady load generated by the constant use of office-based applications like browsers, office productivity software, the Avamar backup agent and other standard task worker utilities. 58
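Because the reference virtual desktop in Table 11 is the unit used throughout the sizing discussion, its per-desktop figures can be rolled up to the 250-desktop pool with a small script; the resulting totals (250 vCPUs, 500 GB of RAM, 2,500 IOPS, and 4.5 TB of capacity) match the figures quoted elsewhere in this guide.

# Aggregate pool resources implied by the reference virtual desktop (Table 11).

from dataclasses import dataclass

@dataclass(frozen=True)
class ReferenceDesktop:
    vcpus: int = 1
    ram_gb: int = 2
    storage_gb: int = 18        # vmdk and vswap
    steady_state_iops: int = 10

def pool_totals(desktops: int, ref: ReferenceDesktop = ReferenceDesktop()) -> dict:
    return {
        "vcpus": desktops * ref.vcpus,
        "ram_gb": desktops * ref.ram_gb,
        "storage_gb": desktops * ref.storage_gb,
        "steady_state_iops": desktops * ref.steady_state_iops,
    }

if __name__ == "__main__":
    print(pool_totals(250))
    # {'vcpus': 250, 'ram_gb': 500, 'storage_gb': 4500, 'steady_state_iops': 2500}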

Solution Stack Architectural Overview Applying the reference workload In addition to the supported desktop numbers, there may be other factors to consider when deciding which end-user computing solution to deploy. Concurrency The workloads used to validate VSPEX solutions assume that all desktop users will be active at all times. In other words, the 250 desktop architecture is tested with 250 desktops, all generating workload in parallel, all booted at the same time, and so on. If the customer expects to have 300 users, but only 50 percent of them are logged on at any given time due to time zone differences or alternate shifts, the 150 active users out of the total 300 users can be supported by the 250 desktop architecture. Heavier desktop workloads The workload defined in Table 11 on page 58 and used to test this VSPEX End-User Computing configuration is considered a typical office worker load. However, some customers may feel that their users have a more active profile. If a company has 200 users and, due to custom corporate applications, each user generates 15 IOPS rather than the 10 IOPS assumed in the VSPEX workload, that customer will need 3,000 IOPS (200 users * 15 IOPS per desktop). This 250 desktop configuration would be underpowered in this case because it has been rated to 2,500 IOPS (250 desktops * 10 IOPS per desktop). This customer should refer to the VMware View 5.1 and VMware vsphere 5.1 for 500 Virtual Desktops document and consider moving up to the 500 desktops solution. Implementing the reference architectures Overview The solution architectures require a set of hardware to be available for the CPU, memory, network, and storage needs of the system. In the solution architectures, these are presented as general requirements that are independent of any particular implementation. This section describes some considerations for implementing the requirements. Resource types The solution architectures define the hardware requirements for the solution in terms of five basic types of resources: CPU resources Memory resources Network resources Storage resources Backup resources This section describes the resource types, how they are used in the solution, and key considerations for implementing them in a customer environment. CPU resources The solution architectures define the number of CPU cores that are required, but not a specific type or configuration. It is intended that new deployments use recent 59

Solution Stack Architectural Overview revisions of common processor technologies. It is assumed that these will perform as well as, or better than, the systems used to validate the solution. When using Avamar backup solution for VSPEX, considerations should be taken to not schedule all backups at once, but stagger them across your backup window. Scheduling all resources to backup at the same time could cause the consumption of all available host CPUs. In any running system, it is important to monitor the utilization of resources and adapt as needed. The reference virtual desktop and required hardware resources in the solutions architectures assume that there are no more than eight virtual CPUs for each physical processor core (8:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual desktop. This ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required. Memory resources Each virtual desktop in the solution is defined to have 2 GB of memory. In a virtual environment, it is common to provision virtual desktops with more memory than the hypervisor physically has due to budget constraints. The memory over-commitment technique takes advantage of the fact that each virtual desktop does not fully utilize the amount of memory allocated to it. It makes business sense to oversubscribe the memory usage to some degree. The administrator should proactively monitor the oversubscription rate such that it does not shift the bottleneck away from the server and become a burden to the storage subsystem. If VMware vsphere runs out of memory for the guest operating systems, paging will begin to take place, resulting in extra I/O activity going to the vswap files. If the storage subsystem is sized correctly, occasional spikes due to vswap activity may not cause performance issues as transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of vswap activity, more disks will need to be added not because of capacity requirement, but due to the demand of increased performance. The administrator must now decide whether it is more cost effective to add more physical memory to the server, or to increase the amount of storage. With memory modules being a commodity, it is likely less expensive to choose the former option. This solution was validated with statically assigned memory and no over-commitment of memory resources. If memory over-commit is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results. When using Avamar backup solution for VSPEX, considerations should be taken to stagger the scheduling of backups across your backup window. Do not schedule all backups to occur simultaneously. Scheduling all resources to backup at the same time could cause the consumption of all available host memory. Network resources The solution outlines the minimum needs of the system. If additional bandwidth is needed, it is important to add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server will 60

Solution Stack Architectural Overview depend on the type of server. The storage arrays have a number of included network ports, and have the option to add ports using EMC FLEX I/O modules. For reference purposes in the validated environment, EMC assumes that each virtual desktop generates 10 IOs per second with an average size of 4 KB. Each virtual desktop is generating at least 40 KB/s of traffic on the storage network. For an environment rated for 250 virtual desktops, this comes out to a minimum of approximately 8 MB/sec. This is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for: User network traffic. Virtual desktop migration. Administrative and management operations. The requirements for each of these will vary depending on how the environment is being used. It is not practical to provide concrete numbers in this context. The network described in the solution for each solution should be sufficient to handle average workloads for the above use cases. Regardless of the network traffic requirements, always have at least two physical network connections that are shared for a logical network so that a single link failure does not affect the availability of the system. The network should be designed so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload. Storage resources The solution contains a layout for the disks used in the validation of the system. The layout balances the available storage capacity with the performance capability of the drives. There are a few layers to consider when examining storage sizing. Specifically, the array has a collection of disks that are assigned to a storage pool. From that storage pool, you can provision datastores to the VMware vsphere Cluster. Each layer has a specific configuration that is defined for the solution and documented in the Chapter 5 VSPEX Configuration Guidelines. It is generally acceptable to replace drive types with a type that has more capacity with the same performance characteristics; or with ones that have higher performance characteristics and the same capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements. In other cases where there is a need to deviate from the proposed number and type of drives specified, or the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system. Backup resources The solution outlines the backup storage (initial and growth) and retention needs of the system. Additional information can be gathered to size Avamar further including tape-out needs, RPO and RTO specifics, as well as multi-site environment replication needs. 61

Solution Stack Architectural Overview Implementation summary The requirements stated in the solution are what EMC considers the minimum set of resources to handle the workloads required based on the stated definition of a reference virtual desktop. In any customer implementation, the load of a system will vary over time as users interact with the system. If the customer s virtual desktops differ significantly from the reference definition, and vary in the same resource, then you may need to add more of that resource to the system. Quick assessment Overview An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations, and help assess the customer environment. First, summarize the user types planned for migration into the VSPEX End-User Computing environment. For each group, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual desktops required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as shown in Table 12. Table 12. Blank worksheet row Application CPU (Virtual CPUs) Memory (GB) IOPS Equivalent Reference Virtual Desktops Number of Users Total Reference Desktops Example User Type Resource Requirements Equivalent Reference Desktops Fill out the resource requirements for the User Type. The row requires inputs on three different resources: CPU, Memory, and IOPS. CPU requirements Memory requirements The reference virtual desktop assumes most desktop applications are optimized for a single CPU. If one type of user requires a desktop with multiple virtual CPUs, modify the proposed virtual desktop count to account for the additional resources. For example, if you have virtualized 100 desktops, but 20 users require two CPUs instead of one, consider that your pool needs to provide 120 virtual desktops capability. Memory plays a key role in ensuring application functionality and performance. Therefore, each application process has different targets for the acceptable amount of available memory. Like the CPU calculation, if a group of users require additional memory resources, simply adjust the number of desktops you are planning for to accommodate the additional resource requirements. 62

Solution Stack Architectural Overview For example, if you have 100 desktops to be virtualized, but each one needs 4GB of memory instead of the 2GB that is provided in the reference virtual desktop, plan for 200 reference virtual desktops. Storage performance requirements Storage capacity requirements Determining equivalent reference virtual desktops The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications and should be representative of the majority of virtual desktop implementations. The storage capacity requirement for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops presented in this solution rely on additional shared storage for user profile data and user documents. This requirement is covered as an optional component that can be met with the addition of specific storage hardware defined in the Solution. It can also be covered with existing file shares in the environment. With all of the resources defined, determine an appropriate value for the Equivalent Reference virtual desktops line by using the relationships in Table 13. Round all values up to the closest whole number. Table 13. Reference virtual desktop resources Resource Value for Reference Virtual Desktop Relationship between requirements and equivalent reference virtual desktops. CPU 1 Equivalent Reference Virtual Desktops = Resource Requirements Memory 2 Equivalent Reference Virtual Desktops = (Resource Requirements)/2 IOPS 10 Equivalent Reference Virtual desktops = (Resource Requirements)/10 Consider a scenario where there is a group of 50 users who need the two virtual CPUs and 12 IOPS per desktop described earlier, along with 8 GB of memory on the resource requirements line. In this scenario, you should describe them as needing two reference desktops of CPU, four reference desktops of memory, and two reference desktops of IOPS based on the virtual desktop characteristics in Table 13. These figures go in the Equivalent Reference Desktops row as shown in Table 14 on page 64. Use the maximum value in the row to fill in the Equivalent Reference Virtual Desktops column. 63

Solution Stack Architectural Overview Multiply the number of equivalent reference virtual desktops by the number of users to arrive at the total resource needs for that type of user. Table 14. Example worksheet row User Type CPU(Virtual CPUs) Memory (GB) IOPS Equivalent Reference Virtual Desktops Number of Users Total Reference Desktops Heavy Users Resource Requirements 2 8 12 Equivalent Reference Virtual Desktops 2 4 2 4 50 200 Once the worksheet is filled out for each user type that the customer wants to migrate into the virtual infrastructure, compute the total number of reference virtual desktops required in the pool by computing the sum of the total column on the right side of the worksheet as shown in Table 15. Table 15. Example applications User Type CPU (Virtual CPUs) Memory (GB) IOPS Equivalent Reference Virtual Desktops Number of Users Total Reference Desktops Heavy Users Resource Requirements 2 8 12 Equivalent Reference Virtual Desktops 2 4 2 4 40 160 Moderate Users Resource Requirements 2 4 8 Equivalent Reference Virtual Desktops 2 2 1 2 20 40 Typical Users Resource Requirements 1 2 8 Equivalent Reference Virtual Desktops 1 1 1 1 20 40 Total 240 64

Solution Stack Architectural Overview The VSPEX End-User Computing Solutions define discrete resource pool sizes. For this solution set, the pool contains 250 desktops. In the case of Table 15 on page 64, the customer requires 240 virtual desktops of capability from the pool. Therefore, this 250 virtual desktop resource pool provides sufficient resources for the current needs as well as some room for growth. Fine tuning hardware resources In most cases, the recommended hardware for servers and storage is sized appropriately based on the process described. In some cases, there is a desire to customize the hardware resources available to the system beyond the recommended sizing. A complete description of system architecture is beyond the scope of this document. Additional customization can be done at this point. Storage resources In some applications, there is a need to separate some storage workloads from other workloads. The storage layouts in the VSPEX architectures put all of the virtual desktops in a single resource pool. In order to achieve workload separation, purchase additional disk drives for each group that needs workload isolation, and add them to a dedicated pool. It is not appropriate to reduce the size of the main storage resource pool in order to support isolation, or to reduce the capability of the pool without additional guidance beyond this paper. The storage layouts presented in the Solutions are designed to balance many different factors in terms of high availability, performance, and data protection. Changing the components of the pool can have significant and difficult to predict impacts on other areas of the system. Server resources For the server resources in the VSPEX end-user computing solution, it is possible to customize the hardware resources more effectively. To do this, first total the resource requirements for the server components as shown in Table 16. Note the addition of the Total CPU Resources and Total Memory Resources columns on the right side of the table. Table 16. Server resource component totals User Type CPU (Virtual CPUs) Memory (GB) Number of Users Total CPU Resources Total Memory Resources Heavy Users Moderate Users Typical Users Resource Requirements Resource Requirements Resource Requirements 2 8 15 30 120 2 4 40 80 160 1 2 100 100 200 Total 210 480 65

Solution Stack Architectural Overview In this example, the target architecture required 210 virtual CPUs and 480 GB of memory. With the stated assumptions of 8 desktops per physical processor core, and no memory over-provisioning, this translates to 27 physical processor cores and 480 GB of memory. In contrast, this 250 virtual desktop resource pool as documented in the Solution calls for 500 GB of memory and at least 32 physical processor cores. In this environment, the solution is effectively implemented with fewer server resources. Note Keep high availability requirements in mind when customizing the resource pool hardware. A blank worksheet is presented in Table 17 on page 67. 66

Solution Stack Architectural Overview Table 17. Blank customer worksheet User Type CPU (Virtual CPUs) Memory (GB) IOPS Equivalent Reference Virtual Desktops Number of Users Total Reference Desktops Resource Requirements Equivalent Reference Virtual Desktops Resource Requirements Equivalent Reference Virtual Desktops Resource Requirements Equivalent Reference Virtual Desktops Resource Requirements Equivalent Reference Virtual Desktops Resource Requirements Equivalent Reference Virtual Desktops Total 67
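To close out the sizing discussion, the sketch below automates the worksheet arithmetic: each user type is converted to equivalent reference virtual desktops using the relationships in Table 13 (rounding up per resource and taking the maximum), and raw CPU and memory totals are translated into physical cores at the solution's 8:1 vCPU-to-core ratio. The inputs reproduce the heavy-user example from Table 14 and the server-resource example from Table 16.

# Worksheet helper reproducing the sizing arithmetic from Tables 13, 14, and 16.

import math
from dataclasses import dataclass

# Reference virtual desktop (Table 13): 1 vCPU, 2 GB RAM, 10 IOPS.
REF_VCPUS, REF_RAM_GB, REF_IOPS = 1, 2, 10
VCPUS_PER_CORE = 8            # 8:1 ratio used throughout this solution

@dataclass
class UserType:
    name: str
    vcpus: int
    ram_gb: int
    iops: int
    users: int

    def equivalent_reference_desktops(self) -> int:
        # Round each resource up to whole reference desktops, then take the maximum.
        return max(
            math.ceil(self.vcpus / REF_VCPUS),
            math.ceil(self.ram_gb / REF_RAM_GB),
            math.ceil(self.iops / REF_IOPS),
        )

if __name__ == "__main__":
    # Table 14 example: 50 heavy users needing 2 vCPUs, 8 GB RAM, 12 IOPS each.
    heavy = UserType("Heavy", vcpus=2, ram_gb=8, iops=12, users=50)
    per_user = heavy.equivalent_reference_desktops()
    print(f"Equivalent reference desktops per heavy user: {per_user}")        # 4
    print(f"Total reference desktops for 50 heavy users: {per_user * heavy.users}")  # 200

    # Table 16 example population: raw server resource totals.
    population = [UserType("Heavy", 2, 8, 12, 15),
                  UserType("Moderate", 2, 4, 8, 40),
                  UserType("Typical", 1, 2, 8, 100)]
    total_vcpus = sum(u.vcpus * u.users for u in population)    # 210
    total_ram_gb = sum(u.ram_gb * u.users for u in population)  # 480
    cores_needed = math.ceil(total_vcpus / VCPUS_PER_CORE)      # 27 at 8:1, no over-provisioning
    print(f"{total_vcpus} vCPUs -> {cores_needed} cores; {total_ram_gb} GB RAM")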


Chapter 5 VSPEX Configuration Guidelines This chapter presents the following topics: Configuration overview... 70 Pre-deployment tasks... 71 Customer configuration data... 74 Prepare switches, connect network, and configure switches... 74 Prepare and configure storage array... 76 Install and configure vsphere hosts... 78 Install and configure SQL server database... 83 VMware vcenter server deployment... 85 Set up VMware View Connection Server... 88 Set up EMC Avamar... 90 Set up VMware vshield Endpoint... 114 Set up VMware vcenter Operations Manager for View... 116 69

VSPEX Configuration Guidelines Configuration overview The deployment process is divided into the stages shown in Table 18. Upon completion of the deployment, the VSPEX infrastructure is ready for integration with the existing customer network and server infrastructure. Table 18 lists the main stages in the solution deployment process. The table also includes references to chapters where relevant procedures are provided. Table 18. Deployment process overview Stage Description Reference 1 Verify prerequisites Pre-deployment 2 3 Obtain the deployment tools Gather customer configuration data Pre-deployment Pre-deployment 5 Configure the switches and networks, connect to the customer network Prepare switches, connect network, and configure switches 6 Install and configure the VNXe Prepare and configure storage array 7 Configure virtual desktop datastores Prepare and configure storage array 8 Install and configure the servers Install and configure vsphere hosts 9 Set up SQL server (used by VMware vcenter and View) 10 Install and configure vcenter and virtual machine networking 11 Set up VMware View Connection Server Install and configure SQL server database VMware vcenter server deployment Set up VMware View Connection Server 12 Set up EMC Avamar Set up EMC Avamar 13 Set up VMware vshield Endpoint 14 Set up VMware vcenter Operations Manager (vc Ops) for View Set up VMware vshield Endpoint Set up VMware vcenter Operations Manager for View 70

VSPEX Configuration Guidelines Pre-deployment tasks Overview Pre-deployment tasks, as shown in Table 19, include procedures that do not directly relate to environment installation and configuration, but whose results are needed at the time of installation. Examples of pre-deployment tasks are collection of hostnames, IP addresses, VLAN IDs, license keys, installation media, and so on. Perform these tasks before the customer visit to decrease the time required onsite. Table 19. Tasks for pre-deployment Task Description Reference Gather documents Gather the related documents listed in the Preface. These are used throughout the text of this document to provide detail on setup procedures and deployment best practices for the various components of the solution. EMC documentation Other documentation Gather tools Gather data Gather the required and optional tools for the deployment. Use Table 20 to confirm that all equipment, software, and appropriate licenses are available before the deployment process. Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer Configuration Data worksheet for reference during the deployment process. Table 20 Deployment prerequisites checklist Appendix B Deployment prerequisites Complete the VNXe Series Configuration Worksheet, available on the EMC online support website, to provide the most comprehensive array-specific information. Table 20 itemizes the hardware, software, and license requirements to configure the solution. 71

VSPEX Configuration Guidelines Table 20. Deployment prerequisites checklist Requirement Description Reference Hardware Physical servers to host virtual desktops: Sufficient physical server capacity to host 250 virtual desktops. VMware vsphere 5.1 servers to host virtual infrastructure servers. Note This requirement may be covered by existing infrastructure. Networking: Switch port capacity and capabilities as required by the End-User Computing. EMC VNXe3300: Multiprotocol storage array with the required disk layout. Software VMware vsphere 5.1 installation media. VMware vcenter server 5.1.0a installation media. VMware vshield Manager Open Virtualization Appliance (OVA) file. VMware vc Ops OVA file. VMware vc Ops for View Adapter. VMware View 5.1 installation media. vshield Endpoint partner antivirus solution management server software. vshield Endpoint partner security virtual machine software. EMC VSI for VMware vsphere: Unified Storage Management EMC VSI for VMware vsphere: Storage Viewer. EMC Online Support Microsoft Windows Server 2008 R2 installation media (suggested OS for VMware 72

Requirement Description Reference vcenter and VMware View Connection Server). VSPEX Configuration Guidelines Microsoft Windows 7 SP1 installation media. Microsoft SQL server 2008 or later installation media. Note: This requirement might be covered in the existing infrastructure. EMC vstorage API for Array Integration Plug-in. EMC Online Support Licenses VMware vcenter 5.1 license key. VMware vsphere 5.1 Desktop license keys. VMware View Premier 5.1 license keys. vshield Endpoint license keys (VMware). vshield Endpoint license keys (vshield Partner). VMware vc Ops for View. Microsoft Windows Server 2008 R2 Standard (or later) license keys. Note: This requirement might be covered in the existing Microsoft Key Management Server (KMS) Microsoft Windows 7 license keys. Note: This requirement might be covered in the existing Microsoft Key Management Server (KMS). Microsoft SQL server license key. Note: This requirement might be covered in the existing infrastructure. 73

Customer configuration data

To reduce the onsite time, information such as IP addresses and hostnames should be assembled as part of the planning process. Appendix A provides a table for maintaining a record of relevant information. This form can be expanded or contracted as required, and information may be added, modified, and recorded as the deployment progresses.

Additionally, complete the VNXe Series Configuration Worksheet, available on the EMC Online Support website, to provide the most comprehensive array-specific information.

Prepare switches, connect network, and configure switches

Overview

This chapter provides the network infrastructure requirements needed to support this solution. Table 21 summarizes the tasks to be completed and provides references for further information.

Table 21. Tasks for switch and network configuration

Task: Configure the infrastructure network
Description: Configure storage array and vSphere host infrastructure networking as specified in the solution document.

Task: Configure the VLANs
Description: Configure private and public VLANs as required.

Task: Complete the network cabling
Description: Connect switch interconnect ports. Connect VNXe ports. Connect vSphere server ports.

Reference (all tasks): Vendor's switch configuration guide

Configure infrastructure network

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and it is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 16 shows a sample redundant Ethernet infrastructure for this solution. The diagram illustrates the use of redundant switches and links to ensure that no single points of failure exist in network connectivity.

Figure 16. Sample Ethernet network architecture

Configure VLANs

Ensure that there are adequate switch ports for the storage array and vSphere hosts, and that they are configured with a minimum of three VLANs for:

- Virtual machine networking, vSphere management, and CIFS traffic (customer-facing networks, which may be separated if desired).
- NFS networking (private network).
- VMware vMotion (private network).
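To keep the switch, VNXe, and vSphere configurations consistent, it can help to record the VLAN plan in a small, machine-readable form before configuration begins. The following is a minimal sketch only; the VLAN IDs, subnets, and naming scheme are hypothetical placeholders, not values from the validated configuration.

```python
# Hypothetical VLAN plan for this solution; IDs and subnets are placeholders.
VLAN_PLAN = {
    "customer": {   # virtual machine, vSphere management, and CIFS traffic
        "vlan_id": 100,
        "subnet": "10.0.100.0/24",
        "private": False,
    },
    "nfs": {        # NFS storage traffic between the vSphere hosts and the VNXe
        "vlan_id": 200,
        "subnet": "192.168.20.0/24",
        "private": True,
    },
    "vmotion": {    # VMware vMotion traffic between vSphere hosts
        "vlan_id": 300,
        "subnet": "192.168.30.0/24",
        "private": True,
    },
}

def port_group_name(role: str) -> str:
    """Derive a consistent vSphere port group name from a VLAN role."""
    return f"pg-{role}-vlan{VLAN_PLAN[role]['vlan_id']}"

if __name__ == "__main__":
    for role in VLAN_PLAN:
        print(port_group_name(role))
```

Keeping a single source of VLAN IDs and names in this form makes it easier to apply the same values on the switches, the VNXe interfaces, and the vSphere port groups created later in this guide.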

Complete network cabling

Ensure that all solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is complete connection to the existing customer network.

Note: At this point, the new equipment is being connected to the existing customer network. Ensure that unforeseen interactions do not cause service issues on the customer network.

Prepare and configure storage array

Overview

This chapter describes how to configure the VNXe storage array. In this solution, the VNXe series provides NFS data storage for the VMware hosts.

Table 22. Tasks for storage configuration

Task: Set up the initial VNXe configuration
Description: Configure the IP address information and other key parameters on the VNXe.

Task: Set up VNXe networking
Description: Configure Link Aggregation Control Protocol (LACP) on the VNXe and the network switches.

Task: Provision storage for NFS datastores
Description: Create NFS file systems presented to the vSphere servers as NFS datastores hosting the virtual desktops.

Task: Provision optional storage for user data
Description: Create CIFS file systems that are used to store roaming user profiles and home directories.

Task: Provision optional storage for infrastructure virtual machines
Description: Create optional NFS datastores to host the SQL Server, domain controller, vCenter Server, and/or View Manager virtual machines.

Reference (all tasks): VNXe3300 System Installation Guide; VNXe Series Configuration Worksheet; vendor's switch configuration guide

Prepare VNXe

Set up the initial VNXe configuration

The VNXe3300 System Installation Guide provides instructions for assembly, racking, cabling, and powering the VNXe. There are no specific setup steps for this solution. After completing the initial VNXe setup, you need to configure key information about the existing environment so that the storage array can communicate. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information:

- DNS
- NTP
- Storage network interfaces
- Storage network IP address
- CIFS services and Active Directory domain membership

The reference documents listed in Table 22 on page 76 provide more information on how to configure the VNXe platform. Storage configuration guidelines on page 49 provides more information on the disk layout.

Set up VNXe networking

The VNXe supports Ethernet port aggregation so that users can bind Ethernet ports together as a single logical interface. The interfaces must be on the same IP subnet and connected to the same physical or logical switch. For the NFS datastores used in this solution, LACP should be used to provide additional network cable redundancy rather than to increase overall throughput. The following steps show how to configure LACP on the VNXe if more than one network interface is available:

1. In the VNXe Unisphere dashboard, select Settings.
2. Click More configuration. The More Configuration page appears.
3. Click Advanced Configuration. The Advanced Configuration page appears.
4. In Advanced Configuration, select the port you want to aggregate.
   Note: Ports can be aggregated only with eth2 from the base port list and only with eth10 from the list of I/O modules.
5. Select Aggregate with eth2 or eth10, and then click Apply changes. The changes are applied and the aggregation is complete.

Provision storage for NFS datastores

Note: Additional configuration may be required on the network switch. These steps are available in the configuration materials from the switch vendor.

Complete the following steps in EMC Unisphere to configure the NFS file systems on the VNXe that are used for storing virtual desktops:

1. Create a pool with the appropriate number of disks.
   a. In Unisphere, navigate to System > Storage Pools, and then select Configure Disks.
   b. Create a new pool manually by Disk Type for the SAS drives.
   Note: The validated configuration uses a single pool with 21 drives. In other scenarios, creating separate pools may be advisable. Hot spare disks must be created at this point. Consult the VNXe3300 System Installation Guide for additional information. Figure 10 on page 51 shows the target core storage layout for the solution.
2. Create an NFS Shared Folder Server.

   Access this wizard in Unisphere by navigating to Settings > Shared Folder Server Settings > Add Shared Folder Server. Refer to the VNXe3300 System Installation Guide for more detailed instructions.
3. Create a VMware storage resource.
   a. In Unisphere, navigate to Storage > VMware > Create.
   b. Create two NFS datastores on the pool and shared folder server created above. The size of each datastore is determined by the number of virtual desktops it contains. The validated configuration used 1 TB for each datastore.
   Note: Thin provisioning should not be enabled.
4. Finally, add the required vSphere hosts to the list of hosts allowed to access the new datastores.

Provision optional storage for user data

If the storage required for user data (that is, roaming user profiles or View Persona Management repositories and user/home directories) does not already exist in the production environment and the optional user data disk pack has been purchased, complete the following steps in Unisphere to configure two CIFS file systems on the VNXe:

1. Create a RAID 6 storage pool that consists of twelve 2 TB NL-SAS drives. Figure 11 on page 51 shows the target optional user data storage layout.
2. Create two file systems from the storage pool and export them as CIFS shares on a CIFS server.

Provision optional storage for infrastructure virtual machines

If the storage required for infrastructure virtual machines (that is, SQL Server, domain controller, vCenter Server, vCenter Operations Manager for View, and/or VMware View Connection Servers) does not already exist in the production environment, and the optional user data disk pack has been purchased, configure an NFS file system on the VNXe to be used as an NFS datastore in which the infrastructure virtual machines reside. Repeat the configuration steps shown in Provision storage for NFS datastores to provision the optional storage, while taking into account the smaller number of drives.

Install and configure vSphere hosts

Overview

This chapter provides information about the installation and configuration of the vSphere hosts and infrastructure servers required to support the architecture. Table 23 describes the tasks to be completed.

Table 23. Tasks for server installation

Task: Install vSphere
Description: Install the vSphere 5.1 hypervisor on the physical servers deployed for the solution.
Reference: vSphere Installation and Setup Guide

Task: Configure vSphere networking
Description: Configure vSphere networking, including NIC trunking, VMkernel ports, virtual machine port groups, and jumbo frames.
Reference: vSphere Networking

Task: Connect VMware datastores
Description: Connect the VMware datastores to the vSphere hosts deployed for the solution.
Reference: vSphere Storage Guide

Install vSphere

Upon initial power-up of the servers being used for vSphere, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted MMU virtualization settings in each server's BIOS. If the servers are equipped with a RAID controller, configuring mirroring on the local disks is recommended. Start up the vSphere 5.1 installation media and install the hypervisor on each of the servers. vSphere hostnames, IP addresses, and a root password are required for installation. Appendix B provides appropriate values.

Configure vSphere networking

During the installation of VMware vSphere, a standard virtual switch (vSwitch) is created. By default, vSphere chooses only one physical NIC as a vSwitch uplink. To maintain redundancy and meet bandwidth requirements, an additional NIC must be added, either by using the vSphere console or by connecting to the vSphere host from the vSphere Client.

Each VMware vSphere server should have multiple interface cards for each virtual network to ensure redundancy and to provide for the use of network load balancing, link aggregation, and network adapter failover. The VMware vSphere networking configuration, including load balancing, link aggregation, and failover options, is described in vSphere Networking. Refer to the list of documents in the Preface of this document for more information. Choose the appropriate load-balancing option based on what is supported by the network infrastructure.

Create VMkernel ports as required, based on the infrastructure configuration:

- VMkernel port for NFS traffic
- VMkernel port for VMware vMotion
- Virtual desktop port groups (used by the virtual desktops to communicate on the network)

vSphere Networking describes the procedure for configuring these settings. Refer to the list of documents in the Preface of this document for more information.
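For environments where these networking steps are scripted rather than performed in the vSphere Client, the following sketch shows one possible way to create a virtual machine port group and an NFS VMkernel port on an existing standard vSwitch using the pyVmomi library. It is illustrative only: the vCenter address, credentials, host name, vSwitch name, VLAN IDs, and IP addressing are hypothetical placeholders and are not taken from the validated configuration.

```python
# Minimal sketch using pyVmomi; all names, addresses, and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password")          # add sslContext=... if certificate checks apply
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.local")
    net_sys = host.configManager.networkSystem

    # Port group for virtual desktop traffic on the existing standard vSwitch.
    pg_spec = vim.host.PortGroup.Specification(
        name="VDI-Desktops", vlanId=100, vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=pg_spec)

    # Port group plus VMkernel interface for NFS traffic (private network).
    nfs_pg = vim.host.PortGroup.Specification(
        name="NFS", vlanId=200, vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=nfs_pg)
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="192.168.20.11",
                             subnetMask="255.255.255.0"))
    net_sys.AddVirtualNic(portgroup="NFS", nic=nic_spec)
finally:
    Disconnect(si)
```

A VMkernel port for vMotion would be created the same way, with the vMotion flag set afterwards through the host's virtual NIC manager; whichever method is used, the resulting configuration should match the redundancy and load-balancing guidance above.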

Jumbo frames

A jumbo frame is an Ethernet frame with a payload greater than 1,500 bytes and up to 9,000 bytes. The payload size is also known as the Maximum Transmission Unit (MTU), and the generally accepted maximum size for a jumbo frame is 9,000 bytes. Processing overhead is proportional to the number of frames, so enabling jumbo frames reduces processing overhead by reducing the number of frames to be sent, which increases network throughput.

Jumbo frames must be enabled end-to-end. This includes the network switches, vSphere servers, and VNXe storage processors (SPs). EMC recommends enabling jumbo frames on the networks and interfaces used for carrying NFS traffic.

Jumbo frames can be enabled on the vSphere server at two different levels. If all the port groups on the vSwitch need to be enabled for jumbo frames, select the properties of the vSwitch and edit the MTU setting from vCenter. If only specific VMkernel ports are to be jumbo frame-enabled, edit the VMkernel port under network properties from vCenter.

To enable jumbo frames on the VNXe in Unisphere, navigate to Settings > More Configuration > Advanced Configuration. Select the appropriate I/O module and Ethernet port, and then set the MTU to 9,000. Jumbo frames may also need to be enabled on each network switch; consult your switch configuration guide for instructions.

Connect VMware datastores

Connect the datastores configured in Prepare and configure storage array to the appropriate vSphere servers. These include the datastores configured for:

- Virtual desktop storage
- Infrastructure virtual machine storage (if required)
- SQL server storage (if required)

The vSphere Storage Guide provides instructions on how to connect the VMware datastores to the vSphere host. Refer to the list of documents in Appendix C of this document for more information. A scripted sketch of the jumbo frame and datastore-mount steps appears after the memory-planning overview below.

The vSphere vStorage APIs for Array Integration (VAAI) plug-in for NFS must be installed after VMware vCenter has been deployed, as described in VMware vCenter server deployment.

Plan virtual machine memory allocations

Server capacity is required for two purposes in the solution:

- To support the new virtualized desktop infrastructure.
- To support the required infrastructure services such as authentication/authorization, DNS, and databases.

For information on minimum infrastructure services hosting requirements, refer to Table 3 on page 41. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.
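As referenced above, the following sketch illustrates one scripted way to raise the MTU on a standard vSwitch and an NFS VMkernel port and to mount an NFS export from the VNXe as a datastore, using pyVmomi. It is a minimal, hedged example: the vSwitch name, port group name, NFS server address, export path, and datastore name are hypothetical placeholders rather than values from the validated configuration, and it assumes a host object obtained as in the earlier networking sketch.

```python
# Minimal sketch using pyVmomi; assumes `host` is a vim.HostSystem obtained as in
# the earlier networking example. All names and addresses are placeholders.
from pyVmomi import vim

net_sys = host.configManager.networkSystem

# 1. Enable jumbo frames on the standard vSwitch (affects all port groups on it).
vswitch = next(s for s in net_sys.networkInfo.vswitch if s.name == "vSwitch0")
vs_spec = vswitch.spec
vs_spec.mtu = 9000
net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=vs_spec)

# 2. Enable jumbo frames on the NFS VMkernel port only.
vmk = next(n for n in net_sys.networkInfo.vnic if n.portgroup == "NFS")
vmk_spec = vmk.spec
vmk_spec.mtu = 9000
net_sys.UpdateVirtualNic(device=vmk.device, nic=vmk_spec)

# 3. Mount an NFS export from the VNXe as a datastore on this host.
ds_sys = host.configManager.datastoreSystem
nas_spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.20.50",          # VNXe Shared Folder Server IP (placeholder)
    remotePath="/vdi_datastore_1",       # NFS export path (placeholder)
    localPath="vdi_datastore_1",         # datastore name as seen by vSphere
    accessMode="readWrite")
ds_sys.CreateNasDatastore(spec=nas_spec)
```

Whichever approach is used, verify end-to-end jumbo frame support with a large, non-fragmenting ping between the VMkernel interface and the VNXe before placing desktops on the datastore.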

Memory configuration

Proper sizing and configuration of the solution requires care when configuring server memory. The following section provides general guidance on memory allocation for the virtual desktops and factors in vSphere overhead and the virtual machine configuration. We begin with an overview of how memory is managed in a VMware environment.

ESX/ESXi memory management

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources such as memory in order to provide resource isolation across multiple virtual machines while avoiding resource exhaustion. Where advanced processors (such as Intel processors with EPT support) are deployed, this abstraction takes place within the CPU. Otherwise, it occurs within the hypervisor itself via a feature known as shadow page tables.

vSphere employs the following memory management techniques:

- Memory over-commitment: allocation of more memory to virtual machines than is physically available on the host.
- Transparent page sharing: identical memory pages that are shared across virtual machines are merged, and duplicate pages are returned to the host free memory pool for reuse.
- Memory compression: ESXi stores pages that would otherwise be swapped out to disk through host swapping in a compression cache located in main memory.
- Memory ballooning: relieves host resource exhaustion by requesting that free pages be allocated from the virtual machine back to the host for reuse.
- Hypervisor swapping: causes the host to force arbitrary virtual machine pages out to disk.

Additional information is available at http://www.vmware.com/files/pdf/mem_mgmt_perf_vsphere5.pdf.
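To see whether these reclamation techniques are active in a deployed environment, the per-virtual-machine counters exposed through vCenter can be inspected. The sketch below is a hedged example using pyVmomi; it assumes an active connection object `si` obtained as in the earlier networking sketch, and the reporting threshold (any nonzero value) is illustrative rather than a recommendation from this solution.

```python
# Minimal monitoring sketch using pyVmomi; assumes `si` is an active connection
# obtained as in the earlier networking example.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    qs = vm.summary.quickStats
    # balloonedMemory and swappedMemory are reported per VM by vCenter (in MB);
    # nonzero values indicate reclamation beyond page sharing is occurring.
    if qs.balloonedMemory or qs.swappedMemory:
        print(f"{vm.name}: ballooned={qs.balloonedMemory} MB, "
              f"swapped={qs.swappedMemory} MB")

view.DestroyView()
```

Persistent ballooning is usually tolerable, but sustained swapping on desktop virtual machines is a sign that the memory sizing discussed in the next section should be revisited.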

Virtual machine memory concepts

Figure 17 shows the memory settings parameters in the virtual machine.

Figure 17. Virtual machine memory settings

- Configured memory: physical memory allocated to the virtual machine at the time of creation.
- Reserved memory: memory that is guaranteed to the virtual machine.
- Touched memory: memory that is active or in use by the virtual machine.
- Swappable memory: memory that can be de-allocated from the virtual machine if the host is under memory pressure from other virtual machines, via ballooning, compression, or swapping.

The following are the recommended best practices:

- Do not disable the default memory reclamation techniques. These are lightweight processes that enable flexibility with minimal impact to workloads.
- Intelligently size memory allocations for virtual machines. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources. Over-committing can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases, when hypervisor swapping is encountered, virtual machine performance will likely be adversely affected. Having performance baselines of your virtual machine workloads assists in this process. An excellent reference on esxtop can be found in the VMware community blog: http://communities.vmware.com/docs/doc-9279
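As a rough aid to the sizing guidance above, the following sketch computes the physical memory needed to back a pool of desktops without relying on over-commitment. The figures shown (per-desktop memory, per-VM overhead, per-host reservation, host count, and RAM per host) are hypothetical placeholders, not values taken from this solution's validated configuration; only the desktop count mirrors the solution's 250-desktop scale.

```python
# Hypothetical sizing sketch; the figures below are placeholders, not validated values.
desktops = 250                 # number of virtual desktops (matches this solution's scale)
configured_mem_gb = 2.0        # configured memory per desktop
overhead_per_vm_gb = 0.15      # approximate hypervisor overhead per VM (varies with vCPU/memory size)
hypervisor_reserve_gb = 4.0    # memory set aside for the hypervisor on each host
hosts = 4                      # number of vSphere hosts in the desktop cluster
ram_per_host_gb = 192.0        # physical RAM installed per host

required_gb = desktops * (configured_mem_gb + overhead_per_vm_gb)
usable_gb = hosts * (ram_per_host_gb - hypervisor_reserve_gb)

print(f"Required without over-commitment: {required_gb:.0f} GB")
print(f"Usable across the cluster:        {usable_gb:.0f} GB")
if usable_gb < required_gb:
    print("Warning: this layout relies on memory over-commitment or reclamation.")
```

Substituting the actual configured desktop memory and host inventory into a calculation like this, and comparing the result against observed baselines, helps confirm that the reclamation techniques described above remain a safety net rather than a steady-state dependency.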