EMC VSPEX END-USER COMPUTING


EMC VSPEX END-USER COMPUTING
Citrix XenDesktop 7 and Microsoft Hyper-V Server 2012 for up to 2,000 Virtual Desktops
Enabled by EMC Next-Generation VNX and EMC Backup

Abstract

This document describes the EMC VSPEX end-user computing solution with Citrix XenDesktop, Microsoft Hyper-V Server 2012, and EMC Next-Generation VNX for up to 2,000 virtual desktops.

December 2013

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published December 2013.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX End-User Computing with Citrix XenDesktop 7 and Microsoft Hyper-V Server 2012 for up to 2,000 Virtual Desktops
Part Number H

Contents

Chapter 1: Executive Summary
    Introduction
    Audience
    Purpose of this guide
    Business needs

Chapter 2: Solution Overview
    Solution overview
    Desktop broker
    Virtualization
    Compute
    Network
    Backup
    Storage

Chapter 3: Solution Technology Overview
    Solution technology
    Summary of key components
    Desktop virtualization
        Citrix XenDesktop
        Machine Creation Services
        Citrix Provisioning Services
        Citrix Personal vDisk
        Citrix Profile Management
    Virtualization
        Microsoft Hyper-V Server
        Microsoft System Center Virtual Machine Manager
        Hyper-V High Availability
        EMC Storage Integrator for Windows
    Compute
    Network
    Storage
        EMC VNX Snapshots
        EMC VNX SnapSure
        EMC VNX Virtual Provisioning
        VNX FAST Cache
        VNX FAST VP (optional)
        VNX file shares
        ROBO
    Backup and recovery
        EMC Avamar
    ShareFile
        ShareFile StorageZones
        ShareFile StorageZone Architecture
        Using ShareFile StorageZone with VSPEX architectures

Chapter 4: Solution Overview
    Solution overview
    Solution architecture
        Logical architecture
        Key components
        Hardware resources
        Software resources
        Sizing for validated configuration
    Server configuration guidelines
        Microsoft Hyper-V memory virtualization for VSPEX
        Memory configuration guidelines
    Network configuration guidelines
        VLAN
        Enable jumbo frames
        Link aggregation
    Storage configuration guidelines
        Hyper-V storage virtualization for VSPEX
        VSPEX storage building block
        VSPEX end-user computing validated maximums
        Storage layout for 500 virtual desktops
        Storage layout for 1,000 virtual desktops
        Storage layout for 2,000 virtual desktops
    High availability and failover
        Virtualization layer
        Compute layer
        Network layer
        Storage layer
    Validation test profile
    Backup environment configuration guidelines
        Backup characteristics
        Backup layout
    Sizing guidelines
        Reference workload
        Defining the reference workload
        Applying the reference workload
    Implementing the reference architectures
        Resource types
        Backup resources
        Expanding existing VSPEX EUC environments
        Implementation summary
    Quick assessment
        CPU requirements
        Memory requirements
        Storage performance requirements
        Storage capacity requirements
        Determining equivalent reference virtual desktops
        Fine-tuning

Chapter 5: VSPEX Configuration Guidelines
    Overview
    Pre-deployment tasks
        Deployment prerequisites
        Customer configuration data
    Preparing switches, connecting the network, and configuring switches
        Preparing network switches
        Configuring infrastructure network
        Configuring VLANs
        Completing network cabling
    Preparing and configuring the storage array
        Configuring VNX
        Provisioning core data storage
        Provisioning optional storage for user data
        Provisioning optional storage for infrastructure virtual machines
    Installing and configuring Microsoft Hyper-V hosts
        Installing Windows hosts
        Installing Hyper-V and configuring failover clustering
        Configuring Windows host networking
        Installing PowerPath on Windows servers
        Enabling jumbo frames
        Planning virtual machine memory allocations
    Installing and configuring SQL Server database
        Creating a virtual machine for Microsoft SQL Server
        Installing Microsoft Windows on the virtual machine
        Installing SQL Server
        Configuring database for Microsoft SCVMM
    Deploying System Center Virtual Machine Manager server
        Creating a SCVMM host virtual machine
        Installing the SCVMM guest OS
        Installing the SCVMM server
        Installing the SCVMM Management Console
        Installing the SCVMM agent locally on a host
        Adding a Hyper-V cluster into SCVMM
        Adding file share storage to SCVMM (file variant only)
        Creating a virtual machine in SCVMM
        Creating a template virtual machine
        Deploying virtual machines from the template virtual machine
    Installing and configuring XenDesktop controller
        Installing server-side components of XenDesktop
        Configuring a site
        Adding a second controller
        Installing Citrix Studio
        Preparing master virtual machine
        Provisioning virtual desktops
    Installing and configuring Provisioning Services (PVS only)
        Configuring a PVS server farm
        Adding a second PVS server
        Create a PVS store
        Configuring inbound communication
        Configuring a bootstrap file
        Setting up a TFTP server on VNX
        Configuring boot options 66 and 67 on DHCP server
        Preparing the master virtual machine
        Provisioning the virtual desktops
    Setting up EMC Avamar
        GPO additions for EMC Avamar
        Preparing the master image for EMC Avamar
        Defining datasets
        Defining schedules
        Adjusting the maintenance window schedule
        Defining retention policies
        Creating groups and group policy
        EMC Avamar Enterprise Manager: activating clients
    Summary

Chapter 6: Validating the Solution
    Overview
    Post-installation checklist
    Deploying and testing a single virtual desktop
    Verifying the redundancy of the solution components

Appendix A: Bills of Materials
    Bill of materials for 500 virtual desktops
    Bill of materials for 1,000 virtual desktops
    Bill of materials for 2,000 virtual desktops

Appendix B: Customer Configuration Data Sheet
    Customer configuration data sheets

Appendix C: References
    References
    EMC documentation
    Other documentation

Appendix D: About VSPEX
    About VSPEX

Figures

Figure 1. Next-Generation VNX with multicore optimization
Figure 2. Active/active processors increase performance, resiliency, and efficiency
Figure 3. Latest Unisphere Management Suite
Figure 4. Solution components
Figure 5. XenDesktop 7 architecture components
Figure 6. Compute layer flexibility
Figure 7. Example of highly-available network design
Figure 8. Storage pool rebalance progress
Figure 9. Thin LUN space utilization
Figure 10. Examining storage pool space utilization
Figure 11. Defining storage pool utilization thresholds
Figure 12. Defining automated notifications for block
Figure 13. ShareFile high-level architecture
Figure 14. Logical architecture: VSPEX end-user computing for Citrix XenDesktop with ShareFile StorageZone
Figure 15. Logical architecture for SMB variant
Figure 16. Logical architecture for FC variant
Figure 17. Hypervisor memory consumption
Figure 18. Required networks
Figure 19. Hyper-V virtual disk types
Figure 20. Core storage layout with PVS provisioning for 500 virtual desktops
Figure 21. Core storage layout with MCS provisioning for 500 virtual desktops
Figure 22. Optional storage layout for 500 virtual desktops
Figure 23. Core storage layout with PVS provisioning for 1,000 virtual desktops
Figure 24. Core storage layout with MCS provisioning for 1,000 virtual desktops
Figure 25. Optional storage layout for 1,000 virtual desktops
Figure 26. Core storage layout with PVS provisioning for 2,000 virtual desktops
Figure 27. Core storage layout with MCS provisioning for 2,000 virtual desktops
Figure 28. Optional storage layout for 2,000 virtual desktops
Figure 29. High availability at the virtualization layer
Figure 30. Redundant power supplies
Figure 31. Network layer high availability
Figure 32. VNX series high availability
Figure 33. Sample network architecture SMB variant
Figure 34. Sample network architecture FC variant
Figure 35. Set nthread parameter
Figure 36. Storage System Properties dialog box
Figure 37. Create FAST Cache dialog box
Figure 38. Advanced tab in the Create Storage Pool dialog box
Figure 39. Advanced tab in the Storage Pool Properties dialog box
Figure 40. Storage Pool Properties window
Figure 41. Manage Auto-Tiering dialog box
Figure 42. LUN Properties window
Figure 43. Configure Bootstrap dialog box
Figure 44. Configuring Windows Folder Redirection
Figure 45. Create a Windows network drive mapping for user files
Figure 46. Configure drive mapping settings
Figure 47. Configure drive mapping common settings
Figure 48. Create a Windows network drive mapping for user profile data
Figure 49. Avamar tools menu
Figure 50. Avamar Manage All Datasets dialog box
Figure 51. Avamar New Dataset dialog box
Figure 52. Configure Avamar Dataset settings
Figure 53. User Profile data dataset
Figure 54. User Profile data dataset Exclusion settings
Figure 55. User Profile data dataset Options settings
Figure 56. User Profile data dataset Advanced Options settings
Figure 57. Avamar default Backup/Maintenance Windows schedule
Figure 58. Avamar modified Backup/Maintenance Windows schedule
Figure 59. Create new Avamar backup group
Figure 60. New backup group settings
Figure 61. Select backup group dataset
Figure 62. Select backup group schedule
Figure 63. Select backup group retention policy
Figure 64. Avamar Enterprise Manager
Figure 65. Avamar Client Manager
Figure 66. Avamar Activate Client dialog box
Figure 67. Avamar Activate Client menu
Figure 68. Avamar Directory Service configuration
Figure 69. Avamar Client Manager post configuration
Figure 70. Avamar Client Manager virtual desktop clients
Figure 71. Avamar Client Manager select virtual desktop clients
Figure 72. Select Avamar groups
Figure 73. Activate Avamar clients
Figure 74. Commit Avamar client activation
Figure 75. Avamar client activation informational prompt one
Figure 76. Avamar client activation informational prompt two
Figure 77. Avamar Client Manager activated clients

Tables

Table 1. Thresholds and settings under VNX OE Block Release
Table 2. Minimum hardware resources to support ShareFile StorageZone with Storage Center
Table 3. Recommended EMC VNX storage needed for ShareFile StorageZone CIFS share
Table 4. Solution hardware
Table 5. Solution software
Table 6. Configurations that support this solution
Table 7. Server hardware
Table 8. Hardware resources for network
Table 9. Storage hardware
Table 10. Number of disks required for various numbers of virtual desktops
Table 11. Validated environment profile
Table 12. Backup profile characteristics
Table 13. Virtual desktop characteristics
Table 14. Blank worksheet row
Table 15. Reference virtual desktop resources
Table 16. Example worksheet row
Table 17. Example applications
Table 18. Server resource component totals
Table 19. Blank customer worksheet
Table 20. Deployment process overview
Table 21. Tasks for pre-deployment
Table 22. Deployment prerequisites checklist
Table 23. Tasks for switch and network configuration
Table 24. Tasks for storage configuration
Table 25. Tasks for server installation
Table 26. Tasks for SQL Server database setup
Table 27. Tasks for SCVMM configuration
Table 28. Tasks for XenDesktop controller setup
Table 29. Tasks for XenDesktop controller setup
Table 30. Tasks for Avamar integration
Table 31. Tasks for testing the installation
Table 32. List of components used in the VSPEX solution for 500 virtual desktops
Table 33. List of components used in the VSPEX solution for 1,000 virtual desktops
Table 34. List of components used in the VSPEX solution for 2,000 virtual desktops
Table 35. Common server information
Table 36. Hyper-V server information
Table 37. Array information
Table 38. Network infrastructure information
Table 39. VLAN information
Table 40. Service accounts

Chapter 1: Executive Summary

This chapter presents the following topics:
Introduction
Audience
Purpose of this guide
Business needs

Introduction

EMC VSPEX validated and modular architectures are built with proven technologies to create complete virtualization solutions that enable you to make an informed decision at the hypervisor, compute, and networking layers. VSPEX eliminates server virtualization planning and configuration burdens. Whether you are embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choices, greater efficiency, and lower risk.

This document is intended to be a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces. Customers are free to select any server and networking hardware that meets or exceeds the stated minimums.

Audience

This guide assumes you have the necessary training and background to install and configure an end-user computing solution based on Citrix XenDesktop with Microsoft Hyper-V as the hypervisor, EMC VNX series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and you should be familiar with these documents. You should also be familiar with the infrastructure and database security policies of the customer installation. Individuals focused on selling and sizing a VSPEX end-user computing solution for Citrix XenDesktop should pay particular attention to the first four chapters of this document. Implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Purpose of this guide

This guide presents an initial introduction to the VSPEX end-user computing architecture, an explanation of how to modify the architecture for specific engagements, and instructions for effectively deploying the system.
The VSPEX end-user computing architecture provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This solution runs on a Microsoft Hyper-V virtualization layer backed by the highly available VNX storage family, with Citrix's XenDesktop as the desktop broker. The compute and network components, while vendor-definable, are designed to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual machine environment. The 500, 1,000, and 2,000 virtual desktop environments discussed are based on a defined desktop workload. While not every virtual desktop has the same

requirements, this document provides adjustment methods and guidance for deploying a cost-effective system.

An end-user computing or virtual desktop architecture is a complex system offering. This document facilitates setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. Validation tests are provided to ensure that your system is up and running properly after the last component has been installed. Follow the guidelines in this document to ensure an efficient and painless desktop deployment.

Business needs

Business applications are increasingly deployed in consolidated compute, network, and storage environments. Using Citrix for EMC VSPEX end-user computing reduces the complexity of configuring the components of a traditional deployment model. It simplifies integration management while maintaining the application design and implementation options. Citrix unifies administration while enabling the control and monitoring of process separation.

The VSPEX end-user computing solution for Citrix addresses the following business needs:
Provides an end-to-end virtualization solution that utilizes the capabilities of the unified infrastructure components
Provides a solution for efficiently virtualizing 500, 1,000, or 2,000 virtual desktops for varied customer use cases
Provides a reliable, flexible, and scalable reference design

Chapter 2: Solution Overview

This chapter presents the following topics:
Solution overview
Desktop broker
Virtualization
Compute
Network
Backup
Storage

Solution overview

The EMC VSPEX end-user computing solution for Citrix XenDesktop on Microsoft Hyper-V Server 2012 provides a complete system architecture capable of supporting and protecting up to 2,000 virtual desktops with a redundant server and network topology, highly available storage, and trusted EMC backup solutions. The core components of this solution are the desktop broker, virtualization, storage, network, and compute layers.

Desktop broker

XenDesktop is the virtual desktop solution from Citrix that allows virtual desktops to run on the Microsoft Hyper-V virtualization environment. It centralizes desktop management and provides increased control for IT organizations. XenDesktop allows end users to connect to their desktops from multiple devices across a network connection.

Virtualization

Microsoft Hyper-V is a virtualization platform that provides flexibility and cost savings by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core Microsoft virtualization components are the Hyper-V hypervisor and Microsoft System Center Virtual Machine Manager for system management. The Hyper-V hypervisor runs on a dedicated server and allows multiple operating systems to execute on the system simultaneously as virtual machines. Microsoft Failover Clustering allows multiple Hyper-V servers to operate in a clustered configuration. The Hyper-V cluster is managed as a larger resource pool through Microsoft System Center Virtual Machine Manager, allowing dynamic allocation of CPU, memory, and storage across the cluster. High-availability features of Microsoft Hyper-V Server 2012, such as Live Migration and Storage Migration, enable seamless migration of virtual machines and stored files from one Hyper-V server to another with minimal or no performance impact.

Compute
VSPEX allows flexibility in the design and implementation of the vendor's choice of server components. The infrastructure must conform to the following attributes:
Sufficient CPU cores and memory to support the required number and types of virtual machines
Sufficient network connections to enable redundant connectivity to the system switches
Excess capacity to support failover after a server failure in the environment
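The failover-capacity attribute above can be expressed as a simple sizing check. The sketch below is illustrative only: the per-desktop figures, the vCPU-to-core consolidation ratio, and the N+1 spare count are assumptions for the example, not VSPEX-validated values.

```python
import math

def hosts_required(desktops, vcpus_per_desktop, ram_gb_per_desktop,
                   cores_per_host, ram_gb_per_host,
                   vcpus_per_core=8, failover_spares=1):
    """Estimate the Hyper-V hosts needed for a desktop pool.

    Sizes for whichever resource binds first (CPU or memory), then
    adds spare hosts so the pool survives a server failure (N+1 by
    default). All ratios here are illustrative assumptions.
    """
    cpu_hosts = math.ceil(desktops * vcpus_per_desktop
                          / (cores_per_host * vcpus_per_core))
    ram_hosts = math.ceil(desktops * ram_gb_per_desktop / ram_gb_per_host)
    return max(cpu_hosts, ram_hosts) + failover_spares
```

For example, 500 single-vCPU, 2 GB desktops on hypothetical 16-core, 128 GB servers at an assumed 8:1 vCPU-to-core ratio would be memory-bound at eight hosts, plus one spare for failover.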

Network

VSPEX allows flexibility in the design and implementation of the vendor's choice of network components. The infrastructure must conform to the following attributes:
Redundant network links for the hosts, switches, and storage
Support for link aggregation
Traffic isolation based on industry-accepted best practices

Backup

EMC Avamar delivers the protection and efficiency needed to accelerate the deployment of a VSPEX end-user computing solution. Avamar enables administrators to centrally back up and manage the policies and end-user computing infrastructure components, while allowing end users to efficiently recover their own files from a simple and intuitive web-based interface. Avamar moves only new, unique sub-file data segments, resulting in fast daily full backups. This can reduce backup times by up to 90 percent, required daily network bandwidth by up to 99 percent, and required backup storage by 10 to 30 times.

Storage

The EMC Next-Generation VNX storage series provides both file and block access with a broad feature set, making it an ideal choice for any end-user computing implementation. VNX storage includes the following components, sized for the stated reference architecture workload:
Host adapter ports (for block): provide host connectivity through the fabric to the array
Data Movers (for file): front-end appliances that provide file services to hosts (optional, if providing CIFS/SMB or NFS services)
Storage processors (SPs): the compute components of the storage array, used for all aspects of data moving into, out of, and between arrays
Disk drives: disk spindles and solid state drives (SSDs) that contain the host/application data, and their enclosures

Note: The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and I/O ports. It enables the CIFS (SMB) and NFS protocols on the VNX.
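The Avamar bandwidth figures quoted above follow from source-side deduplication arithmetic: only changed, globally unique segments cross the network. A toy model of that arithmetic, with a hypothetical daily change rate and dedup factor (neither is an Avamar-measured value):

```python
def daily_backup_gb(client_gb, daily_change_rate=0.01, dedup_factor=0.5):
    """GB actually sent for a daily 'full' backup when only new,
    unique sub-file segments move over the network.

    daily_change_rate: fraction of data modified per day (assumed).
    dedup_factor: fraction of changed segments that remain unique
    after global deduplication across clients (assumed).
    """
    return client_gb * daily_change_rate * dedup_factor
```

Under these assumptions, a 1,000 GB desktop pool sends only 5 GB per daily backup, a roughly 99.5 percent bandwidth reduction relative to a traditional full backup, consistent with the up-to-99-percent figure.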
The desktop solutions described in this document are based on the EMC VNX5400 and EMC VNX5600 storage arrays. The VNX5400 supports a maximum of 250 drives, and the VNX5600 can host up to 500 drives.

The EMC VNX series supports a wide range of business-class features that are ideal for the end-user computing environment, including:
EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP)
EMC FAST Cache
File-level data deduplication and compression
Block deduplication
Thin provisioning
Replication
Snapshots and checkpoints
File-level retention
Quota management

Features and enhancements

The EMC VNX flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments.

The next-generation VNX series includes many features and enhancements designed and built upon the first generation's success, including:
More capacity, with multicore optimization through Multicore Cache, Multicore RAID, and Multicore FAST Cache (MCx)
Greater efficiency, with a flash-optimized hybrid array
Better protection, by increasing application availability with active/active storage processors
Easier administration and deployment, with the new Unisphere Management Suite

VSPEX is built with the next-generation VNX to deliver even greater efficiency, performance, and scale than ever before.

Flash-optimized hybrid array

VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS.
A flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high-performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and boosts

the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance.

New data tends to be accessed more frequently than older data, so it is stored on flash drives to provide the best performance. As data ages and becomes less active, FAST VP automatically tiers the data from high-performance drives to high-capacity drives, based on customer-defined policies. This functionality has been enhanced to provide four times better efficiency with new FAST VP solid-state disks (SSDs) that are based on enterprise multi-level cell (eMLC) technology, lowering the cost per gigabyte. FAST Cache dynamically absorbs unpredicted spikes in system workloads. All VSPEX use cases benefit from the increased efficiency.

VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With VNX, customers can realize an even greater return on their investment. VNX also provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.

VNX Intel MCx code path optimization

The advent of flash technology has been a catalyst in significantly changing the requirements of midrange storage systems. EMC redesigned the midrange storage platform to efficiently use multicore CPUs and provide the highest-performing storage system at the lowest cost in the market. MCx distributes all VNX data services across all cores, as shown in Figure 1. The VNX series with MCx dramatically improves file performance for transactional applications like databases and virtual machines over network-attached storage (NAS).

Figure 1. Next-Generation VNX with multicore optimization
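The claim that a small flash percentage can serve a high percentage of IOPS rests on workload skew. The toy model below makes that explicit; the 80/20 split and the assumption that FAST VP has already promoted the hottest data to flash are illustrative, not measured VNX behavior.

```python
def iops_served_from_flash(total_iops, flash_capacity_fraction, skew=0.8):
    """IOPS landing on the flash tier under a simple skew model.

    Model: `skew` of all I/O targets the hottest (1 - skew) of
    capacity (an 80/20 split by default), and automated tiering
    keeps that hot data on flash. Both assumptions are illustrative.
    """
    hot_fraction = 1.0 - skew
    if flash_capacity_fraction >= hot_fraction:
        # the entire hot working set fits on the flash tier
        return total_iops * skew
    # flash holds only part of the hot set; scale proportionally
    return total_iops * skew * (flash_capacity_fraction / hot_fraction)
```

In this model, placing 20 percent of capacity on flash lets the flash tier absorb 80 percent of a 10,000 IOPS workload; halving the flash to 10 percent of capacity still serves 40 percent of the I/O.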

Multicore Cache

The cache is the most valuable asset in the storage subsystem; its efficient use is fundamental to the overall efficiency of the platform in handling variable and changing workloads. The cache engine has been modularized to take advantage of all the cores available in the system.

Multicore RAID

Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage, hard disk drives (HDDs) and SSDs. Greatly increased performance in VNX comes from the modularization of the back-end data management processing, which enables MCx to seamlessly scale across all processors.

VNX performance

VNX storage, enabled with the MCx architecture, is optimized for FLASH 1st and provides unprecedented overall performance. It optimizes the system for transaction performance (cost per IOPS) and bandwidth performance (cost per GB/s) with low latency, and provides optimal capacity efficiency (cost per GB). VNX provides the following performance improvements:
Up to four times more file transactions when compared with dual-controller arrays
Increased file performance for transactional applications (for example, Microsoft Exchange on VMware over NFS) by up to three times, with 60 percent better response time
Up to four times more Oracle and Microsoft SQL Server OLTP transactions
Up to six times more virtual machines

Active/active array storage processors

The new VNX architecture provides active/active array storage processors, as shown in Figure 2, which eliminate application timeouts during path failover because both paths actively serve I/O. Load balancing is also improved, and applications can achieve up to two times better performance. Active/active for block is ideal for applications that require the highest levels of availability and performance but do not require tiering or efficiency services such as compression, deduplication, or snapshots.
With this VNX release, VSPEX customers can use virtual Data Movers (VDMs) and VNX Replicator to perform automated, high-speed file-system migrations between systems. This process migrates all checkpoints and settings automatically and enables clients to continue operation during the migration.

Figure 2. Active/active processors increase performance, resiliency, and efficiency

Unisphere management

The latest Unisphere Management Suite extends Unisphere's easy-to-use interface to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 3, the suite also includes Unisphere Remote for centrally managing up to thousands of VNX and VNXe systems, with added support for XtremSW Cache.

Figure 3. Latest Unisphere Management Suite

Chapter 3: Solution Technology Overview

This chapter presents the following topics:
Solution technology
Summary of key components
Desktop virtualization
Virtualization
Compute
Network
Storage
Backup and recovery
ShareFile

Solution technology

This VSPEX solution uses EMC VNX5400 (for up to 1,000 virtual desktops) or VNX5600 (for up to 2,000 virtual desktops) storage arrays and Microsoft Hyper-V Server 2012 to provide the storage and compute resources for a Citrix XenDesktop 7 environment of Windows 7 virtual desktops, which are provisioned by Provisioning Services (PVS) or Machine Creation Services (MCS). Figure 4 shows the components of the solution.

Figure 4. Solution components

Planning and designing the storage infrastructure for Citrix XenDesktop is a critical step, because the shared storage must be able to absorb the large bursts of input/output (I/O) that occur in some use cases, such as when many desktops boot at the beginning of a workday or when required patches are applied. These large I/O bursts can lead to periods of erratic and unpredictable virtual desktop performance. If planning does not take these use cases into account, users can quickly become frustrated by unpredictable performance.

To provide predictable performance for an end-user computing environment, the storage must be able to handle peak I/O loads from clients while still providing fast response times. Typically, the design for this type of workload involves deploying several disks to handle brief periods of extreme I/O pressure, which can be expensive to implement. This solution uses EMC VNX FAST Cache, allowing for a reduction in the number of disks required.

EMC's next-generation backup enables protection of user data and end-user recoverability by using EMC Avamar and its desktop client within the desktop image.
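The disk-count reduction from FAST Cache can be sketched as back-of-envelope arithmetic: spindle counts are driven by the peak, so a cache that absorbs most of a burst shrinks the back end. The per-drive IOPS figure and cache hit rate below are hypothetical, not validated values from this solution.

```python
import math

def hdds_for_peak(peak_iops, hdd_iops=180, cache_hit_rate=0.0):
    """Spinning disks needed to absorb a peak such as a boot storm.

    cache_hit_rate models the share of the burst absorbed by the
    FAST Cache flash tier; 180 IOPS per SAS drive is an assumed
    rule-of-thumb figure.
    """
    backend_iops = peak_iops * (1.0 - cache_hit_rate)
    return math.ceil(backend_iops / hdd_iops)
```

For example, if 2,000 desktops each drove a hypothetical 8 IOPS during a boot storm (16,000 IOPS), sizing on disks alone would require 89 drives; if FAST Cache absorbed 80 percent of the burst, 18 drives would handle the remainder.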

Summary of key components

This section describes the key components of this solution.

Desktop virtualization

The desktop virtualization broker manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software enables on-demand creation of desktop images, allows maintenance of the image without affecting user productivity, and prevents the environment from growing in an unconstrained way.

Virtualization

The virtualization layer allows physical resources to be uncoupled from the applications that use them. This allows applications to use resources that are not directly tied to hardware, enabling many key features for end-user computing.

Compute

The compute layer provides memory and processing resources for the virtualization layer software and the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resources required and allows the customer to choose any compute hardware that meets the requirements.

Network

The network layer connects the users of the environment to the resources they need and connects the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture. It allows the customer to implement the requirements using any network hardware that meets these requirements.

Storage

The storage layer is a critical resource for the implementation of the end-user computing environment. Because of the way desktops are used, the storage layer must be able to absorb large bursts of transient activity without unduly affecting the user experience. This solution uses EMC VNX FAST Cache to handle this workload efficiently.
Backup and recovery: The optional backup and recovery component of the solution provides data protection in the event that the data in the primary system is deleted, damaged, or otherwise becomes unusable.

Security: Components from RSA provide customers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system.

Solution architecture provides details about the components that make up the reference architecture.

Desktop virtualization

Desktop virtualization encapsulates and delivers the user desktop to a remote client device, which can be a thin client, zero client, smartphone, or tablet. It allows subscribers in different locations to access virtual desktops hosted on centralized computing resources at remote data centers. In this solution, Citrix XenDesktop is used to provision, manage, broker, and monitor the desktop virtualization environment.

Citrix XenDesktop 7

Under the XenDesktop 7 architecture, management and delivery components are shared between XenDesktop and XenApp to give administrators a unified management experience. Figure 5 shows the XenDesktop 7 architecture components.

Figure 5. XenDesktop 7 architecture components

The XenDesktop 7 architecture components are described as follows:

Receiver: Installed on user devices, Citrix Receiver provides users with quick, secure, self-service access to documents, applications, and desktops from any of the user's devices, including smartphones, tablets, and PCs. Receiver provides on-demand access to Windows, Web, and Software-as-a-Service (SaaS) applications.

StoreFront: StoreFront authenticates users to sites hosting resources and manages stores of desktops and applications that users access.
Studio: Studio is the management console that enables you to configure and manage the deployment, eliminating the need for separate management consoles for managing delivery of applications and desktops. Studio provides various wizards to guide you through the process of setting up your

environment, creating your workloads to host applications and desktops, and assigning applications and desktops to users.

Delivery Controller: Installed on servers in the data center, the Delivery Controller consists of services that communicate with the hypervisor to distribute applications and desktops, authenticate and manage user access, and broker connections between users and their virtual desktops and applications. The controller manages the state of the desktops, starting and stopping them based on demand and administrative configuration. In some editions, the controller allows you to install Profile Management to manage user personalization settings in virtualized or physical Windows environments. Each site has one or more Delivery Controllers.

Virtual Delivery Agent (VDA): Installed on server or workstation operating systems, the VDA enables connections for desktops and applications. For Remote PC Access, install the VDA on the office PC.

Server OS machines: Server OS machines are virtual or physical machines based on Windows Server operating systems, used for delivering applications or hosted shared desktops (HSD) to users.

Desktop OS machines: Desktop OS machines are virtual or physical machines based on Windows desktop operating systems, used for delivering personalized desktops to users, or applications from desktop operating systems.

Remote PC Access: User devices that are included on a whitelist enable users to access resources on their office PCs remotely from any device running Citrix Receiver.

Machine Creation Services

Machine Creation Services (MCS) is a provisioning mechanism that is integrated with the XenDesktop management interface, Citrix Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management. MCS allows the management of several types of machines within a catalog in Citrix Studio.
Desktop customization is persistent for machines that use Personal vdisk, while machines without Personal vdisk are appropriate if desktop changes are to be discarded when the user logs off.

Desktops provisioned with MCS share a common base image within a catalog. Because of this, the base image is accessed frequently enough for EMC VNX FAST Cache to promote it to flash drives, providing optimal I/O response time with fewer physical disks.

Citrix Provisioning Services

Citrix Provisioning Services (PVS) takes a different approach from traditional desktop imaging solutions by fundamentally changing the relationship between the hardware and the software that runs on it. By streaming a single shared disk image (vdisk) instead of copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage. As the number of machines continues to grow, PVS provides the efficiency of centralized management with the benefits of distributed processing.

Because machines stream the disk data dynamically and in real time from a single shared image, consistency of the machine image is ensured. In addition, the configuration, applications, and even the OS of large pools of machines can change completely during a reboot operation.

In this solution, PVS provisions 500, 1,000, or 2,000 virtual desktops running Windows 7 or 8. The desktops are deployed from a single vdisk image.

Citrix Personal vdisk

The Citrix Personal vdisk (PvDisk or PvD) feature was introduced in Citrix XenDesktop 5.6. With Personal vdisk, users can preserve customization settings and user-installed applications in a pooled desktop. This capability is accomplished by redirecting the changes from the user's pooled virtual machine to a separate disk called the Personal vdisk. During runtime, the content of the Personal vdisk is blended with the content of the base virtual machine to provide a unified experience to the end user. The Personal vdisk data is preserved during restart and refresh operations.

Citrix Profile Management

Citrix Profile Management preserves user profiles and dynamically synchronizes them with a remote profile repository. Citrix Profile Management ensures that personal settings are applied to desktops and applications regardless of the user's login location or client device. The combination of Citrix Profile Management and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization.

Citrix Profile Management dynamically downloads a user's remote profile when the user logs in to a Citrix XenDesktop, and downloads user profile information only when the user needs it.
Virtualization

The virtualization layer is a key component of any end-user computing solution. It decouples application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the physical capability of the system to change without affecting the hosted applications.

Microsoft Hyper-V Server 2012

Microsoft Hyper-V Server 2012 is used to build the virtualization layer for this solution. Microsoft Hyper-V transforms a computer's physical resources by virtualizing the CPU, memory, storage, and network. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications, just like physical computers.

High-availability features of Microsoft Hyper-V, such as Live Migration and Storage Migration, enable seamless migration of virtual machines and stored files from one Hyper-V server to another with minimal or no performance impact.

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager is a centralized management platform for the Microsoft Hyper-V infrastructure. It provides administrators with a single interface, accessible from multiple devices, for all aspects of monitoring, managing, and maintaining the virtual infrastructure.

Hyper-V High Availability

The Microsoft Hyper-V Cluster High Availability feature allows the virtualization layer to automatically restart virtual machines in various failure conditions. If the physical hardware has an error, the impacted virtual machines can be restarted automatically on other servers in the cluster.

Note: For Microsoft Hyper-V Cluster High Availability to restart virtual machines on different hardware, those servers must have resources available. The Compute section provides specific recommendations to enable this functionality.

Microsoft Hyper-V Cluster allows you to configure policies that determine which machines are restarted automatically and under what conditions these operations are performed.

EMC Storage Integrator for Windows

EMC Storage Integrator (ESI) 3.0 for Windows is a management interface that provides the ability to view and provision block and file storage for Windows environments. ESI simplifies the steps involved in creating and provisioning storage to Hyper-V servers as a local disk or a mapped share. ESI also supports storage discovery and provisioning through PowerShell. The ESI for Windows product guides, available on EMC Online Support, provide more information.

Compute

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and other factors.
For these reasons, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a given number of servers with a specific set of requirements, VSPEX documents the requirements for the number of processor cores and the amount of RAM. The solution can be implemented with 2 servers or 20 and still be considered the same VSPEX solution.

For example, assume that the compute layer requirements for a given implementation are 25 processor cores and 200 GB of RAM. One customer wants to use white-box servers containing 16 processor cores and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM.
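The sizing arithmetic behind this example can be sketched as follows. The server configurations are the illustrative ones from the example above; the helper function is not part of any VSPEX tool, just a demonstration of the math:

```python
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_gb_per_server):
    """Return the server count that satisfies both the core and the RAM
    requirement; the tighter of the two constraints determines the count."""
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_gb_per_server)
    return max(by_cores, by_ram)

# Example requirement: 25 processor cores and 200 GB of RAM.
white_box = servers_needed(25, 200, cores_per_server=16, ram_gb_per_server=64)
high_end = servers_needed(25, 200, cores_per_server=20, ram_gb_per_server=144)

print(white_box)  # 4 servers (RAM is the limiting resource)
print(high_end)   # 2 servers
```

Per the high-availability note that follows, each customer would then add one more server of sufficient capacity as a failover platform.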

In this example, the first customer needs four servers while the second customer needs two, as shown in Figure 6.

Figure 6. Compute layer flexibility

Note: To enable high availability at the compute layer, each customer needs one additional server with sufficient capacity to provide a failover platform in the event of a hardware outage.

In the compute layer, observe the following best practices:

Use identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that might require similar instruction sets on the underlying physical hardware. Implementing VSPEX on identical server units minimizes compatibility problems in this area.

If you are implementing hypervisor-level high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implement the high-availability features available in the virtualization layer to ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This allows you to implement minimal-downtime upgrades and to tolerate single-unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX is flexible enough to meet your specific needs. The key constraint is the provision of sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. An example of this kind of highly available network topology is depicted in Figure 7.

Note: The example is for IP-based networks, but the same underlying principles regarding multiple connections and elimination of single points of failure also apply to Fibre Channel-based networks.

Figure 7. Example of highly available network design

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types, improving throughput, manageability, application separation, high availability, and security.

EMC unified storage platforms provide network high availability and redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If one port loses its link, traffic fails over to another port, and all network traffic is distributed across the active links.

Storage

The storage layer is a key component of any cloud infrastructure solution, providing storage efficiency, management flexibility, and reduced total cost of ownership. This VSPEX solution uses the EMC VNX series to provide virtualization at the storage layer.

EMC VNX Snapshots

VNX Snapshots is a software feature that creates point-in-time data copies. VNX Snapshots can be used for data backups, software development and testing, repurposing, data validation, and local rapid restores. VNX Snapshots improves on the existing EMC VNX SnapView snapshot functionality by integrating with storage pools.

Note: LUNs created on physical RAID groups, also called RAID LUNs, support only SnapView snapshots. This limitation exists because VNX Snapshots requires pool space as part of its technology.

VNX Snapshots supports 256 writeable snapshots per pool LUN. It supports branching, also called "snap of a snap," as long as the total number of snapshots for any primary LUN is less than 256, which is a hard limit. VNX Snapshots uses redirect-on-write (ROW) technology: new writes destined for the primary LUN are redirected to a new location in the storage pool.
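As a toy illustration of redirect-on-write, and of the copy-on-first-write approach described next, consider this sketch. It is a simplified model for intuition only, not a representation of VNX internals:

```python
class RowLun:
    """Redirect-on-write: a write after a snapshot lands in a new pool
    location; the snapshot keeps pointing at the untouched original block."""
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # logical address -> data
        self.snapshots = []

    def snap(self):
        # A snapshot is just a frozen copy of the current block mapping.
        self.snapshots.append(dict(self.blocks))

    def write(self, addr, data):
        # The primary LUN's mapping is redirected to the new data;
        # no copy of the old data is needed.
        self.blocks[addr] = data

class CofwLun:
    """Copy-on-first-write: the first write to a block after a snapshot
    first copies the original data aside, then overwrites in place."""
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))
        self.saved = {}      # stand-in for the reserved LUN pool
        self.snapped = False

    def snap(self):
        self.snapped = True
        self.saved = {}

    def write(self, addr, data):
        if self.snapped and addr not in self.saved:
            self.saved[addr] = self.blocks[addr]  # extra copy I/O here
        self.blocks[addr] = data

    def read_snapshot(self, addr):
        return self.saved.get(addr, self.blocks[addr])

row = RowLun(["base0", "base1"])
row.snap()
row.write(0, "new0")
print(row.snapshots[0][0], row.blocks[0])  # base0 new0
```

The practical difference is that CoFW pays a copy on the first write to each snapped block, while ROW only updates a mapping, which is why ROW requires pool space.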
Such an implementation is different from the copy-on-first-write (CoFW) technology used in SnapView, which holds writes to the primary LUN until the original data has been copied to the reserved LUN pool to preserve a snapshot.

This release also supports consistency groups (CGs). Several pool LUNs can be combined into a CG and snapped concurrently. When a snapshot of a CG is initiated, all writes to the member LUNs are held until the snapshots have been created. Typically, CGs are used for LUNs that belong to the same application.

EMC VNX SnapSure

EMC VNX SnapSure is an EMC VNX file software feature that enables you to create and manage checkpoints: point-in-time, logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks; when a block within the PFS is modified, a copy containing the block's original contents is saved to a separate volume called the SavVol.

Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks from the SavVol and the unchanged blocks remaining in the PFS according to a bitmap and block map data-tracking structure. Together, these blocks provide a complete point-in-time image called a checkpoint.

A checkpoint reflects the state of a PFS at the time the checkpoint was created. SnapSure supports these types of checkpoints:

Read-only checkpoints: read-only file systems created from a PFS

Writeable checkpoints: read/write file systems created from a read-only checkpoint

SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.

Note: Each writeable checkpoint is associated with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint. Using VNX SnapSure provides more detailed information.

EMC VNX Virtual Provisioning

EMC VNX Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes storage utilization by allocating storage only as it is needed. Thick LUNs provide predictable high performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning. Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression.
Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and the User Capacity Threshold setting. Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNX systems can rebalance allocated data elements across all member drives to use new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. You can monitor the progress of a rebalance operation from the General tab of the Pool Properties window in Unisphere, as shown in Figure 8.

Figure 8. Storage pool rebalance progress

LUN expansion

Use pool LUN expansion to increase the capacity of existing LUNs, provisioning larger capacity as business needs grow. The VNX family can expand a pool LUN without disrupting user access. You can expand a pool LUN with a few simple clicks, and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data-protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded.

LUN shrink

Use LUN shrink to reduce the capacity of existing thin LUNs. VNX can shrink a pool LUN; this capability is available only for LUNs served to Windows Server 2008 and later. The shrinking process involves these steps:

1. Shrink the file system from Windows Disk Management.

2. Shrink the pool LUN using a command window and the DISKRAID utility. The DISKRAID utility is available through the VDS Provider, which is part of the EMC Solutions Enabler package.

The new LUN size appears as soon as the shrink process is complete. A background task reclaims the deleted or shrunk space and returns it to the storage pool. Once the task is complete, any other LUN in that pool can use the reclaimed space.

For more detailed information on LUN expansion and shrinking, refer to the EMC VNX Virtual Provisioning Applied Technology White Paper.

Alerting the user through the Capacity Threshold setting

You must configure proactive alerts when using file systems or storage pools based on thin pools, and monitor these resources so that storage is available for provisioning when needed and capacity shortages are avoided. Figure 9 demonstrates why provisioning with thin pools requires monitoring.

Figure 9. Thin LUN space utilization

Monitor the following values for thin pool utilization:

Total capacity is the total physical capacity available to all LUNs in the pool.

Total allocation is the total physical capacity currently assigned to all pool LUNs.

Subscribed capacity is the total host-reported capacity supported by the pool.

Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in the pool.

Total allocation must never exceed the total capacity, but if it nears that point, add storage to the pool proactively before reaching a hard limit.
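These four values, and the derived over-subscription figure, can be expressed as a small sketch. The capacity numbers in the example are illustrative assumptions, not sizing guidance:

```python
def pool_metrics(total_capacity_gb, lun_allocations_gb, lun_subscribed_gb):
    """Compute the thin-pool utilization values described above.
    All figures are in GB; the LUN inputs are per-LUN lists."""
    total_allocation = sum(lun_allocations_gb)       # physically assigned
    subscribed = sum(lun_subscribed_gb)              # host-reported capacity
    oversubscribed = max(0, subscribed - total_capacity_gb)
    percent_full = 100 * total_allocation / total_capacity_gb
    return {
        "total_capacity": total_capacity_gb,
        "total_allocation": total_allocation,
        "subscribed_capacity": subscribed,
        "oversubscribed_capacity": oversubscribed,
        "percent_full": percent_full,
    }

# A 1,000 GB pool with three thin LUNs, each presenting 500 GB to its
# host but only partially allocated so far.
m = pool_metrics(1000, [200, 150, 100], [500, 500, 500])
print(m["percent_full"])             # 45.0
print(m["oversubscribed_capacity"])  # 500 (host-visible GB with no
                                     # physical backing yet)
```

When total allocation approaches total capacity, the oversubscribed figure tells you how much more physical storage could still be demanded by hosts.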

Figure 10 shows the Storage Pool Properties dialog box in Unisphere, which displays parameters such as Free, Percent Full, Total Allocation, and Total Subscription of physical capacity, and Percent Subscribed and Oversubscribed By of virtual capacity.

Figure 10. Examining storage pool space utilization

When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation, monitor pool utilization so you are alerted when thresholds are reached; set the Percentage Full Threshold to allow enough buffer to correct the situation before an outage occurs. Edit this setting by clicking Advanced in the Storage Pool Properties dialog box, as shown in Figure 11. This alert is active only if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool contains only thick LUNs, the alert is not active, because there is no risk of running out of space due to oversubscription. You can also specify the value for Percent Full Threshold, which equals Total Allocation/Total Capacity, when a pool is created.

Figure 11. Defining storage pool utilization thresholds

View alerts by clicking Alert in Unisphere. Figure 12 shows the Unisphere Event Monitor Wizard, where you can also select the option of receiving alerts through email, a paging service, or an SNMP trap.

Figure 12. Defining automated notifications for block

Table 1 lists the thresholds and their settings.

Table 1. Thresholds and settings under VNX OE Block Release 33

Threshold type | Threshold range | Threshold default | Alert severity | Side effect
User settable  | 1%-84%          | 70%               | Warning        | None
Built-in       | N/A             | 85%               | Critical       | Clears user-settable alert

Allowing total allocation to exceed 90 percent of total capacity puts you at risk of running out of space and affecting all applications that use thin LUNs in the pool.

VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables the use of flash drives as an expanded cache layer for the array. FAST Cache is an array-wide, non-disruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments, and subsequent reads and writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives, dramatically improving the response times for the active data and reducing data hot spots that can occur within the LUN.

VNX FAST VP (optional)

VNX FAST VP, a part of the VNX FAST Suite, enables you to automatically tier data across multiple types of drives to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is done as part of a regularly scheduled maintenance operation.

VNX file shares

In many environments, it is important to have a common location to store files accessed by many different individuals. This is implemented as CIFS or NFS file shares from a file server.
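The alert behavior in Table 1 can be sketched as a small helper. This is a simplified model of the two thresholds, assuming the default user-settable value of 70%:

```python
def pool_alerts(percent_full, user_threshold=70):
    """Return the alert severities raised for a pool at the given
    Percent Full value, per the two thresholds in Table 1."""
    if not 1 <= user_threshold <= 84:
        raise ValueError("user-settable threshold must be 1%-84%")
    alerts = []
    if percent_full >= 85:
        # The built-in critical alert clears the user-settable alert,
        # so only "Critical" is reported above 85%.
        alerts.append("Critical")
    elif percent_full >= user_threshold:
        alerts.append("Warning")
    return alerts

print(pool_alerts(72))  # ['Warning']
print(pool_alerts(90))  # ['Critical']
print(pool_alerts(50))  # []
```

Keeping the user-settable threshold well below 85% leaves time to add capacity before the built-in critical alert fires.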
VNX storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency-improvement features. Configuring and Managing CIFS on VNX provides more information.

ROBO

Organizations with remote offices and branch offices (ROBO) often prefer to locate data and applications close to the users in order to provide better performance and lower latency. In these environments, IT departments must balance the benefits of local support with the need to maintain central control. Local systems and storage should be easy for local personnel to administer, but should also support remote management and flexible aggregation tools that minimize the demands on those local resources.

With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices. Customers can also use Unisphere Remote to consolidate the monitoring, system alerts, and reporting of hundreds of locations while maintaining simplicity of operation and unified storage functionality for local managers.

Backup and recovery

Backup and recovery provides data protection by backing up data files or volumes according to defined schedules, and restoring data from backup if recovery is needed after a disaster. In this VSPEX solution, EMC Avamar provides backup and recovery for the stack, supporting up to 2,000 virtual machines.

EMC Avamar

EMC Avamar provides methods to back up virtual desktops using either image-level or guest-based operations. Avamar runs the deduplication engine at the virtual machine disk (VHDX) level for image backups, and at the file level for guest-based backups.

Image-level protection enables backup clients to make a copy of all the virtual disks and configuration files associated with a particular virtual desktop in the event of hardware failure, corruption, or accidental deletion of the virtual desktop. Avamar significantly reduces backup and recovery times for virtual desktops by using change block tracking (CBT) on both backup and recovery.

Guest-based protection runs like a traditional backup solution. Guest-based backup can be used on any virtual machine running an operating system for which an Avamar backup client is available. It enables detailed control over the content, with inclusion and exclusion patterns, which can be used to prevent data loss due to user errors such as accidental file deletion. Installing the desktop/laptop agent on the system to be protected enables self-service recoverability of user data.

ShareFile

ShareFile is a cloud-based file-sharing and storage service built for enterprise-class storage and security. ShareFile enables users to securely share documents with other users. ShareFile users include employees and users who are outside of the enterprise directory (referred to as clients).

ShareFile StorageZones

ShareFile StorageZones allow businesses to share files across the company while meeting compliance and regulatory concerns.
StorageZones allow customers to keep their data on storage systems that are onsite, enable sharing of large files with full encryption, and provide the ability to synchronize files across multiple devices. By keeping data onsite and closer to users than data residing on the public cloud, StorageZones can provide improved performance as well as improved security.

ShareFile StorageZones allow you to:

- Use StorageZones with, or instead of, ShareFile-managed cloud storage.
- Configure Citrix CloudGateway Enterprise to integrate ShareFile services with Citrix Receiver for user authentication and user provisioning.
- Take advantage of automated reconciliation between the ShareFile cloud and a company's StorageZone deployment.
- Enable automated antivirus scans of uploaded files.

- Enable file recovery from the Storage Center backup (the server component of a StorageZone is called Storage Center). You can browse the file records for a particular date and time and tag any files and folders to restore from the Storage Center backup.

ShareFile StorageZone architecture

Figure 13 shows the ShareFile high-level architecture.

Figure 13. ShareFile high-level architecture

ShareFile consists of three components:

- Client: accesses the ShareFile service through one of the native tools, such as a browser or Citrix Receiver, or directly through the application programming interface (API).
- Control Plane: performs functions such as storing files, folders, and account information, access control, reporting, and various other brokering functions. The Control Plane resides in multiple Citrix data centers located worldwide.
- StorageZone: defines the locations where data is stored. The server component of a StorageZone is called Storage Center. High availability requires at least two Storage Centers per StorageZone. A StorageZone must use a single file share for all of its Storage Centers.

ShareFile Storage Center extends the ShareFile Software-as-a-Service (SaaS) cloud storage by providing the ShareFile account with on-premises private storage, referred to as a StorageZone. ShareFile on-premises storage differs from cloud storage as follows:

- ShareFile-managed cloud storage is a public multi-tenant storage system maintained by Citrix.
- A ShareFile Storage Center is a private single-tenant storage system maintained by the customer that can be used only by approved customer accounts.

By default, ShareFile stores data in the secure ShareFile-managed cloud storage. The ShareFile Storage Center feature enables you to configure a private, onsite

StorageZone. A StorageZone defines the locations where data is stored and enables performance optimization by locating data storage close to users.

Determine the number of StorageZones and their locations based on the organization's performance and compliance requirements. In general, assigning users to the StorageZone that is geographically closest to them is the best practice for optimizing performance.

Storage Center is a web service that handles all HTTPS operations from end users and the ShareFile control subsystem. The control subsystem handles all operations not related to file contents, such as authentication, authorization, file browsing, configuration, metadata, sending and requesting files, and load balancing. The control subsystem also performs Storage Center health checks and prevents requests from being sent to offline servers. The ShareFile control subsystem is maintained in Citrix Online data centers.

The ShareFile storage subsystem handles operations related to file contents, such as uploads, downloads, and antivirus verification. When you create a StorageZone, you create a private storage subsystem for your ShareFile data.

For a production deployment of ShareFile, the recommended best practice is to use at least two servers with Storage Center installed for high availability. When you install Storage Center, you create a StorageZone. You can then install Storage Center on another server and join it to the same StorageZone. Storage Centers that belong to the same StorageZone must use the same file share for storage.
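The deployment rules above (at least two Storage Centers per StorageZone for high availability, and one common file share for all of them) can be expressed as a small pre-deployment check. This is only an illustrative sketch: the `StorageCenter` class and `validate_storagezone` function are invented for this example and are not part of any Citrix API.

```python
from dataclasses import dataclass

@dataclass
class StorageCenter:
    name: str
    file_share: str  # UNC path of the CIFS share backing this Storage Center

def validate_storagezone(centers: list[StorageCenter]) -> list[str]:
    """Return a list of deployment warnings for one StorageZone."""
    warnings = []
    # High availability requires at least two Storage Centers per zone.
    if len(centers) < 2:
        warnings.append("High availability requires at least two Storage Centers.")
    # All Storage Centers in a StorageZone must share one file share.
    shares = {c.file_share.lower() for c in centers}
    if len(shares) > 1:
        warnings.append("All Storage Centers in a StorageZone must use the same file share.")
    return warnings

zone = [StorageCenter("sc1", r"\\vnx\sharefile"),
        StorageCenter("sc2", r"\\vnx\sharefile")]
print(validate_storagezone(zone))  # → []
```

A configuration with a single Storage Center, or with Storage Centers pointing at different shares, would return the corresponding warning strings instead of an empty list.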

Using ShareFile StorageZone with VSPEX architectures

Figure 14 illustrates the VSPEX end-user computing for Citrix XenDesktop environment with the infrastructure added to support ShareFile StorageZone with Storage Center. Server capacity is specified in generic terms for the required minimums of CPU and memory; the customer is free to select server and networking hardware that meets or exceeds the stated minimums. The recommended storage delivers a highly available architecture for the ShareFile StorageZone deployment. The diagram shows the Hyper-V Server 2012 clusters for virtual desktops, for infrastructure virtual machines (Active Directory/DNS/DHCP, XenDesktop 7 controllers, SQL Server, and Virtual Machine Manager), and for the ShareFile StorageZone virtual machines (Storage Center #1 and #2), connected through 10 Gb and 1 Gb Ethernet networks to a VNX5400 array.

Figure 14. Logical architecture: VSPEX end-user computing for Citrix XenDesktop with ShareFile StorageZone

Server

A high-availability production environment requires a minimum of two servers (virtual machines) with Storage Center installed. Table 2 summarizes the CPU and memory requirements to implement ShareFile StorageZone with Storage Center.

Table 2. Minimum hardware resources to support ShareFile StorageZone with Storage Center

Component       CPU (cores)   Memory (GB)   Reference
Storage Center  2             4             Storage Center system requirements on Citrix eDocs

Network

Provide sufficient network ports to support the two additional Storage Center servers. You can implement the networking components using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Storage

ShareFile StorageZone requires a CIFS share to provide private data storage for Storage Center. The EMC VNX storage family provides both file and block access with a broad feature set, making it an ideal choice for a ShareFile StorageZone storage implementation.

The EMC VNX series supports a wide range of business-class features ideal for ShareFile StorageZone storage, including:
- Fully Automated Storage Tiering for Virtual Pools (FAST VP)
- FAST Cache
- Data compression and file deduplication
- Thin provisioning
- Replication
- Checkpoints
- File-level retention
- Quota management

Table 3 lists the recommended EMC VNX storage for the ShareFile StorageZone CIFS share.

Table 3. Recommended EMC VNX storage needed for ShareFile StorageZone CIFS share

Storage     Configuration
CIFS share  For 500 users: 2 x Data Movers (active/standby, CIFS variant only); 8 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
            For 1,000 users: 2 x Data Movers (active/standby, CIFS variant only); 16 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
            For 2,000 users: 2 x Data Movers (active/standby, CIFS variant only); 24 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks

Note: The configuration assumes that each user utilizes 10 GB of private storage space.
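The disk counts in Table 3 can be turned into a quick capacity sanity check, using the table's assumption of 10 GB of private storage per user. This is only a sketch; the function name and return structure are illustrative, and the raw capacity shown ignores RAID and file-system overhead.

```python
# Recommended NL-SAS disk counts from Table 3 (2 TB, 7,200 rpm, 3.5-inch),
# assuming 10 GB of private storage per user as the table does.
DISKS_BY_USERS = {500: 8, 1000: 16, 2000: 24}

def cifs_share_sizing(users: int) -> dict:
    if users not in DISKS_BY_USERS:
        raise ValueError("validated scale points are 500, 1,000, and 2,000 users")
    disks = DISKS_BY_USERS[users]
    return {
        "disks": disks,
        "raw_tb": disks * 2,                 # raw capacity before RAID/FS overhead
        "user_data_tb": users * 10 / 1024,   # aggregate private storage at 10 GB/user
    }

print(cifs_share_sizing(1000))
# → {'disks': 16, 'raw_tb': 32, 'user_data_tb': 9.765625}
```

The gap between raw capacity and aggregate user data at each scale point leaves headroom for protection overhead and growth.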

Chapter 4 Solution Overview

This chapter presents the following topics:
- Solution overview
- Solution architecture
- Server configuration guidelines
- Network configuration guidelines
- Storage configuration guidelines
- High availability and failover
- Validation test profile
- Backup environment configuration guidelines
- Sizing guidelines
- Reference workload
- Applying the reference workload
- Implementing the reference architectures
- Quick assessment

Solution overview

This chapter provides a comprehensive guide to the major aspects of this solution. Server capacity is specified in generic terms for the required minimums of CPU, memory, and network interfaces. You can select server and networking hardware that meets or exceeds the stated minimums. EMC has validated the specified storage architecture, along with a system meeting the server and network requirements outlined, to provide high levels of performance while delivering a highly available architecture for your end-user computing deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual desktops and has been validated by EMC. In practice, each virtual desktop type has its own set of requirements that rarely fit a predefined idea of what a virtual desktop should be. In any discussion of end-user computing, a reference workload should first be defined. Not all desktops perform the same tasks, and building a reference that takes into account every possible combination of workload characteristics is impractical.

Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual desktop in an existing environment might not be equal to one virtual desktop in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. Applying the reference workload provides a detailed description.

Solution architecture

We validated the VSPEX end-user computing solution with EMC VNX at three different points of scale. These defined configurations, which are defined in terms of the reference workload, form the basis for creating a custom solution.

Logical architecture

The architecture diagrams in this section show the layout of the major components in the solutions for the two storage variants: SMB and FC.
Note: In this guide, "we" refers to the EMC Solutions engineering team that validated the solution.

Figure 15 depicts the logical architecture of the SMB variant, in which 10 GbE carries all network traffic. The diagram shows the Hyper-V Server 2012 clusters for virtual desktops and for infrastructure virtual servers (Active Directory/DNS/DHCP, XenDesktop 7 controllers, PVS servers, SQL Server, and Virtual Machine Manager) connected through a 10 Gb Ethernet network to the EMC VNX series array.

Figure 15. Logical architecture for SMB variant

Note: You can implement the networking components of the solution using 1 Gb/s or 10 Gb/s IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Figure 16 depicts the logical architecture of the FC variant, in which an FC SAN carries storage traffic and 10 GbE carries management and application traffic. The component layout matches the SMB variant, with the addition of a Fibre Channel storage network between the Hyper-V clusters and the EMC VNX series array.

Figure 16. Logical architecture for FC variant

Note: You can implement the networking components of the solution using 1 Gb/s or 10 Gb/s IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Key components

Citrix XenDesktop 7 delivery controller

We used two Citrix XenDesktop controllers to provide redundant virtual desktop delivery, authenticate users, manage the assembly of users' virtual desktop environments, and broker connections between users and their virtual desktops. In this reference architecture, the controllers are installed on Windows Server 2012 and hosted as virtual machines on Hyper-V Server 2012.

Citrix Provisioning Services server

We used two Citrix Provisioning Services (PVS) servers to provide redundant streaming services that stream desktop images from vDisks, as needed, to target devices. In this reference architecture, the vDisks are stored on a CIFS share hosted by the VNX storage system.

Virtual desktops

We provisioned virtual desktops running Windows 7 or Windows 8 using MCS and PVS.

Microsoft Hyper-V Server 2012

Microsoft Hyper-V provides a common virtualization layer to host a server environment. Table 13 on page 93 lists the specific characteristics of the validated environment. Hyper-V Server 2012 provides a highly available infrastructure through features such as the following:
- Live Migration: Provides live migration of virtual machines within clustered and non-clustered servers with no virtual machine downtime or service disruption
- Storage Live Migration: Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption

Microsoft System Center Virtual Machine Manager 2012 SP1

Microsoft System Center Virtual Machine Manager provides a scalable and extensible platform that forms the foundation of virtualization management for the Microsoft Hyper-V cluster. System Center Virtual Machine Manager manages all Hyper-V hosts and their virtual machines.

SQL Server

System Center Virtual Machine Manager and the XenDesktop controllers require a database service to store configuration and monitoring details. A Microsoft SQL Server 2012 instance running on Windows Server 2012 is used for this purpose.

Active Directory server

Active Directory (AD) services are required for the various solution components to function properly. The Microsoft AD Directory Service running on a Windows Server 2012 server is used for this purpose.

DHCP server

The DHCP server centrally manages the IP address scheme for the virtual desktops. This service is hosted on the same virtual machine as the domain controller and DNS server. The Microsoft DHCP Service running on a Windows Server 2012 server is used for this purpose.

DNS server

DNS services are required for the various solution components to perform name resolution.
The Microsoft DNS Service running on a Windows Server 2012 server is used for this purpose.

EMC SMI-S Provider for Microsoft System Center Virtual Machine Manager 2012 SP1

EMC SMI-S Provider for Microsoft System Center Virtual Machine Manager is a plug-in to System Center Virtual Machine Manager that provides storage management for EMC arrays directly from the client. EMC SMI-S Provider helps provide a unified management interface.

IP/storage networks

All network traffic is carried by a standard Ethernet network with redundant cabling and switching. User and management traffic is carried over a shared network, while SMB storage traffic is carried over a private, non-routable subnet.

IP network

The Ethernet network infrastructure provides IP connectivity between virtual desktops, Hyper-V clusters, and VNX storage. For the SMB variant, the IP infrastructure allows Hyper-V servers to access CIFS shares on the VNX, and desktop streaming from the PVS servers, with high bandwidth and low latency. It also allows desktop users to redirect their user profiles and home directories to the centrally maintained CIFS shares on the VNX.

Fibre Channel (FC) network

For the FC variant, storage traffic between all Hyper-V hosts and the VNX storage system is carried over an FC network. All other traffic is carried over the IP network.

EMC VNX5400 array

A VNX5400 array provides storage by presenting SMB/FC storage to Hyper-V hosts for up to 1,000 virtual desktops.

EMC VNX5600 array

A VNX5600 array provides storage by presenting SMB/FC storage to Hyper-V hosts for up to 2,000 virtual desktops.

VNX family storage arrays include the following components:
- Storage processors (SPs) support block data with UltraFlex I/O technology that supports the Fibre Channel, iSCSI, and Fibre Channel over Ethernet (FCoE) protocols. The SPs provide access for all external hosts and for the file side of the VNX array.
- The disk-processor enclosure (DPE) is 3U in size and houses each storage processor as well as the first tray of disks. This form factor is used in the VNX5300 and VNX5500.
- X-Blades (or Data Movers) access data from the back end and provide host access using the same UltraFlex I/O technology, which supports the NFS, CIFS, MPFS, and pNFS protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists.
- The Data Mover enclosure (DME) is 2U in size and houses the Data Movers (X-Blades). The DME is similar in form to the storage-processor enclosure (SPE) and is used on all VNX models that support file protocols.
- Standby power supplies are 1U in size and provide enough power to each storage processor to ensure that any data in flight is de-staged to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes are reconciled and persisted.
- Control Stations are 1U in size and provide management functions to the file-side components, referred to as X-Blades. The Control Station is responsible for

X-Blade failover. The Control Station can optionally be configured with a matching secondary Control Station to ensure redundancy on the VNX array.
- Disk-array enclosures (DAEs) house the drives used in the array.

EMC Avamar

Avamar software provides the platform for the protection of virtual machines. This protection strategy uses persistent virtual desktops. It also enables image protection and end-user recoveries.

Hardware resources

Table 4 lists the hardware used in this solution.

Table 4. Solution hardware

Servers for virtual desktops (total server capacity required to host virtual desktops):

CPU:
- Desktop OS: 1 vCPU per desktop (8 desktops per core)
  63 cores across all servers for 500 virtual desktops
  125 cores across all servers for 1,000 virtual desktops
  250 cores across all servers for 2,000 virtual desktops
- Server OS: 0.2 vCPU per desktop (5 desktops per core)
  100 cores across all servers for 500 virtual desktops
  200 cores across all servers for 1,000 virtual desktops
  400 cores across all servers for 2,000 virtual desktops

Memory:
- Desktop OS: 2 GB RAM per desktop
  1 TB RAM across all servers for 500 virtual desktops
  2 TB RAM across all servers for 1,000 virtual desktops
  4 TB RAM across all servers for 2,000 virtual desktops
- Server OS: 0.6 GB RAM per desktop
  300 GB RAM across all servers for 500 virtual desktops
  600 GB RAM across all servers for 1,000 virtual desktops
  1.2 TB RAM across all servers for 2,000 virtual desktops

Network:
- 6 x 1 GbE NICs per standalone server for 500 virtual desktops
- 3 x 10 GbE NICs per blade chassis or 6 x 1 GbE NICs per standalone server for 1,000/2,000 virtual desktops

Network infrastructure:

Minimum switching capability for the SMB variant (redundant LAN configuration):
- Two physical switches
- 6 x 1 GbE ports per Hyper-V server or 3 x 10 GbE ports per blade chassis
- 1 x 1 GbE port per Control Station for management
- 2 x 10 GbE ports per Data Mover for data

Minimum switching capability for the FC variant (redundant LAN/SAN configuration):
- 2 x 1 GbE ports per Hyper-V server
- 4 x 4/8 Gb FC ports for the VNX back end
- 2 x 4/8 Gb FC ports per Hyper-V server

Common:
- 2 x 10 GbE interfaces per Data Mover
- 2 x 8 Gb FC ports per storage processor (FC variant only)

52 Chapter 4: Solution Overview Hardware Configuration Notes For 500 virtual desktops: 2 Data Movers (active/standby SMB variant only) 600 GB 15 k rpm 3.5-inch SAS disks VNX shared storage for virtual desktops Drive count PvD Non-PvD HSD PVS MCS x 100 GB 3.5-inch flash drives For 1,000 virtual desktops: 2 Data Movers (active/standby SMB variant only) 600 GB 15 k rpm 3.5-inch SAS disks Drive count PvD Non-PvD HSD PVS MCS x 100 GB 3.5-inch flash drives For 2,000 virtual desktops: 2 Data Movers ( active/standby SMB variant only) 600 GB 15 k rpm 3.5-inch SAS disks Drive count PvD Non-PvD HSD PVS MCS x 100 GB, 3.5-inch flash drives For 500 virtual desktops: 16 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks For 1,000 virtual desktops: 24 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks For 2,000 virtual desktops: 48 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks Optional for user data 52 EMC VSPEX End-User Computing

Shared infrastructure (optional for infrastructure storage):
- For 500 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks
- For 1,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks
- For 2,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks

EMC next-generation backup:
- Avamar: 1 x Gen4 utility node; 1 x Gen4 3.9 TB spare node; 3 x Gen4 3.9 TB storage nodes

Servers for customer infrastructure:

In most cases, a customer environment will already have infrastructure services such as Active Directory and DNS configured. The setup of these services is beyond the scope of this document. Services can be migrated into VSPEX post-deployment, but must exist before VSPEX can be deployed. Servers and the roles they fulfill might already exist in the customer environment. If this solution is implemented with no existing infrastructure, the following minimum number of additional servers is required:
- 2 x physical servers
- 20 GB RAM per server
- 4 x processor cores per server
- 2 x 1 GbE ports per server

Software resources

Table 5 lists the software used in this solution.

Table 5. Solution software

VNX5400 or VNX5600 (shared storage, file systems):
- VNX Operating Environment (OE) for file
- VNX OE for block: Release 33
- ESI for Windows: Version 3.0

XenDesktop desktop virtualization:
- Citrix XenDesktop Controller: Version 7, Platinum Edition
- Operating system for XenDesktop Controller: Windows Server 2012 Standard Edition
- Microsoft SQL Server: Version 2012, Standard Edition

Next-generation backup:
- Avamar: 7.0

Microsoft Hyper-V:
- Hyper-V Server: Hyper-V Server 2012
- System Center Virtual Machine Manager: 2012 SP1
- Operating system for System Center Virtual Machine Manager: Windows Server 2012 Standard Edition
- PowerPath (FC variant only): 5.7

Virtual desktops (Note: Other than the base OS, this software was used for solution validation and is not required):
- Base operating system: Microsoft Windows 7 Enterprise (32-bit) SP1; Windows Server 2008 R2 SP1 Standard Edition
- Microsoft Office: Office Enterprise 2007 SP3
- Internet Explorer
- Adobe Reader: 9.1
- Adobe Flash Player
- Bullzip PDF Printer
- FreeMind

Sizing for validated configuration

When selecting servers for this solution, ensure that the processor core meets or exceeds the performance of the Intel Nehalem family at 2.66 GHz. As servers with greater processor speeds, performance, and higher core density become available, you can consolidate servers as long as the required total core and memory counts are met and a sufficient number of servers are incorporated to support the necessary level of high availability.

As with servers, you can also consolidate network interface card (NIC) speed and quantity, as long as you maintain the overall bandwidth requirements for this solution and sufficient redundancy to support high availability.

Table 6 shows the configurations of the servers that support this solution. Each server has two sockets of four cores and 128 GB of RAM, plus two 10 GbE ports for each blade chassis.

Table 6. Configurations that support this solution

Desktop type  No. of servers  No. of virtual desktops  Total cores  Total RAM
Desktop OS    8               500                      63           1 TB
              16              1,000                    125          2 TB
              32              2,000                    250          4 TB
Server OS     13              500                      100          300 GB
              25              1,000                    200          600 GB
              50              2,000                    400          1.2 TB

As shown in Table 13 on page 88, supporting eight virtual desktops requires at least one core, with a minimum of 2 GB of RAM for each desktop. Consider the correct balance of memory and cores required for the number of virtual desktops to be supported by a server. For example, a server that supports 24 virtual desktops requires a minimum of three cores and a minimum of 48 GB of RAM.

IP network switches used to implement this reference architecture must have a minimum non-blocking backplane capacity of 96 Gb/s (for 500 virtual desktops), 192 Gb/s (for 1,000 virtual desktops), or 320 Gb/s (for 2,000 virtual desktops) and must support the following features:
- IEEE 802.3x Ethernet flow control
- 802.1Q VLAN tagging
- Ethernet link aggregation using the IEEE 802.1AX (802.3ad) Link Aggregation Control Protocol
- SNMP management capability
- Jumbo frames
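The per-desktop minimums stated above (desktop OS: eight desktops per core and 2 GB of RAM each; server OS: five desktops per core and 0.6 GB each) can be checked with a short calculation. This is an illustrative sketch only; the function name and return shape are invented for this example, and the figures exclude the per-host Hyper-V reservation.

```python
import math

# Per-desktop minimums from the validated configuration:
# desktop OS: 8 desktops per core, 2 GB RAM per desktop;
# server OS (hosted shared desktops): 5 desktops per core, 0.6 GB RAM per desktop.
RATIOS = {"desktop": (8, 2.0), "server": (5, 0.6)}

def size_compute(desktops: int, os_type: str = "desktop") -> tuple[int, float]:
    """Return (minimum cores, minimum RAM in GB) for a given desktop count."""
    per_core, ram_each = RATIOS[os_type]
    return math.ceil(desktops / per_core), desktops * ram_each

print(size_compute(24))   # → (3, 48.0), the worked example in the text
print(size_compute(500))  # → (63, 1000.0), matching the 500-desktop scale point
```

The 500-desktop result (63 cores, roughly 1 TB of RAM) reproduces the totals listed in the hardware tables, which is a useful cross-check when substituting servers with different core densities.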

Choose the number and type of switches required to support high availability, and choose a network vendor that can provide easily available parts, good service, and optimal support contracts. The network configuration should include the following:
- A minimum of two switches to support redundancy
- Redundant power supplies
- A minimum of 40 x 1 GbE ports (for 500 virtual desktops), 2 x 1 GbE and 14 x 10 GbE ports (for 1,000 virtual desktops), or 2 x 1 GbE and 22 x 10 GbE ports (for 2,000 virtual desktops), distributed for high availability
- The appropriate uplink ports for customer connectivity

While the use of 10 GbE ports should align with the ports on the server and storage, keep in mind the overall network requirements for the solution and the level of redundancy required to support high availability. Consider additional server NICs and storage connections for specific implementation requirements.

The management infrastructure (Active Directory, DNS, DHCP, and SQL Server) can be supported on two servers similar to those previously defined, but requires a minimum of only 20 GB of RAM instead of 128 GB.
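The minimum port counts listed above can be spread across the redundant switch pair with a small helper. This is a sketch under the stated assumption of an even split across two switches; the dictionary and function names are illustrative, not part of any vendor tool.

```python
import math

# Minimum switch port counts per scale point, from the guidance above,
# to be distributed across at least two switches for high availability.
MIN_PORTS = {
    500:  {"1GbE": 40, "10GbE": 0},
    1000: {"1GbE": 2,  "10GbE": 14},
    2000: {"1GbE": 2,  "10GbE": 22},
}

def ports_per_switch(desktops: int, switches: int = 2) -> dict:
    """Spread the minimum port counts evenly across the redundant switches."""
    need = MIN_PORTS[desktops]
    return {kind: math.ceil(count / switches) for kind, count in need.items()}

print(ports_per_switch(2000))  # → {'1GbE': 1, '10GbE': 11}
```

Uplink ports for customer connectivity are additional and depend on the site, so they are deliberately excluded here.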

Server configuration guidelines

When you design and order the compute/server layer of the VSPEX solution, consider several factors that might alter the final purchase. From a virtualization perspective, if you fully understand the system's workload, features like dynamic memory can reduce the aggregate memory requirement. If the virtual desktop pool does not have a high level of peak or concurrent usage, the number of vCPUs can be reduced. Conversely, if the applications being deployed are highly computational in nature, the number of CPUs and the amount of memory purchased might need to be increased.

Table 7 provides configuration details for the virtual desktop servers and network hardware.

Table 7. Server hardware

Servers for virtual desktops:

CPU:
- Desktop OS: 1 vCPU per desktop (8 desktops per core)
  63 cores across all servers for 500 virtual desktops
  125 cores across all servers for 1,000 virtual desktops
  250 cores across all servers for 2,000 virtual desktops
- Server OS: 0.2 vCPU per desktop (5 desktops per core)
  100 cores across all servers for 500 virtual desktops
  200 cores across all servers for 1,000 virtual desktops
  400 cores across all servers for 2,000 virtual desktops

Memory:
- Desktop OS: 2 GB RAM per desktop; 2 GB RAM reservation per Hyper-V host
  1 TB RAM across all servers for 500 virtual desktops
  2 TB RAM across all servers for 1,000 virtual desktops
  4 TB RAM across all servers for 2,000 virtual desktops
- Server OS: 0.6 GB RAM per desktop; 2 GB RAM reservation per Hyper-V host
  300 GB RAM across all servers for 500 virtual desktops
  600 GB RAM across all servers for 1,000 virtual desktops
  1.2 TB RAM across all servers for 2,000 virtual desktops

Network:
- 6 x 1 GbE NICs per server for 500 virtual desktops
- 3 x 10 GbE NICs per blade chassis or 6 x 1 GbE NICs per standalone server for 1,000 virtual desktops
- 3 x 10 GbE NICs per blade chassis or 6 x 1 GbE NICs per standalone server for 2,000 virtual desktops

Microsoft Hyper-V memory virtualization for VSPEX

Microsoft Hyper-V has a number of advanced features that help maximize performance and overall resource utilization. The most important of these pertain to memory management. This section describes some of these features and the items you must consider when using them in the environment.

In general, you can consider the virtual machines on a single hypervisor as consuming memory from a pool of resources. Figure 17 shows an example of memory consumption at the hypervisor level.

Figure 17. Hypervisor memory consumption

Dynamic Memory

Dynamic Memory, introduced in Windows Server 2008 R2 SP1, increases physical memory efficiency by treating memory as a shared resource and allocating it to virtual machines dynamically. The actual consumed memory of each virtual machine is adjusted on demand. Dynamic Memory enables more virtual machines to run by reclaiming unused memory from idle virtual machines. In Windows Server 2012, Dynamic Memory also enables dynamically increasing the maximum memory available to virtual machines.

Non-Uniform Memory Access

Non-Uniform Memory Access (NUMA) is a multi-node computer technology that enables a CPU to access remote-node memory. This type of memory access is costly in terms of performance. However, Windows Server 2012 employs process affinity, which strives to keep threads pinned to a particular CPU to avoid remote-node memory access. In previous versions of Windows, this feature was available only to the host. Windows Server 2012 extends this functionality to virtual machines, where it improves performance.

Smart Paging

With Dynamic Memory, Hyper-V allows virtual machines to exceed the physically available memory, so there is likely a gap between minimum memory and startup memory. Smart Paging is a memory management technique that uses disk resources as a temporary memory replacement: it swaps less-used memory out to disk storage and swaps it back in when needed. The drawback is that this can degrade performance. Hyper-V continues to use guest paging when the host memory is oversubscribed, because it is more efficient than Smart Paging.

Memory configuration guidelines

This section provides guidelines for allocating memory to virtual machines. The guidelines outlined here take into account Hyper-V memory overhead and the virtual machine memory settings.
Hyper-V memory overhead

The virtualization of memory resources incurs associated overhead, including the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine. Leave at least 2 GB of memory for the Hyper-V parent partition in this solution.

Allocating memory to virtual machines

The proper sizing of memory for a virtual machine in VSPEX architectures is based on many factors. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments, as discussed later in this paper. Table 13 on page 88 outlines the resources used by a single virtual machine.

Network configuration guidelines

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines outlined here take into account jumbo frames, VLANs, and LACP on EMC unified storage. Table 4 on page 50 provides detailed network resource requirements.

Table 8. Hardware resources for network

Network infrastructure, minimum switching capacity:

Block:
- 2 physical switches
- 2 x 10 GbE ports per Microsoft Hyper-V server
- 1 x 1 GbE port per Control Station for management
- 2 x FC/CEE/10 GbE ports per Microsoft Hyper-V server, for the storage network
- 2 x FC/CEE/10 GbE ports per SP, for desktop data
- 2 x 10 GbE ports per Data Mover for user data

File:
- 2 physical switches
- 4 x 10 GbE ports per Microsoft Hyper-V server
- 1 x 1 GbE port per Control Station for management
- 2 x 10 GbE ports per Data Mover for data

Note: The solution can use 1 Gb network infrastructure as long as the underlying requirements for bandwidth and redundancy are fulfilled.

VLAN

It is a best practice to isolate network traffic so that the traffic between hosts and storage, the traffic between hosts and clients, and the management traffic all move over isolated networks. In some cases physical isolation might be required for regulatory or policy compliance reasons, but in many cases logical isolation using VLANs is sufficient. This solution calls for a minimum of three VLANs:
- Client access
- Storage
- Management

The VLANs are illustrated in Figure 18.

Figure 18. Required networks

Note: The diagram demonstrates the network connectivity requirements for a VNX array using 10 GbE network connections. A similar topology should be created for an array using 1 GbE network connections.

The client access network is for users of the system (clients) to communicate with the infrastructure. The storage network is used for communication between the compute layer and the storage layer. The management network gives administrators a dedicated way to access the management connections on the storage array, network switches, and hosts.

Notes:
- Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. These additional networks can be implemented, but they are not required.
- If you choose the Fibre Channel storage network option for the deployment, similar best practices and design principles apply.

Enable jumbo frames

This EMC VSPEX end-user computing solution recommends setting the MTU to 9,000 (jumbo frames) for efficient storage and migration traffic.

Link aggregation

A link aggregation resembles an Ethernet channel but uses the IEEE 802.3ad LACP standard. The IEEE 802.3ad standard supports link aggregations of two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.

Storage configuration guidelines

Hyper-V allows more than one method of using storage when hosting virtual machines. We tested the solutions described in this section and in Table 9 using SMB, and the storage layout described adheres to all current best practices. Customers and architects can make modifications based on their understanding of the system's usage and load, if required.

This solution used Login VSI to simulate a user load against the desktops. Login VSI provides guidance for gauging the maximum number of users a desktop environment can support. The Login VSI medium workload was selected for this testing. The storage layouts for 500, 1,000, and 2,000 desktops are validated when the Login VSImax average response time stays below the dynamically calculated maximum threshold, known as VSImax Dynamic. Login VSI has two ways of defining the maximum threshold: classic and dynamic VSImax. The classic VSImax threshold is fixed at 4,000 milliseconds, whereas the dynamic VSImax threshold is calculated from the initial response times of the user activities.
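The classic and dynamic thresholds can be contrasted in a short sketch. The document only states that the dynamic threshold is derived from the initial (baseline) response time; the "125% of baseline plus 3,000 ms" shape used below is an assumption for illustration, not a formula taken from this guide or guaranteed to match any particular Login VSI release.

```python
CLASSIC_THRESHOLD_MS = 4000  # fixed classic VSImax threshold stated in the text

def dynamic_vsimax_threshold(baseline_ms: float) -> float:
    # Assumed formulation: dynamic threshold derived from the baseline
    # response time as baseline * 125% + 3,000 ms. Illustrative only;
    # consult the Login VSI documentation for the exact formula.
    return baseline_ms * 1.25 + 3000

def under_threshold(avg_response_ms: float, baseline_ms: float) -> bool:
    """True when the measured average stays below the dynamic threshold."""
    return avg_response_ms < dynamic_vsimax_threshold(baseline_ms)

print(dynamic_vsimax_threshold(1400))  # → 4750.0
print(under_threshold(3200, 1400))     # → True
```

The point of the dynamic form is that a fast environment (low baseline) must also stay fast under load, rather than being graded against the same fixed 4,000 ms bar as a slower one.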

Table 9. Storage hardware

Hardware: EMC VNX storage (shared storage for virtual desktops; NL-SAS disks are optional, for user data)

Common configuration:
2 x 10 GbE interfaces per Data Mover
2 x 8 Gb FC ports per storage processor (FC variant only)

For 500 virtual desktops:
2 Data Movers (active/standby, SMB variant only)
600 GB 15k rpm 3.5-inch SAS disks (drive counts for the PvD, non-PvD, and HSD options with PVS and MCS provisioning are given in Table 10 and the core storage layouts)
2 x 100 GB 3.5-inch flash drives
16 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks (optional, for user data)

For 1,000 virtual desktops:
2 Data Movers (active/standby, SMB variant only)
600 GB 15k rpm 3.5-inch SAS disks (see Table 10)
2 x 100 GB 3.5-inch flash drives
24 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks (optional, for user data)

For 2,000 virtual desktops:
2 Data Movers (active/standby, SMB variant only)
600 GB 15k rpm 3.5-inch SAS disks (see Table 10)
4 x 100 GB 3.5-inch flash drives
48 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks (optional, for user data)

Optional for infrastructure storage:
For 500 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks
For 1,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks
For 2,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks

Hyper-V storage virtualization for VSPEX

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance. Windows Server 2012 Hyper-V and Failover Clustering use the Cluster Shared Volumes (CSV) v2 and new virtual hard disk format (VHDX) features to virtualize storage presented from an external shared storage system to host virtual machines. In Figure 19, the storage array presents either block-based LUNs (as CSVs) or file-based CIFS (SMB) shares to the Windows hosts to host virtual machines.

Figure 19. Hyper-V virtual disk types

CIFS

Windows Server 2012 supports using CIFS (SMB 3.0) file shares as shared storage for Hyper-V virtual machines.

CSV

A Cluster Shared Volume (CSV) is a shared disk containing an NTFS volume that is made accessible to all nodes of a Windows failover cluster. It can be deployed over any SCSI-based local or network storage.

Pass-through disks

Windows Server 2012 also supports pass-through disks, which allow a virtual machine to access a physical disk mapped to the host that does not have a volume configured on it.

SMB 3.0 (file-based storage only)

SMB is the file-sharing protocol used by default in Windows environments. Windows Server 2012 introduces a vast set of new features with the updated SMB 3.0 protocol. Some of the key SMB 3.0 features available in Windows Server 2012 are:

SMB Transparent Failover
SMB Scale Out
SMB Multichannel
SMB Direct
SMB Encryption
VSS for SMB file shares
SMB Directory Leasing
SMB PowerShell

With these new features, SMB 3.0 offers richer capabilities that, combined, give organizations a high-performance storage alternative to traditional Fibre Channel storage solutions at a lower cost.

Note: SMB is also known as Common Internet File System (CIFS). For more details about SMB 3.0, refer to EMC VNX Series: Introduction to SMB 3.0 Support.

ODX (block-based storage only)

Offloaded Data Transfer (ODX) is a feature of the storage stack in Microsoft Windows Server 2012 that lets you use your investment in external storage arrays to offload data transfers from the server to the arrays. When used with storage hardware that supports ODX, file copy operations are initiated by the host but performed by the storage device. ODX uses a token-based mechanism for reading and writing data within or between storage arrays, eliminating the data transfer between the storage and the Hyper-V hosts and reducing the load on your network and hosts.

Using ODX helps enable rapid cloning and migration of virtual machines. Because the file transfer is offloaded to the storage array, host resource usage, such as CPU and network, is significantly reduced. By maximizing the use of the storage array, ODX minimizes latencies and improves the transfer speed of large files, such as database or video files.
When ODX-supported file operations are performed, data transfers are automatically offloaded to the storage array and are transparent to users. ODX is enabled by default in Windows Server 2012.
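As a rough illustration of the offload, the sketch below models the host-side data movement for a single copy with and without ODX. The 2x read/write factor and the negligible token size are simplifying assumptions, not figures from this guide:

```python
def host_bytes_moved(file_size_gb: float, odx: bool) -> float:
    """GB that must transit the Hyper-V host for one file copy (illustrative).

    Without ODX, the host reads the data from the array and writes it
    back (2x the file size crosses the network and host). With ODX,
    only a small token exchange crosses the wire and the array moves
    the data internally.
    """
    TOKEN_OVERHEAD_GB = 0.0  # assumed negligible: tokens are tiny
    return TOKEN_OVERHEAD_GB if odx else 2 * file_size_gb


print(host_bytes_moved(40, odx=False))  # 80 GB through the host
print(host_bytes_moved(40, odx=True))   # 0.0 -- array-internal transfer
```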

New virtual hard disk format

Hyper-V in Windows Server 2012 contains an update to the VHD format, called VHDX, which has a much larger capacity and built-in resiliency. The main new features of the VHDX format are:

Support for virtual hard disk storage capacity of up to 64 TB
Additional protection against data corruption during power failures, by logging updates to the VHDX metadata structures
Optimal structure alignment of the virtual hard disk format to suit large-sector disks

The VHDX format also provides the following features:

Larger block sizes for dynamic and differencing disks, which enables the disks to meet the needs of the workload
A 4 KB logical-sector virtual disk that increases performance for applications and workloads designed for 4 KB sectors
The ability to store custom metadata about the file that the user might want to record, such as the operating system version or applied updates
Space reclamation features that can result in smaller file sizes and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware)

VSPEX storage building block

Sizing the storage system to meet virtual server IOPS is a complicated process. When I/O reaches the storage array, several components serve that I/O: the Data Mover (for file-based storage), the storage processors, back-end dynamic random access memory (DRAM) cache, FAST Cache (if used), and the disks. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.

VSPEX uses a building block approach to reduce complexity. A building block is a set of disk spindles that can support a certain number of virtual desktops in the VSPEX architecture.
Each building block combines several disk spindles to create a storage pool that supports the needs of the end-user computing environment. Three building blocks (500, 1,000, and 2,000 desktops) are currently verified on the VNX series and provide a flexible solution for VSPEX sizing. Table 10 lists the disks required to support each scale of configuration, excluding hot spares.

Note: If a configuration is started with the 500-desktop building block for MCS, it can be expanded to the 1,000-desktop building block by adding ten matching SAS drives and allowing the pool to restripe. For details about pool expansion and restriping, refer to the EMC VNX Virtual Provisioning Applied Technology White Paper.
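The building block approach can be sketched as a simple lookup. The drive counts below are the non-PvD SAS counts from the core storage layouts in this chapter; the helper function itself is illustrative:

```python
# Validated VSPEX building blocks: desktops -> SAS data drives (non-PvD),
# taken from the core storage layouts later in this chapter.
SAS_DRIVES = {
    "PVS": {500: 8, 1000: 16, 2000: 32},   # RAID 10 pools
    "MCS": {500: 10, 1000: 20, 2000: 40},  # RAID 5 pools
}


def drives_for(desktops: int, provisioning: str = "MCS") -> int:
    """Return the drive count of the smallest validated building block
    that covers the requested desktop count."""
    blocks = SAS_DRIVES[provisioning]
    for size in sorted(blocks):
        if desktops <= size:
            return blocks[size]
    raise ValueError("beyond the validated 2,000-desktop maximum")


print(drives_for(800))  # 20 -- the 1,000-desktop MCS block
# Expanding MCS from 500 to 1,000 desktops adds 20 - 10 = 10 matching
# drives, as described in the note above.
```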

Table 10. Number of disks required for various numbers of virtual desktops
(PvD counts include the optional personal vdisk pool)

Virtual   VNX       Flash drives   SAS drives      SAS drives   SAS drives      SAS drives
desktops  platform  (FAST Cache)   (PVS/Non-PvD)   (PVS/PvD)    (MCS/Non-PvD)   (MCS/PvD)
500       VNX5400   2              8               16           10              13
1,000     VNX5400   2              16              32           20              26
2,000     VNX5600   4              32              64           40              52

VSPEX end-user computing validated maximums

VSPEX end-user computing configurations are validated on the VNX5400 and VNX5600 platforms. Each platform has different capabilities in terms of processors, memory, and disks. For each array there is a recommended maximum VSPEX end-user computing configuration. As outlined in Table 10, the recommended maximum for a VNX5400 is 1,000 desktops and the recommended maximum for a VNX5600 is 2,000 desktops.

Storage layout for 500 virtual desktops

Core storage layout with PVS provisioning

Figure 20 illustrates the layout of the disks that are required to store 500 virtual desktops with PVS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. It does not include space for user profile data.

Figure 20. Core storage layout with PVS provisioning for 500 virtual desktops

Core storage layout with PVS provisioning overview

The following core configuration is used in the reference architecture for 500 desktop virtual machines:

Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX Operating Environment (OE).

The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_1_5 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

Eight SAS disks (shown here as 1_0_7 to 1_0_14) in the RAID 10 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 200 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, two LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Two flash drives (shown here as 1_0_5 and 1_0_6) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

Five SAS disks (1_1_0 to 1_1_4) in the RAID 5 storage pool 2 are used to store PVS vdisks and TFTP images. FAST Cache is enabled for the entire pool.

Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 1_1_6 to 1_1_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If drives of different sizes are used, storage layout algorithms might give sub-optimal results.

Core storage layout with MCS provisioning

Figure 21 illustrates the layout of the disks that are required to store 500 virtual desktops with MCS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. It does not include space for user profile data.

Figure 21. Core storage layout with MCS provisioning for 500 virtual desktops

Core storage layout with MCS provisioning overview

The following core configuration is used in the reference architecture for 500 desktop virtual machines:

Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_1_2 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

Ten SAS disks (shown here as 1_0_5 to 1_0_14) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 200 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, two LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Note: If personal vdisk is implemented, half the drives (five SAS disks for 500 desktops) are sufficient to satisfy the performance requirement. However, the desktop capacity is reduced by 50 percent. If this reduced capacity meets your environment's requirement, implement personal vdisk with MCS provisioning using five SAS drives for 500 desktops.

Two flash drives (shown here as 1_1_0 and 1_1_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 1_1_3 to 1_1_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If drives of different sizes are used, storage layout algorithms might give sub-optimal results.
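As a rough arithmetic check of the MCS layout above, the provisioned LUNs should fit within the pool's usable space. The 4+1 RAID 5 private group geometry is an assumption, and real usable capacity is lower after formatting and pool overhead:

```python
# Illustrative capacity check for the 500-desktop MCS pool:
# ten 600 GB drives in a RAID 5 pool (assumed two 4+1 groups) leave
# roughly eight drives' worth of usable space.
drives, size_gb = 10, 600
usable_gb = size_gb * drives * 4 / 5  # 4+1 RAID 5 parity overhead
nas_gb = 10 * 200                     # ten 200 GB LUNs (SMB variant)
fc_gb = 2 * 1024                      # two 1 TB LUNs (FC variant)

assert max(nas_gb, fc_gb) <= usable_gb
print(usable_gb, nas_gb, fc_gb)  # 4800.0 2000 2048
```

Both variants carve out roughly 2 TB of the pool, leaving headroom for metadata and growth.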

Optional storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 22. This storage is in addition to the core storage shown in Figure 21. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 22. Optional storage layout for 500 virtual desktops

Optional storage layout overview

The optional storage layout is used to store the infrastructure servers, user profiles and home directories, and personal vdisks. The following optional configuration is used in the reference architecture for 500 virtual desktops:

The EMC VNX Series does not require a dedicated hot spare drive. The disk shown here as 0_2_14 is an unbound disk that can be used as a hot spare when needed. This disk is marked as a hot spare in the storage layout diagram.

Five SAS disks (shown here as 0_2_0 to 0_2_4) in the RAID 5 storage pool 6 are used to store the infrastructure virtual machines. A 1 TB LUN is provisioned from the pool and presented to the Hyper-V servers as a CSV.

Sixteen NL-SAS disks (shown here as 0_2_5 to 0_2_13 and 1_2_0 to 1_2_6) in the RAID 6 storage pool 4 are used to store user data and roaming profiles. Ten LUNs of 500 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems.

If multiple drive types have been implemented, FAST VP can be enabled to automatically tier data to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed.
Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is performed as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop storage, but it can provide performance improvements when implemented for user data and roaming profiles.

Eight SAS disks (1_2_7 to 1_2_14) in the RAID 10 storage pool 5 are used to store the personal vdisks. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 200 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, two LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Storage layout for 1,000 virtual desktops

Core storage layout with PVS provisioning

Figure 23 illustrates the layout of the disks that are required to store 1,000 virtual desktops with PVS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. It does not include space for user profile data.

Figure 23. Core storage layout with PVS provisioning for 1,000 virtual desktops

Core storage layout with PVS provisioning overview

The following core configuration is used in the reference architecture for 1,000 virtual desktops:

Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_0_7 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

Sixteen SAS disks (shown here as 1_0_8 to 1_0_14 and 1_1_0 to 1_1_8) in the RAID 10 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

For NAS, ten LUNs of 400 GB each are provisioned from the pool to provide the storage required to create four CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, four LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Two flash drives (shown here as 1_0_5 and 1_0_6) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

Five SAS disks (1_1_9 to 1_1_13) in the RAID 5 storage pool 2 are used to store PVS vdisks and TFTP images. FAST Cache is enabled for the entire pool.

Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 1_1_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If drives of different sizes are used, storage layout algorithms might give sub-optimal results.

Core storage layout with MCS provisioning

Figure 24 illustrates the layout of the disks that are required to store 1,000 virtual desktops with MCS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. It does not include space for user profile data.

Figure 24. Core storage layout with MCS provisioning for 1,000 virtual desktops

Core storage layout with MCS provisioning overview

The following core configuration is used in the reference architecture for 1,000 virtual desktops:

Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

The EMC VNX Series does not require a dedicated hot spare drive.
The disks shown here as 1_0_4 and 1_1_2 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

Twenty SAS disks (shown here as 1_0_5 to 1_0_14 and 1_1_3 to 1_1_12) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 800 GB each are provisioned from the pool to provide the storage required to create four CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, four LUNs of 2 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Note: If personal vdisk is implemented, half the drives (ten SAS disks for 1,000 desktops) are sufficient to satisfy the performance requirement. However, the desktop capacity is reduced by 50 percent. If this reduced capacity meets your environment's requirement, implement personal vdisk with MCS provisioning using ten SAS drives for 1,000 desktops.

Two flash drives (shown here as 1_1_0 and 1_1_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

Disks shown here as 0_0_4 to 0_0_24 and 1_1_13 to 1_1_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If drives of different sizes are used, storage layout algorithms might give sub-optimal results.

Optional storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 25. This storage is in addition to the core storage shown in Figure 24. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 25. Optional storage layout for 1,000 virtual desktops

Optional storage layout overview

The optional storage layout is used to store the infrastructure servers, user profiles and home directories, and personal vdisks. The following optional configuration is used in the reference architecture for 1,000 virtual desktops:

The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 0_2_14 and 0_3_14 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

Five SAS disks (shown here as 0_2_0 to 0_2_4) in the RAID 5 storage pool 6 are used to store the infrastructure virtual machines. A 1 TB LUN is provisioned from the pool and presented to the Hyper-V servers as a CSV.

Twenty-four NL-SAS disks (shown here as 0_2_5 to 0_2_13 and 1_2_0 to 1_2_14) in the RAID 6 storage pool 4 are used to store user data and roaming profiles. Ten LUNs of 1 TB each are provisioned from the pool to provide the storage required to create two CIFS file systems.

If you have implemented multiple drive types, you can enable FAST VP to automatically tier data to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is performed as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop storage, but it can provide performance improvements when implemented for user data and roaming profiles.

Sixteen SAS disks (0_3_0 to 0_3_13 and 1_3_0 to 1_3_1) in the RAID 10 storage pool 5 are used to store the personal vdisks. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 400 GB each are provisioned from the pool to provide the storage required to create four CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, four LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Disks shown here as 1_3_2 to 1_3_14 are unused. They were not used for testing this solution.
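The 256 MB slice granularity determines how many relocation units FAST VP tracks and rebalances. A minimal sketch, using the 1,000-desktop user-data pool (ten 1 TB LUNs) as an example:

```python
import math

SLICE_MB = 256  # FAST VP relocation granularity


def slice_count(pool_tb: float) -> int:
    """Number of 256 MB slices FAST VP manages for the provisioned
    capacity of a pool (1 TB = 1,024 GB = 1,048,576 MB)."""
    return math.ceil(pool_tb * 1024 * 1024 / SLICE_MB)


# Ten 1 TB LUNs in the user-data pool:
print(slice_count(10))  # 40960 slices to monitor and relocate
```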

Storage layout for 2,000 virtual desktops

Core storage layout with PVS provisioning

Figure 26 illustrates the layout of the disks that are required to store 2,000 virtual desktops with PVS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. It does not include space for user profile data.

Figure 26. Core storage layout with PVS provisioning for 2,000 virtual desktops

Core storage layout with PVS provisioning overview

The following core configuration is used in the reference architecture for 2,000 virtual desktops:

Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4, 1_1_14, and 0_2_2 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

Thirty-two SAS disks (shown here as 1_0_5 to 1_0_14, 0_1_0 to 0_1_14, and 1_1_0 to 1_1_6) in the RAID 10 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

For NAS, ten LUNs of 800 GB each are provisioned from the pool to provide the storage required to create eight CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, eight LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Four flash drives (shown here as 1_1_12 to 1_1_13 and 0_2_0 to 0_2_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

Five SAS disks (1_1_7 to 1_1_11) in the RAID 5 storage pool 2 are used to store PVS vdisks and TFTP images. FAST Cache is enabled for the entire pool.

Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 0_2_3 to 0_2_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If drives of different sizes are used, storage layout algorithms might give sub-optimal results.

Core storage layout with MCS provisioning

Figure 27 illustrates the layout of the disks that are required to store 2,000 virtual desktops with MCS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. It does not include space for user profile data.

Figure 27. Core storage layout with MCS provisioning for 2,000 virtual desktops

Core storage layout with MCS provisioning overview

The following core configuration is used in the reference architecture for 2,000 desktop virtual machines:

Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4, 0_1_2, and 0_2_5 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

Forty SAS disks (shown here as 1_0_5 to 1_0_14, 0_1_3 to 0_1_14, 1_1_2 to 1_1_14, and 0_2_0 to 0_2_4) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 1,600 GB each are provisioned from the pool to provide the storage required to create eight CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares.

For FC, eight LUNs of 2 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Note: If personal vdisk is implemented, half the drives (twenty SAS disks for 2,000 desktops) are sufficient to satisfy the performance requirement. However, the desktop capacity is reduced by 50 percent. If this reduced capacity meets your environment's requirement, implement personal vdisk with MCS provisioning using twenty SAS drives for 2,000 desktops.

Four flash drives (shown here as 0_1_0 to 0_1_1 and 1_1_0 to 1_1_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 0_2_6 to 0_2_14 are unbound. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If drives of different sizes are used, storage layout algorithms might give sub-optimal results.
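The halving rule in the personal vdisk notes can be expressed as a small sketch. The drive counts are taken from the MCS core layouts in this chapter; the helper function is illustrative:

```python
# MCS desktop-pool SAS drive counts without personal vdisk, from the
# core storage layouts for each scale point.
MCS_NON_PVD = {500: 10, 1000: 20, 2000: 40}


def mcs_drives(desktops: int, personal_vdisk: bool) -> int:
    """Desktop-pool drives for MCS provisioning. With personal vdisk,
    half the drives satisfy the performance requirement, at the cost
    of 50 percent of the desktop capacity."""
    drives = MCS_NON_PVD[desktops]
    return drives // 2 if personal_vdisk else drives


print(mcs_drives(2000, personal_vdisk=True))   # 20
print(mcs_drives(2000, personal_vdisk=False))  # 40
```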

Optional storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 28. This storage is in addition to the core storage shown in Figure 27. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 28. Optional storage layout for 2,000 virtual desktops

Optional storage layout overview

The optional storage layout is used to store the infrastructure servers, user profiles and home directories, and personal vdisks. The following optional configuration is used in the reference architecture for 2,000 virtual desktops:

The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_2_14, 0_4_9, 0_5_12, and 0_5_13 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

Five SAS disks (shown here as 1_2_0 to 1_2_4) in the RAID 5 storage pool 6 are used to store the infrastructure virtual machines. A 1 TB LUN is provisioned from the pool and presented to the Hyper-V servers as a CSV.

Forty-eight NL-SAS disks (shown here as 1_2_5 to 1_2_13, 0_3_0 to 0_3_14, 1_3_0 to 1_3_14, and 0_4_0 to 0_4_8) in the RAID 6 storage pool 4 are used to store user data and roaming profiles. Ten LUNs of 2 TB each are provisioned from the pool to provide the storage required to create two CIFS file systems.

If multiple drive types have been implemented, FAST VP can be enabled to automatically tier data to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is performed as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop storage, but it can provide performance improvements when implemented for user data and roaming profiles.

Thirty-two SAS disks (0_4_10 to 0_4_14, 1_4_0 to 1_4_14, and 0_5_0 to 0_5_11) in the RAID 10 storage pool 5 are used to store the personal vdisks. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 800 GB each are provisioned from the pool to provide the storage required to create eight CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, eight LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

High availability and failover

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive most single-unit failures with minimal or no impact to business operations.

Virtualization layer

As indicated earlier, configuring high availability in the virtualization layer and allowing the hypervisor to automatically restart virtual machines that fail is recommended. Figure 29 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 29. High availability at the virtualization layer

Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.

Compute layer

While this solution offers flexibility in the type of servers to be used in the compute layer, we recommend using enterprise-class servers designed for the data center. Connect these servers, with redundant power supplies, to separate Power Distribution Units (PDUs) in accordance with your server vendor's best practices.

Figure 30. Redundant power supplies

Configuring high availability in the virtualization layer is also recommended. This means that the compute layer must be configured with enough resources so that the total number of available resources meets the needs of the environment, even with a server failure, as demonstrated in Figure 30.

Network layer

The advanced networking features of the VNX family provide protection against network connection failures at the array. Each Hyper-V host has multiple connections to the user and storage Ethernet networks to guard against link failures. Spread these connections across multiple Ethernet switches to guard against component failure in the network, as shown in Figure 31.

Figure 31. Network layer high availability

By designing the network with no single points of failure, you can ensure that the compute layer is able to access storage and communicate with users even if a component fails.
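The compute-layer guidance above, that the environment must still fit its workload with one server failed, can be sketched as an N+1 calculation. The host specification below is a hypothetical example, not a requirement of the reference architecture; the 8:1 vCPU-to-core ratio is the documented desktop OS assumption.

```python
import math

# N+1 sizing sketch: how many hosts are needed so the desktop load still
# fits with one host failed? cores_per_host is a hypothetical example value.

def hosts_needed(desktops, vcpus_per_desktop, cores_per_host, vcpu_per_core=8):
    vcpus = desktops * vcpus_per_desktop
    base = math.ceil(vcpus / (cores_per_host * vcpu_per_core))
    return base + 1   # one extra host to absorb a single-server failure

# 1,000 desktop-OS desktops (1 vCPU each) on hypothetical 16-core hosts
print(hosts_needed(1000, 1, 16))   # 9 hosts (8 for the load, plus 1 spare)
```

The same function can be reused with the actual server specification chosen for the deployment.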

Storage layer

The VNX family is designed for proven five 9s (99.999 percent) availability by using redundant components throughout the array. All of the array components are capable of continued operation in the event of a hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can be allocated dynamically to replace a failing disk, as shown in Figure 32.

Figure 32. VNX series high availability

EMC storage arrays are designed to be highly available by default. When they are configured according to the directions in their installation guides, no single-unit failure results in data loss or unavailability.

Validation test profile

The VSPEX solution was validated with the environment profile characteristics shown in Table 11.

Table 11. Validated environment profile

Number of virtual desktops: 500, 1,000, or 2,000, depending on the configuration
Virtual desktop OS: Desktop OS: Windows 7 Enterprise (32-bit) SP1; Server OS: Windows Server 2008 R2 SP1
CPU per virtual desktop: Desktop OS: 1 vCPU; Server OS: 0.2 vCPU
Number of virtual desktops per CPU core: 8 (desktop OS)
RAM per virtual desktop: Desktop OS: 2 GB; Server OS: 0.6 GB
Desktop provisioning method: Provisioning Services (PVS) or Machine Creation Services (MCS)
Average storage available for each virtual desktop: 4 GB (PVS); 8 GB (MCS)
Average IOPS per virtual desktop at steady state: 8 IOPS
Average peak IOPS per virtual desktop during boot storm: 60 IOPS (MCS/NFS variant); 8 IOPS (PVS/NFS variant); 116 IOPS (MCS/FC variant); 14 IOPS (PVS/FC variant)
Number of datastores to store virtual desktops: 2 for 500 virtual desktops; 4 for 1,000 virtual desktops; 8 for 2,000 virtual desktops
Number of virtual desktops per datastore: 250
Disk and RAID type for datastores: RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks
Disk and RAID type for CIFS shares to host roaming user profiles and home directories (optional for user data): RAID 6, 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks

Note: The average IOPS per virtual desktop at steady state is measured when the Login VSI medium-profile workload is simulated on the 500, 1,000, and 2,000 desktop configurations. On each configuration, the Login VSImax is below the dynamic VSImax threshold.

Backup environment configuration guidelines

This section provides guidelines for setting up the backup and recovery environment for this VSPEX solution.

Backup characteristics

Table 12 shows how the backup environment profile in this VSPEX solution was sized using three stacks.

Table 12. Backup profile characteristics

User data: 5 TB for 500 virtual desktops; 10 TB for 1,000 virtual desktops; 20 TB for 2,000 virtual desktops (10.0 GB per desktop)
Daily change rate for user data: 2%
Retention per data type: 30 daily; 4 weekly; 1 monthly

Backup layout

Avamar provides various deployment options for specific use cases and recovery requirements. In this case, the solution is deployed with an Avamar Data Store. This enables the unstructured user data to be backed up directly to the Avamar system for simple file-level recovery. This backup solution unifies the backup process with industry-leading deduplication backup software and systems and achieves the highest levels of performance and efficiency.
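The figures in Table 12 follow from simple arithmetic on the per-desktop allowance and change rate. The sketch below reproduces them; the helper names are ours, decimal units are assumed, and this is a first-cut estimate rather than a full Avamar sizing exercise.

```python
# Backup sizing from the profile: 10 GB of user data per desktop and a 2%
# daily change rate. Helper names are illustrative, not from any EMC tool.

def user_data_tb(desktops, gb_per_desktop=10.0):
    return desktops * gb_per_desktop / 1000   # decimal TB

def daily_change_gb(desktops, change_rate=0.02, gb_per_desktop=10.0):
    return desktops * gb_per_desktop * change_rate

for n in (500, 1000, 2000):
    print(n, user_data_tb(n), daily_change_gb(n))
# 500 desktops -> 5 TB protected and ~100 GB of daily change, and so on,
# matching the 5/10/20 TB figures in Table 12
```

Deduplication on the Avamar side typically reduces the stored and transferred data well below these raw figures, so treat them as upper bounds on daily backup traffic.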

Sizing guidelines

Reference workload

The following sections define the reference workload used to size and implement the VSPEX architectures discussed in this document. They provide guidance on how to correlate the reference workload to actual customer workloads, and how that correlation can change the final delivery configuration from the server and network perspective.

You can modify the storage definition by adding drives for greater capacity and performance, and by adding features such as FAST Cache for desktops and FAST VP for improved user data performance. The disk layouts were created to provide support for the specified number of virtual desktops at the defined performance level. Decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per desktop and a reduced user experience because of higher response time.

Each VSPEX Proven Infrastructure implements the storage, network, and compute resources needed for a set number of virtual machines that have been validated by EMC. In practice, each virtual machine has its own set of requirements, which rarely fit a pre-defined idea of what a virtual machine should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Defining the reference workload

To simplify the discussion, we have defined a representative reference workload. By comparing actual customer needs to this reference workload, you can extrapolate which reference architecture to choose.

For the VSPEX end-user computing solution, the reference workload is defined as a single virtual desktop that can be deployed using a desktop or server OS. In the case of a desktop OS, each user accesses a dedicated virtual machine that is allocated 1 vCPU and 2 GB of RAM.
In the case of a server OS, each virtual machine is allocated 4 vCPUs and 12 GB of RAM and is shared among 20 virtual desktop sessions. Table 13 shows the characteristics of the virtual desktop.

Table 13. Virtual desktop characteristics

Virtual desktop operating system: Desktop OS: Microsoft Windows 7 Enterprise Edition (32-bit) SP1; Server OS: Windows Server 2008 R2 SP1
Virtual processors per virtual desktop: Desktop OS: 1 vCPU; Server OS: 0.2 vCPU
RAM per virtual desktop: Desktop OS: 2 GB; Server OS: 0.6 GB
Available storage capacity per virtual desktop: 4 GB (PVS); 8 GB (MCS)
Average IOPS per virtual desktop at steady state: 8

This desktop definition is based on user data that resides on shared storage. The I/O profile is defined using a test framework that runs all desktops concurrently with a steady load generated by the constant use of office-based applications like browsers, office productivity software, and other standard task utilities.

Applying the reference workload

In addition to the supported desktop numbers (500, 1,000, and 2,000), consider the following factors when deciding which end-user computing solution to deploy.

Concurrency

The workloads used to validate VSPEX solutions assume that all desktop users will be active at all times. In other words, the 1,000-desktop architecture was tested with 1,000 desktops, all generating workload in parallel, all booted at the same time, and so on. If your customer expects to have 1,200 users, but only 50 percent of them will be logged on at any given time because of time zone differences or alternate shifts, the 600 active users out of the total 1,200 users can be supported by the 1,000-desktop architecture.

Heavier desktop workloads

The workload defined in Table 13 and used to test these VSPEX end-user computing configurations is considered a typical office worker load. However, some customers have users with a more active profile.
If a company has 800 users and, because of custom corporate applications, each user generates 12 IOPS as compared to the 8 IOPS used in the VSPEX workload, the solution will need 9,600 IOPS (800 users * 12 IOPS per desktop). The 1,000-desktop configuration would be insufficient in this case, because it has been rated to 8,000 IOPS (1,000 desktops * 8 IOPS per desktop). This customer should move up to the 2,000-desktop solution.

Implementing the reference architectures

The reference architectures require a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are presented as general requirements that are independent of any particular implementation. This section describes some considerations for implementing the requirements.

Resource types

The reference architectures define the hardware requirements for the solution in terms of four basic types of resources:

- CPU resources
- Memory resources
- Network resources
- Storage resources

This section describes the resource types, how they are used in the reference architectures, and key considerations for implementing them in a customer environment.

CPU resources

The architectures define the number of CPU cores that are required, but not a specific type or configuration. It is assumed that new deployments use recent revisions of common processor technologies, and that these will perform as well as, or better than, the systems used to validate the solution. In any running system, it is important to monitor the utilization of resources and adapt as needed.

The reference virtual desktop and the required hardware resources in the reference architectures assume that there will be no more than eight virtual CPUs for each physical processor core (8:1 ratio) when a desktop OS is used. In most cases, this provides an appropriate level of resources for the hosted virtual desktops. In cases where this ratio might not be appropriate, monitor the CPU utilization at the hypervisor layer to determine if more resources are required.

Memory resources

Each virtual desktop in the reference architecture is defined as having 2 GB of memory dedicated to a single instance of the desktop operating system.
In a virtual environment, it is not uncommon, because of budget constraints, to provision virtual desktops with more memory than the hypervisor physically has. The memory overcommitment technique takes advantage of the fact that each virtual desktop does not fully utilize the amount of memory allocated to it. Oversubscribing memory usage to some degree can make business sense. The administrator is responsible for proactively monitoring the oversubscription rate so that it does not shift the bottleneck away from the server and become a burden to the storage subsystem.

This solution was validated with statically assigned memory and no over-commitment of memory resources. If memory over-commitment is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.

Network resources

The reference architectures outline the minimum needs of the system. If additional bandwidth is needed, it is important to add capability to both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports and provide the option of adding ports using EMC FLEX I/O modules.

For reference purposes, in the validated environment, EMC assumes that each virtual desktop generates 8 I/Os per second with an average size of 4 KB, so each virtual desktop generates at least 32 KB/s of traffic on the storage network. For an environment rated for 500 virtual desktops, this equates to a minimum of approximately 16 MB/s, which is well within the bounds of modern networks. However, this does not take into account other operations. For example, additional bandwidth is needed for:

- User network traffic
- Virtual desktop migration
- Administrative and management operations

The requirements for each of these vary depending on how the environment is being used, so it is not practical to provide concrete numbers in this context. However, the network described in the reference architecture for each solution should be sufficient to handle average workloads for the described use cases.

Regardless of the network traffic requirements, always have at least two physical network connections that are shared with a logical network so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.
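The minimum storage-network figure quoted above can be reproduced with the documented per-desktop assumptions. This is a sketch with a function name of our choosing; it uses binary MB, which is why 500 desktops comes out slightly under the quoted 16 MB/s.

```python
# Minimum storage-network throughput from the documented figures:
# each desktop generates ~8 I/Os per second at ~4 KB each.

def min_storage_mbps(desktops, iops=8, io_kb=4):
    kbps = desktops * iops * io_kb   # KB/s on the storage network
    return kbps / 1024               # MB/s (binary)

print(min_storage_mbps(500))    # 15.625, the "approximately 16 MB/s" above
print(min_storage_mbps(2000))   # 62.5 for the largest configuration
```

Remember that this floor excludes user traffic, desktop migration, and management operations, so it is a lower bound rather than a provisioning target.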
Storage resources

The reference architectures contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. There are a few layers to consider when examining storage sizing. Specifically, the array has a collection of disks that are assigned to a storage pool. From that storage pool, you can provision storage to the Microsoft Hyper-V cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5.

It is generally acceptable to replace drive types with a type that has more capacity and the same performance characteristics, or with ones that have higher performance characteristics and the same capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements.

In other cases, where there is a need to deviate from the proposed number and type of drives specified or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system.

Backup resources

The solution outlines both the initial and growth backup storage and retention needs of the system. You can gather additional information to further size Avamar, including tape-out needs, RPO and RTO specifics, and multi-site environment replication needs.

Expanding existing VSPEX EUC environments

The EMC VSPEX EUC solution supports a flexible implementation model that enables you to easily expand your environment as the needs of the business change. You can combine the building-block configurations presented in this solution to form larger implementations. For example, you can build the 1,000-desktop configuration all at once, or you can start with the 500-desktop configuration and expand it as needed. In the same way, you can implement the 2,000-desktop configuration all at once or gradually, by expanding the storage resources as they are needed.

Implementation summary

The requirements stated in the reference architectures are what EMC considers the minimum set of resources to handle the workloads, based on the stated definition of a reference virtual desktop. In any customer implementation, the load of a system will vary over time as users interact with the system. However, if the customer virtual desktops differ significantly from the reference definition and vary in the same resource group, you might need to add more of that resource to the system.

Quick assessment

An assessment of the customer environment helps ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the user types that you plan to migrate into the VSPEX end-user computing environment. For each group, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual desktops required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each user type, as shown in Table 14.

Table 14. Blank worksheet row

The worksheet has one row per user type, with these columns: User type | CPU (virtual CPUs) | Memory (GB) | IOPS | Equivalent reference virtual desktops | Number of users | Total reference desktops. Each row records the resource requirements and the equivalent reference desktops for that user type.

Fill out the resource requirements for the user type. The row requires input on three different resources: CPU, memory, and IOPS.

CPU requirements

The reference virtual desktop assumes that most desktop applications are optimized for a single CPU in a desktop OS deployment. If one type of user requires a desktop with multiple virtual CPUs, modify the proposed virtual desktop count to provide for the additional resources. For example, if you virtualize 100 desktops, but 20 users require two CPUs instead of one, then consider that your pool needs to provide 120 virtual desktops of capability.

Memory requirements

Memory plays a key role in ensuring application functionality and performance. Therefore, each group of desktops will have different targets for the acceptable amount of available memory. Like the CPU calculation, if a group of users requires additional memory resources, simply adjust the number of desktops you are planning for to accommodate the additional resource requirements.
For example, if you have 200 desktops that will be virtualized using a desktop OS, but each one needs 4 GB of memory instead of the 2 GB provided in the reference, plan for 400 virtual desktops.
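The CPU and memory adjustments above are simple scaling against the reference desktop (1 vCPU, 2 GB). The sketch below reproduces both worked examples; the function name and its defaults are ours.

```python
import math

# Scale the planned desktop count when a user group needs more CPU or
# memory than the reference desktop (1 vCPU, 2 GB of RAM).

def adjusted_count(desktops, vcpus=1, mem_gb=2.0, ref_vcpus=1, ref_mem_gb=2.0):
    scale = max(vcpus / ref_vcpus, mem_gb / ref_mem_gb)
    return math.ceil(desktops * scale)

# CPU example: 100 desktops, 20 of them with 2 vCPUs instead of 1
print(80 * 1 + 20 * 2)                  # 120 desktops of capability
# Memory example: 200 desktops at 4 GB instead of the 2 GB reference
print(adjusted_count(200, mem_gb=4.0))  # 400 virtual desktops to plan for
```

The same scaling idea generalizes to the equivalent-reference-desktop calculation formalized in the next section.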

Storage performance requirements

The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications that should be representative of the majority of virtual desktop implementations.

Storage capacity requirements

The storage capacity requirements for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops presented in this solution rely on additional shared storage for user profile data and user documents. This requirement is covered as an optional component that can be met with the addition of specific storage hardware from the reference architecture or with existing file shares in the environment.

Determining equivalent reference virtual desktops

With all of the resources defined, determine an appropriate value for the Equivalent reference virtual desktops rows in Table 14 by using the relationships in Table 15. Round all values up to the nearest whole number.

Table 15. Reference virtual desktop resources

Desktop OS (per reference virtual desktop: 1 vCPU, 2 GB of memory, 8 IOPS):
- CPU: Equivalent reference virtual desktops = resource requirements
- Memory: Equivalent reference virtual desktops = (resource requirements)/2
- IOPS: Equivalent reference virtual desktops = (resource requirements)/8

Server OS (per reference virtual desktop: 0.2 vCPU, 0.6 GB of memory, 8 IOPS):
- CPU: Equivalent reference virtual desktops = (resource requirements)/0.2
- Memory: Equivalent reference virtual desktops = (resource requirements)/0.6
- IOPS: Equivalent reference virtual desktops = (resource requirements)/8

For example, if a group of 100 users needs two virtual CPUs and 12 IOPS per desktop in a desktop OS deployment, along with 8 GB of memory, describe them as needing two reference desktops of CPU, four reference desktops of memory, and two reference desktops of IOPS, based on the virtual desktop characteristics in Table 13 on page 88. These figures go in the Equivalent reference virtual desktops row, as shown in Table 16. Use the maximum value in the row to complete the Equivalent reference virtual desktops column. Multiply the number of equivalent reference virtual desktops by the number of users to arrive at the total resource needs for that type of user.
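The Table 15 relationships can be expressed directly in code. This is a sketch of the documented rule (take the per-resource equivalents, round up, keep the maximum); the function and dictionary names are ours.

```python
import math

# Equivalent-reference-desktop calculation from Table 15.
REF = {"desktop_os": {"cpu": 1, "mem": 2, "iops": 8},
       "server_os":  {"cpu": 0.2, "mem": 0.6, "iops": 8}}

def equivalent_ref_desktops(vcpus, mem_gb, iops, os="desktop_os"):
    r = REF[os]
    per_resource = (math.ceil(vcpus / r["cpu"]),     # CPU equivalents
                    math.ceil(mem_gb / r["mem"]),    # memory equivalents
                    math.ceil(iops / r["iops"]))     # IOPS equivalents
    return max(per_resource), per_resource

# Worked example from the text: 2 vCPUs, 8 GB, 12 IOPS per desktop
equiv, parts = equivalent_ref_desktops(2, 8, 12)
print(parts)        # (2, 4, 2) reference desktops of CPU, memory, IOPS
print(equiv * 100)  # 400 total reference desktops for 100 such users
```

Taking the maximum rather than the sum matches the worksheet rule: the group must be provisioned for its most demanding resource dimension.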

Table 16. Example worksheet row

User type: Heavy users. The row records the resource requirements (CPU, memory, and IOPS), the equivalent reference virtual desktops for each resource, the number of users, and the total reference desktops.

After completing the worksheet for each user type to be migrated into the virtual infrastructure, compute the total number of reference virtual desktops required in the pool by computing the sum of the Total reference desktops column on the right side of the worksheet, as shown in Table 17.

Table 17. Example applications

In this example, the worksheet rows for the user types sum to a total of 900 reference desktops.

The VSPEX end-user computing solutions define specific resource pool sizes. For this solution set, the pool sizes are 500, 1,000, and 2,000. In the case shown in Table 17, the customer requires 900 virtual desktops of capability from the pool. Therefore, the resource pool of 1,000 virtual desktops provides sufficient resources for the current needs as well as room for growth.
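Selecting a pool then reduces to choosing the smallest validated size that covers the worksheet total. A sketch, with a helper name of our choosing:

```python
# Pick the smallest validated pool (500, 1,000, 2,000) that covers the
# total reference-desktop requirement from the worksheet.
POOL_SIZES = (500, 1000, 2000)

def pick_pool(total_ref_desktops):
    for size in POOL_SIZES:
        if size >= total_ref_desktops:
            return size
    raise ValueError("requirement exceeds the largest validated pool")

print(pick_pool(900))   # 1000, as in the Table 17 example
# The heavier-workload case discussed earlier: 800 users at 12 IOPS each is
# 800 * (12/8) = 1,200 reference desktops, pushing past the 1,000 pool.
print(pick_pool(int(800 * 12 / 8)))   # 2000
```

Note that the second example reaches the same conclusion as the earlier IOPS discussion: 9,600 required IOPS exceeds the 8,000 IOPS rating of the 1,000-desktop configuration.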

Fine-tuning

In most cases, the recommended hardware for servers and storage can be sized appropriately based on the process described in the previous section. However, in some cases further customization of available hardware resources might be necessary. A complete description of system architecture is beyond the scope of this document; however, additional customization can be done at this point.

Storage resources

In some applications, it might be necessary to separate some storage workloads from others. The storage layouts in the VSPEX architectures put all of the virtual desktops in a single resource pool. To achieve workload separation, purchase additional disk drives for each group that needs workload isolation and add them to a dedicated pool.

It is not appropriate to reduce the size of the main storage resource pool in order to support isolation, or to reduce the capability of the pool, without additional guidance beyond this document. The storage layouts presented in this paper are designed to balance many different factors in terms of high availability, performance, and data protection. Changing the components of the pool can have significant and difficult-to-predict impacts on other areas of the system.

Server resources

In the VSPEX end-user computing solution, it is possible to customize the server hardware resources more effectively. To do this, first total the resource requirements for the server components, as shown in Table 18. Note the addition of the Total CPU resources and Total memory resources columns on the right side of the table.

Table 18. Server resource component totals

The worksheet lists each user type (for example, heavy, moderate, and typical users) with its CPU (virtual CPUs), memory (GB), and number of users, and totals the CPU and memory resources in the rightmost columns.

In this example, the target architecture required 700 virtual CPUs and 1800 GB of memory.
With the stated assumptions of eight desktops per physical processor core in a desktop OS deployment, and no memory over-provisioning, this translates to 88 physical processor cores and 1800 GB of memory. In contrast, the 1,000 virtual-desktop resource pool, as documented in the reference architecture, calls for 2,000 GB of memory and at least 125 physical processor cores. In this environment, the solution can be implemented effectively with fewer server resources.

Note: Keep high availability requirements in mind when customizing the resource pool hardware.
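The core count above follows from the documented 8:1 desktop-OS ratio. A minimal sketch (the function name is ours):

```python
import math

# Translate customized vCPU totals into physical cores at the documented
# 8:1 desktop-OS ratio, with no memory over-provisioning.

def physical_cores(total_vcpus, vcpu_per_core=8):
    return math.ceil(total_vcpus / vcpu_per_core)

print(physical_cores(700))   # 88 cores, matching the example in the text
```

Memory needs no conversion under the stated assumptions: the 1800 GB total carries through unchanged.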

Table 19 is a blank worksheet for gathering customer information.

Table 19. Blank customer worksheet

The blank worksheet provides one row per user type, with columns for CPU (virtual CPUs), memory (GB), IOPS, equivalent reference virtual desktops, number of users, and total reference desktops, plus a Total row at the bottom.

Chapter 5 VSPEX Configuration Guidelines

This chapter presents the following topics:

- Overview
- Pre-deployment tasks
- Customer configuration data
- Preparing switches, connecting the network, and configuring switches
- Preparing and configuring the storage array
- Installing and configuring Microsoft Hyper-V hosts
- Installing and configuring SQL Server database
- Deploying System Center Virtual Machine Manager server
- Installing and configuring XenDesktop controller
- Installing and configuring Provisioning Services (PVS only)
- Setting up EMC Avamar
- Summary

Overview

Table 20 describes the stages of the solution deployment process. When the deployment is completed, the VSPEX infrastructure is ready for integration with the existing customer network and server infrastructure.

Table 20. Deployment process overview

1. Verify prerequisites. (Reference: Pre-deployment tasks)
2. Obtain the deployment tools. (Reference: Pre-deployment tasks)
3. Gather customer configuration data. (Reference: Pre-deployment tasks)
4. Rack and cable the components. (Reference: vendor's documentation)
5. Configure the switches and networks; connect to the customer network. (Reference: Preparing switches, connecting the network, and configuring switches)
6. Install and configure the VNX. (Reference: Preparing and configuring the storage array)
7. Configure virtual machine storage. (Reference: Preparing and configuring the storage array)
8. Install and configure the servers. (Reference: Installing and configuring Microsoft Hyper-V hosts)
9. Set up SQL Server (used by SCVMM, PVS Server, and XenDesktop). (Reference: Installing and configuring SQL Server database)
10. Install and configure SCVMM. (Reference: Deploying System Center Virtual Machine Manager server)
11. Set up the XenDesktop Controller. (Reference: Installing and configuring XenDesktop controller)
12. Test and install. (Reference: Validating the Solution)

Pre-deployment tasks

Pre-deployment tasks include procedures that do not directly relate to environment installation and configuration, but whose results are needed at the time of installation. Examples of pre-deployment tasks are collection of hostnames, IP addresses, VLAN IDs, license keys, installation media, and so on. Be sure to perform these tasks, shown in Table 21, before the customer visit to decrease the time required onsite.

Table 21. Tasks for pre-deployment

Gather documents: Gather the related documents listed in the references. These are used throughout the text of this document to provide detail on setup procedures and deployment best practices for the various components of the solution. (Reference: EMC documentation; other documentation)

Gather tools: Gather the required and optional tools for the deployment. Use Table 22 to confirm that all equipment, software, and appropriate licenses are available before the deployment process. (Reference: Table 22)

Gather data: Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information on the customer configuration data worksheet for reference during the deployment process. (Reference: Appendix B)

Deployment prerequisites

Complete the VNX Block Configuration Worksheet for Fibre Channel, available on EMC Online Support, to provide the most comprehensive array-specific information. Table 22 itemizes the hardware, software, and license requirements for the solution. For additional information, refer to the hardware and software tables in this guide.

Table 22. Deployment prerequisites checklist

Hardware:
- Physical servers to host virtual desktops: sufficient physical server capacity to host desktops
- Microsoft Hyper-V Server 2012 to host virtual infrastructure servers
Note: This requirement might be covered by existing infrastructure.

Hardware (continued):
- Networking: switch port capacity and capabilities as required by the end-user computing solution
- EMC VNX: multiprotocol storage array with the required disk layout

Software:
- Microsoft SCVMM 2012 SP1 installation media
- Citrix XenDesktop 7 installation media
- Citrix Provisioning Services 7 installation media
- ESI for Microsoft (Reference: EMC Online Support)
- Microsoft Windows Server 2012 installation media (AD/DHCP/DNS/Hypervisor)
- Microsoft Windows 7 SP1 installation media
- Microsoft SQL Server 2012 installation media

Software (FC variant only):
- EMC PowerPath

Licenses:
- Citrix XenDesktop 7 license files
- Microsoft Windows Server 2012 Standard (or higher) license keys
  Note: This requirement might be covered by an existing Microsoft Key Management Server (KMS).
- Microsoft Windows 7 license keys
  Note: This requirement might be covered by an existing Microsoft Key Management Server (KMS).
- Microsoft SQL Server license key
  Note: This requirement might be covered by the existing infrastructure.
- SCVMM 2012 SP1 license keys

Licenses (FC variant only):
- EMC PowerPath license files (Reference: EMC Online Support)

Customer configuration data

To reduce the onsite time, information such as IP addresses and hostnames should be assembled as part of the planning process. Appendix B provides a table for maintaining a record of relevant information. This form can be expanded or contracted as required, and information can be added, modified, and recorded as deployment progresses.

Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information.

Preparing switches, connecting the network, and configuring switches

This section provides the requirements for the network infrastructure to support this architecture. Table 23 offers a summary of the tasks to complete, along with references for further information.

Table 23. Tasks for switch and network configuration

Configure infrastructure network: Configure storage array and Windows host infrastructure networking as specified in Solution architecture on page 45.

Configure storage network (FC variant): Configure Fibre Channel switch ports, zoning for Hyper-V hosts, and the storage array. (Reference: vendor's switch configuration guide)

Configure VLANs: Configure private and public VLANs as required. (Reference: vendor's switch configuration guide)

Complete network cabling: Connect switch interconnect ports. Connect VNX ports.

Preparing network switches

For validated levels of performance and high availability, this solution requires the switching capacity provided in the Solution hardware table on page 50. If the existing infrastructure meets the requirements, new hardware installation is not necessary.

Configuring infrastructure network

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth.
This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 33 and Figure 34 show a sample redundant network infrastructure for this solution. The diagrams illustrate the use of redundant switches and links to ensure that no single points of failure exist in network connectivity. Figure 33. Sample network architecture SMB variant

Figure 34. Sample network architecture FC variant Configuring VLANs Ensure that you have an adequate number of switch ports for the storage array and Hyper-V hosts configured with a minimum of three VLANs for: Virtual machine networking and Hyper-V management traffic (customer-facing networks, which can be separated if necessary) Storage networking (private network) Live Migration (private network) Completing network cabling Ensure that all solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is complete connection to the existing customer network. Note: At this point, the new equipment is in the process of connecting to the existing customer network. Take care to ensure that unforeseen interactions do not cause service issues on the customer network.
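For planning purposes, the three-VLAN minimum above can be expressed as a simple validation check. The following Python sketch is illustrative only; the VLAN IDs are assumptions, not values from this guide.

```python
# Minimal sketch: validate that a planned switch configuration carries the
# three VLAN roles this solution requires. VLAN IDs below are illustrative.
REQUIRED_ROLES = {"vm_and_management", "storage", "live_migration"}

def missing_vlan_roles(vlan_plan):
    """Return the set of required VLAN roles absent from the plan."""
    return REQUIRED_ROLES - {v["role"] for v in vlan_plan}

plan = [
    {"id": 10, "role": "vm_and_management"},  # customer-facing network
    {"id": 20, "role": "storage"},            # private network
    {"id": 30, "role": "live_migration"},     # private network
]
assert missing_vlan_roles(plan) == set()
```

A plan missing any of the three roles would be flagged before cabling begins.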

Preparing and configuring the storage array This section provides resources and instructions for configuring and provisioning core storage and optional storage. Configuring VNX This section describes how to configure the VNX storage array. In this solution, the VNX series provides CIFS file storage or FC SAN-connected block storage for Hyper-V hosts. Table 24 shows the tasks for the storage configuration. Table 24. Tasks for storage configuration Task Description Reference Set up initial VNX configuration Configure the IP address information and other key parameters on the VNX. VNX5400 Unified Installation Guide VNX5600 Unified Installation Guide VNX File and Unified Worksheet Unisphere System Getting Started Guide Your vendor's switch configuration guide Provision FC storage for Hyper-V (FC only) Create FC LUNs that will be presented to the Hyper-V servers as CSVs hosting the virtual desktops. Provision optional storage for user data Create CIFS file systems that will be used to store user data, such as roaming user profiles and home directories. Preparing VNX The VNX5400 Unified Installation Guide provides instructions for assembling, racking, cabling, and powering the VNX. For 2,000 virtual desktops, refer to the VNX5600 Unified Installation Guide instead. There are no specific setup steps for this solution. Setting up the initial VNX configuration After completing the initial VNX setup, you must configure key information about the existing environment so that the storage array can communicate with it. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information: DNS NTP Storage network interfaces Storage network IP address CIFS services and Active Directory Domain membership The reference documents listed in Table 24 provide more information about how to configure the VNX platform. Server configuration guidelines on page 57 provides more information about the disk layout. 

Provisioning core data storage Core data storage is a repository for the virtual desktops' operating system data, and is provisioned differently for the FC and SMB variants. Figure 20, Figure 24, Figure 26, Figure 27, Figure 29, and Figure 30 depict the target storage layout for both Fibre Channel (FC) and SMB variants of the three solution stacks in this VSPEX solution. The following sections describe the provisioning steps for both variants. Provisioning storage for Hyper-V Cluster (FC variant only) Complete the following steps in the EMC Unisphere interface to configure FC LUNs on VNX that will be used to store virtual desktops: 1. Create a block-based RAID 5 storage pool that consists of ten, twenty, or forty 600 GB SAS drives (10 drives for 500 virtual desktops, 20 for 1,000 virtual desktops, or 40 for 2,000 virtual desktops) for the MCS/Non-PvD configuration, and present them to the Hyper-V servers as CSVs. For other MCS or PVS configurations, refer to Storage configuration guidelines to choose the appropriate LUN size. Enable FAST Cache for the storage pool. a. Log in to EMC Unisphere. b. Choose the array that will be used in this solution. c. Go to Storage > Storage Configuration > Storage Pools. d. Select the Pools tab. e. Click Create. 2. In the block storage pool, create four LUNs (for 500 virtual desktops), eight LUNs (for 1,000 virtual desktops), or sixteen LUNs (for 2,000 virtual desktops), and present them to the Hyper-V servers as CSVs. a. Go to Storage > LUNs. b. Click Create. c. In the dialog box, choose the pool created in step 1; MAX for User Capacity; and 4, 8, or 16 for Number of LUNs to create. The LUNs are provisioned after this operation. 3. Configure a storage group to allow the Hyper-V servers access to the newly created LUNs. a. Go to Hosts > Storage Groups. b. Create a new storage group. c. Select the LUNs and Hyper-V hosts to be added to this storage group.
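The drive and LUN counts in the FC provisioning steps above scale with the validated desktop counts. The following Python sketch summarizes that mapping for the MCS/Non-PvD configuration; it is a planning aid only, not part of the validated procedure.

```python
# Sketch of the FC-variant core storage layout described above
# (MCS/Non-PvD configuration): a RAID 5 pool of 600 GB SAS drives,
# with the LUN count scaling by desktop count. Values mirror the text.
FC_CORE_LAYOUT = {
    500:  {"sas_drives": 10, "luns": 4},
    1000: {"sas_drives": 20, "luns": 8},
    2000: {"sas_drives": 40, "luns": 16},
}

def fc_core_layout(desktops):
    """Return the validated pool layout for a supported scale point."""
    try:
        return FC_CORE_LAYOUT[desktops]
    except KeyError:
        raise ValueError("solution is validated at 500, 1,000, or 2,000 desktops")

assert fc_core_layout(2000)["luns"] == 16
```

Intermediate desktop counts are not validated scale points, so the sketch rejects them rather than interpolating.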

Provisioning storage for CIFS Share (SMB variant only) Complete the following steps in EMC Unisphere to configure CIFS file systems on VNX that will be used to store virtual desktops: 1. Create a block-based RAID 5 storage pool that consists of ten, twenty, or forty 600 GB SAS drives (10 drives for 500 virtual desktops, 20 drives for 1,000 virtual desktops, or 40 drives for 2,000 virtual desktops) for an MCS/Non-PvD configuration. For other MCS or PVS configurations, refer to Storage configuration guidelines to choose the appropriate RAID type and disk count. Enable FAST Cache for the storage pool. a. Log in to EMC Unisphere. b. Choose the array that will be used in this solution. c. Go to Storage > Storage Configuration > Storage Pools. d. Select the Pools tab. e. Click Create. 2. Create ten LUNs in the block storage pool, and present them to the Data Mover as dvols in the system-defined NAS pool. Each LUN should be 200 GB (for 500 virtual desktops), 400 GB (for 1,000 virtual desktops), or 800 GB (for 2,000 virtual desktops) for the MCS/Non-PvD configuration. For other MCS or PVS configurations, refer to Storage configuration guidelines to choose the appropriate LUN size. a. Go to Storage > LUNs. b. Click Create. c. In the dialog box, choose the pool created in step 1, MAX for User Capacity, and 10 for Number of LUNs to create. Note: Ten LUNs are created because EMC Performance Engineering recommends creating approximately one LUN for every four drives in the storage pool and creating LUNs in even multiples of ten. Refer to the EMC VNX Unified Best Practices for Performance Applied Best Practices Guide. d. Go to Hosts > Storage Groups. e. Choose filestorage. f. Click Connect LUNs. g. In the Available LUNs panel, choose the 10 LUNs you just created. The LUNs immediately appear in the Selected LUNs panel. 
The Volume Manager automatically detects a new storage pool for file, or you can click Rescan Storage System under Storage Pool for File to scan for it immediately. Do not proceed until the new storage pool for file is present in the GUI.

3. For the MCS/Non-PvD configuration, create four, eight, or sixteen file systems of 500 GB each (four file systems for 500 virtual desktops, eight for 1,000, or sixteen for 2,000), and present them to the Hyper-V servers as SMB shares. For other MCS or PVS configurations, refer to Storage configuration guidelines to choose the appropriate file system size. a. Go to Storage > Storage Configuration > File Systems. b. Click Create. c. In the dialog box, choose Create from Storage Pool. d. Enter the Storage Capacity, for example, 500 GB. e. Accept the default values for all other parameters. 4. Export the file systems using CIFS. a. Go to Storage > Shared Folders > CIFS. b. Click Create. 5. In Unisphere: a. Click Settings > Data Mover Parameters to make changes to the Data Mover configuration. b. In the Set Parameters list, choose All Parameters. c. Scroll down to the nthreads parameter, as shown in Figure 35. d. Click Properties to update the setting. The default number of threads dedicated to serving NFS requests is 384 per Data Mover on VNX. Because this solution requires up to 2,000 desktop connections, increase the number of active NFS threads to a maximum of 1,024 (for 500 virtual desktops) or 2,048 (for 1,000 and 2,000 virtual desktops) on each Data Mover.
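The SMB-variant values above (the per-LUN size for the ten-LUN NAS pool and the Data Mover thread setting) can be restated per scale point. This Python sketch is a quick-lookup aid that mirrors the numbers in the text; it is illustrative only.

```python
# Sketch restating the SMB-variant sizing from the text (MCS/Non-PvD):
# ten NAS-pool LUNs whose size scales with desktop count, and the
# nthreads value to set on each Data Mover.
def data_mover_nthreads(desktops):
    """Active thread setting per Data Mover for this solution."""
    return 1024 if desktops <= 500 else 2048

def nas_lun_size_gb(desktops):
    """Per-LUN size for the ten dvol LUNs backing the NAS pool."""
    return {500: 200, 1000: 400, 2000: 800}[desktops]

assert data_mover_nthreads(500) == 1024
assert data_mover_nthreads(2000) == 2048
assert nas_lun_size_gb(1000) == 400
```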

Figure 35. Set nthread parameter Configuring FAST Cache To configure FAST Cache on the storage pool(s) for this solution, complete the following steps: 1. Configure flash drives as FAST Cache: a. Click Properties (in the dashboard of the Unisphere window) or Manage Cache (in the left-hand pane of the Unisphere window) to open the Storage System Properties dialog box. Figure 36. Storage System Properties dialog box b. Click the FAST Cache tab to view FAST Cache information.

c. Click Create to open the Create FAST Cache dialog box. Figure 37. Create FAST Cache dialog box d. The RAID Type field displays RAID 1 when the FAST Cache has been created. e. You can also choose the number of flash drives. The bottom portion of the window shows the flash drives that will be used for creating FAST Cache. You can choose the drives manually by selecting the Manual option. Refer to Storage configuration guidelines to determine the number of flash drives that are used in this solution. Note: If a sufficient number of flash drives are not available, an error message is displayed and FAST Cache cannot be created. 2. Enable FAST Cache on the storage pool. If a LUN is created in a storage pool, you can configure FAST Cache for that LUN only at the storage pool level. In other words, all the LUNs created in the storage pool will have FAST Cache enabled or disabled together. 3. To configure FAST Cache on a new storage pool, use the Advanced tab in the Create Storage Pool dialog box.

Figure 38. Advanced tab in the Create Storage Pool dialog box After FAST Cache is installed in the VNX series, it is enabled by default when a storage pool is created. Figure 39. Advanced tab in the Storage Pool Properties dialog box Note: The FAST Cache feature on the VNX series array does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours, during which the performance of the array steadily improves. Provisioning optional storage for user data If storage required for user data (for example, roaming user profiles and home directories) does not already exist in the production environment and the optional user data disk pack has been purchased, complete the following steps in Unisphere to configure two CIFS file systems on VNX: 1. Create a block-based RAID 6 storage pool that consists of sixteen, twenty-four, or forty-eight 2 TB NL-SAS drives (sixteen drives for 500 virtual desktops, twenty-four drives for 1,000 virtual desktops, or forty-eight for 2,000 virtual desktops). Figure 22, Figure 28, and Figure 31 depict the target user data storage layout for the solution.

111 Chapter 5: VSPEX Configuration Guidelines 2. Create ten LUNs in the block storage pool, and present them to the Data Mover as dvols in the system-defined NAS pool. The capacity of each LUN should be 1 TB (for 500 virtual desktops), 2 TB (for 1,000 virtual desktops), or 4 TB (for 2,000 virtual desktops). 3. Create two file systems from the system-defined NAS pool containing the ten new LUNs. Export the file systems as CIFS shares. Configuring FAST VP (optional) Optionally, you can configure FAST VP to automate data movement between storage tiers. You can configure FAST VP at the pool level or at the LUN level. Configuring FAST VP at the pool level 1. Select a storage pool and click Properties to open the Storage Pool Properties dialog box. Figure 40 shows the tiering information for a specific FAST VP enabled pool. Figure 40. Storage Pool Properties window The Tier Status box displays FAST VP relocation information for the selected pool. 2. In the Auto-Tiering list, select Manual or Automatic for the Relocation Schedule. The Tier Details panel displays the exact distribution of the data. 3. Click Relocation Schedule to open the Manage Auto-Tiering dialog box. EMC VSPEX End-User Computing 111

112 Chapter 5: VSPEX Configuration Guidelines Figure 41. Manage Auto-Tiering dialog box 4. Optionally, from the Manage Auto-Tiering dialog box, you can change the Data Relocation Rate. The default rate is set to Medium so as not to significantly affect host I/O. 5. Click OK to save your changes. Note: FAST VP is a completely automated tool that schedules relocations to occur automatically. Schedule relocations during nonpeak hours to minimize potential impact on performance. Configuring FAST VP at the LUN level Some FAST VP properties are managed at the LUN level. 1. Click Properties for a specific LUN. 2. Select the Tiering tab to view the tiering information for the LUN. 112 EMC VSPEX End-User Computing

Figure 42. LUN Properties window The Tier Details section displays the current distribution of slices within the LUN. 3. Use the Tiering Policy list to select the tiering policy for the LUN. 4. Click OK to save your changes. Provisioning optional storage for infrastructure virtual machines If the storage required for infrastructure virtual machines (that is, SQL Server, domain controller, SCVMM server, and/or XenDesktop controllers) does not already exist in the production environment and the optional user data disk pack has been purchased, configure a CIFS file system on VNX to be used as an SMB share in which the infrastructure virtual machines reside. Repeat the configuration steps shown in Provisioning storage for CIFS Share (SMB variant only) to provision the optional storage, while taking into account the smaller number of drives.
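The optional user-data layout described earlier (a RAID 6 pool of 2 TB NL-SAS drives, carved into ten LUNs and exported as two CIFS file systems) can be summarized per scale point. This Python sketch is a planning aid that restates the drive counts and LUN capacities from the text.

```python
# Sketch of the optional user-data storage layout from the text:
# RAID 6 pool of 2 TB NL-SAS drives, ten dvol LUNs, two CIFS file systems.
USER_DATA_LAYOUT = {
    500:  {"nl_sas_drives": 16, "lun_tb": 1},
    1000: {"nl_sas_drives": 24, "lun_tb": 2},
    2000: {"nl_sas_drives": 48, "lun_tb": 4},
}
LUNS_PER_POOL = 10
CIFS_FILE_SYSTEMS = 2

def user_data_pool_capacity_tb(desktops):
    """Total capacity presented to the Data Mover as NAS-pool dvols."""
    return USER_DATA_LAYOUT[desktops]["lun_tb"] * LUNS_PER_POOL

assert user_data_pool_capacity_tb(2000) == 40
```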

Installing and configuring Microsoft Hyper-V hosts This section provides the requirements for the installation and configuration of the Windows hosts and infrastructure servers to support the architecture. Table 25 describes the required tasks. Table 25. Tasks for server installation Task Description Reference Install Windows hosts Install Windows Server 2012 on the physical servers for the solution. Install Hyper-V and configure Failover Clustering 1. Add the Hyper-V Server role. 2. Add the Failover Clustering feature. 3. Create and configure the Hyper-V cluster. Configure Windows hosts networking Configure Windows hosts networking, including NIC teaming and the Virtual Switch network. Install PowerPath on Windows servers Install and configure PowerPath to manage multipathing for VNX LUNs. PowerPath and PowerPath/VE for Windows Installation and Administration Guide Installing Windows hosts Follow Microsoft best practices to install Windows Server 2012 and the Hyper-V role on the physical servers for this solution. Installing Hyper-V and configuring failover clustering To install and configure Failover Clustering, complete the following steps: 1. On each Windows host, install Windows Server 2012 and patches. 2. Configure the Hyper-V role and the Failover Clustering feature. 3. Install the HBA drivers, or configure iSCSI initiators on each Windows host. For details, refer to the EMC Host Connectivity Guide for Windows. Table 25 provides the steps and references to accomplish the configuration tasks. Configuring Windows host networking To ensure performance and availability, the following network interface cards (NICs) are required: At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary). At least two 10 GbE NICs for the storage network. At least one NIC for Live Migration. Note: Enable jumbo frames for NICs that transfer SMB data. Set the MTU to 9,000. 
Consult the NIC vendor configuration guide for instructions.
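The rationale for jumbo frames is that per-frame processing overhead scales with frame count. The following Python sketch shows the approximate reduction in frames needed to move the same payload when the MTU is raised from 1,500 to 9,000 bytes; the 1 GiB payload is an arbitrary example.

```python
import math

# Sketch: enabling jumbo frames cuts the number of Ethernet frames
# (and hence per-frame processing work) needed to move a given payload.
def frames_needed(payload_bytes, mtu):
    """Frames required to carry payload_bytes at a given MTU."""
    return math.ceil(payload_bytes / mtu)

one_gib = 1024 ** 3
standard = frames_needed(one_gib, 1500)  # standard Ethernet frames
jumbo = frames_needed(one_gib, 9000)     # jumbo frames
assert standard / jumbo > 5.9            # roughly a 6x reduction
```

The same ratio holds for any large transfer, which is why the reduction translates directly into lower CPU overhead on the SMB storage path.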

Installing PowerPath on Windows servers Install PowerPath on the Windows servers to improve and enhance the performance and capabilities of the VNX storage array. For detailed installation steps, refer to the PowerPath and PowerPath/VE for Windows Installation and Administration Guide. Enabling jumbo frames A jumbo frame is an Ethernet frame with a payload greater than 1,500 bytes and up to 9,000 bytes. The payload limit is also known as the Maximum Transmission Unit (MTU). The generally accepted maximum size for a jumbo frame is 9,000 bytes. Processing overhead is proportional to the number of frames. Therefore, enabling jumbo frames reduces processing overhead by reducing the number of frames to be sent, which increases the network throughput. Jumbo frames should be enabled end-to-end, including on the network switches and the VNX interfaces. To enable jumbo frames on the VNX: 1. In Unisphere, go to Settings > Network > Settings for File. 2. Select the appropriate network interface under the Interfaces tab. 3. Select Properties. 4. Set the MTU size to 9,000. 5. Select OK to apply the changes. You might need to enable jumbo frames on each network switch. Consult your switch configuration guide for instructions. Planning virtual machine memory allocations Server capacity serves two purposes in the solution: Supports the new virtualized desktop infrastructure. Supports the required infrastructure services such as authentication/authorization, DNS, and database services. For information on minimum infrastructure service hosting requirements, refer to Table 5. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required. Configuring memory Take care to properly size and configure the server memory for this solution. This section provides an overview of memory management in a Hyper-V environment. 
Memory virtualization techniques enable the hypervisor to abstract physical host resources, using features such as Dynamic Memory, to provide resource isolation across multiple virtual machines and avoid resource exhaustion. With advanced processors (such as Intel processors with EPT support), this abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself. Many techniques are available within the hypervisor to maximize the use of system resources such as memory. However, do not substantially overcommit resources, because this can lead to poor system performance. The exact implications of memory overcommitment in a real-world environment are difficult to predict. Performance

degradation due to resource exhaustion increases with the amount of memory overcommitted. Installing and configuring SQL Server database This section describes how to set up and configure a SQL Server database for the solution. At the end of this section, you will have Microsoft SQL Server running on a virtual machine, with the databases required by Microsoft SCVMM, Citrix Provisioning Services, and Citrix XenDesktop configured for use. Table 26 identifies the tasks for the SQL Server database setup. Table 26. Tasks for SQL Server database setup Task Description Reference Create a virtual machine for Microsoft SQL Server Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements. Install Microsoft Windows on the virtual machine Install Microsoft Windows Server 2012 Standard Edition on the virtual machine created to host SQL Server. Install Microsoft SQL Server Install Microsoft SQL Server on the virtual machine designated for that purpose. Configure database for Microsoft SCVMM Create the database required for the SCVMM Server on the appropriate datastore. Configure XenDesktop database permissions Configure the database server with appropriate permissions for the XenDesktop installer. Database Access and Permissions for XenDesktop 7 Creating a virtual machine for Microsoft SQL Server Note: The customer environment might already contain a SQL Server designated for this role. In that case, refer to Configuring database for Microsoft SCVMM. The requirements for processor, memory, and operating system vary for different versions of SQL Server. The virtual machine should be created on one of the Hyper-V servers designated for infrastructure virtual machines, and it should use the CSV designated for the shared infrastructure. Installing Microsoft Windows on the virtual machine The SQL Server service must run on Microsoft Windows. 
Install Windows on the virtual machine and select the appropriate network, time, and authentication settings.

117 Chapter 5: VSPEX Configuration Guidelines Installing SQL Server Install SQL Server on the virtual machine from the SQL Server installation media. The Microsoft TechNet website provides information on how to install SQL Server. One of the components in the SQL Server installer is the SQL Server Management Studio (SSMS). You can install this component on the SQL server directly as well as on an administrator s console. Be sure to install SSMS on at least one system. In many implementations, you might want to store data files in locations other than the default path. To change the default path, right-click the server object in SSMS and select Database Properties. This action opens a properties interface from which you can change the default data and log directories for new databases created on the server. Note: For high availability, SQL Server can be installed in a Microsoft Failover Cluster. Configuring database for Microsoft SCVMM To use Microsoft SCVMM in this solution, you must create a database for the service to use. Note: Do not use the Microsoft SQL Server Express-based database option for this solution. It is a best practice to create individual login accounts for each service accessing a database on SQL Server. EMC VSPEX End-User Computing 117

Deploying System Center Virtual Machine Manager server This section provides information on how to configure SCVMM. Complete the tasks in Table 27. Table 27. Tasks for SCVMM configuration Task Description Reference Create the SCVMM host virtual machine Create a virtual machine for the SCVMM server. Install the SCVMM guest OS Install Windows Server 2012 Datacenter Edition on the SCVMM host virtual machine. Install the SCVMM server Install an SCVMM server. Install the SCVMM Management Console Install an SCVMM Management Console. Install the SCVMM agent locally on the hosts Install an SCVMM agent locally on the hosts SCVMM manages. Add a Hyper-V cluster into SCVMM Add the Hyper-V cluster into SCVMM. Add file share storage in SCVMM (file variant only) Add SMB file share storage to a Hyper-V cluster in SCVMM. Create a virtual machine in SCVMM Create a virtual machine in SCVMM. Create a template virtual machine Create a template virtual machine from the existing virtual machine. Create the hardware profile and Guest Operating System profile at this time. Deploy virtual machines from the template virtual machine Deploy the virtual machines from the template virtual machine. Creating a SCVMM host virtual machine To deploy the SCVMM server as a virtual machine on a Hyper-V server that is installed as part of this solution, connect directly to an infrastructure Hyper-V server by using Hyper-V Manager. Create a virtual machine on the Microsoft Hyper-V server with the customer guest OS configuration by using an infrastructure server datastore presented from the storage array. The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines SCVMM must manage. Installing the SCVMM guest OS Install the guest OS on the SCVMM host virtual machine. 
Install the required Windows Server version on the virtual machine and select the appropriate network, time, and authentication settings.

Installing the SCVMM server Set up the VMM database and the default library server, and then install the SCVMM server. Refer to the article, Installing the VMM Server, to install the SCVMM server. Installing the SCVMM Management Console The SCVMM Management Console is a client tool used to manage the SCVMM server. Install the VMM Management Console on the same computer as the VMM server. Refer to the article, Installing the VMM Administrator Console, to install the SCVMM Management Console. Installing the SCVMM agent locally on a host If the hosts must be managed on a perimeter network, install an SCVMM agent locally on the host before adding it to VMM. Optionally, install a VMM agent locally on a host in a domain before adding the host to VMM. Refer to the article, Installing a VMM Agent Locally, to install a VMM agent locally on a host. Adding a Hyper-V cluster into SCVMM Add the deployed Microsoft Hyper-V cluster to SCVMM, which manages the Hyper-V cluster. Refer to the article, How to Add a Host Cluster to VMM, to add the Hyper-V cluster. Adding file share storage to SCVMM (file variant only) To add file share storage to SCVMM, complete the following steps: 1. Open the VMs and Services workspace. 2. In the VMs and Services pane, right-click the Hyper-V cluster name. 3. Click Properties. 4. In the Properties window, click File Share Storage. 5. Click Add, and then add the file share storage to SCVMM. Creating a virtual machine in SCVMM Create a virtual machine in SCVMM to use as a virtual machine template. Install the virtual machine, then install the software, and change the Windows and application settings. Refer to How to Create a Virtual Machine with a Blank Virtual Hard Disk to create a virtual machine. Creating a template virtual machine Converting a virtual machine into a template removes the virtual machine. Back up the virtual machine, because it could be destroyed during template creation. 
Create a hardware profile and a Guest Operating System profile when creating a template. Use the profiles to deploy the virtual machines. Refer to How to Create a Template from a Virtual Machine to create the template.

Deploying virtual machines from the template virtual machine Refer to How to Deploy a Virtual Machine to deploy the virtual machines. The deployment wizard allows you to save the PowerShell scripts and reuse them to deploy other virtual machines with the same configuration. Installing and configuring XenDesktop controller This section provides information on how to set up and configure Citrix XenDesktop controllers for the solution. For a new installation of XenDesktop, Citrix recommends that you complete the tasks in Table 28 in the order shown. Table 28. Tasks for XenDesktop controller setup Task Description Reference Create virtual machines for XenDesktop controllers Create four virtual machines in Hyper-V. Two of the virtual machines are used as XenDesktop delivery controllers. Install the guest operating system for XenDesktop controllers and PVS servers Install the Windows Server 2008 R2 or Windows Server 2012 guest operating system. Install server-side components of XenDesktop Install XenDesktop server components on the first delivery controller. Install Citrix Studio Install Citrix Studio to manage the XenDesktop deployment remotely. Configure a site Configure a site in Citrix Studio. Add a second XenDesktop delivery controller Install an additional delivery controller for high availability. Prepare a master virtual machine Create a master virtual machine as the base image for the virtual desktops. 
Installing server-side components of XenDesktop Install the following server-side components of XenDesktop on the first controller: Delivery Controller Distributes applications and desktops, manages user access, and optimizes connections Citrix Studio Allows you to create, configure, and manage infrastructure components, applications, and desktops Citrix Director Enables you to monitor performance and troubleshoot problems License server Manages product licenses Citrix StoreFront Provides authentication and resource delivery services for Citrix Receiver Note: Citrix supports installation of XenDesktop components only through the procedures described in the Citrix documentation.

Configuring a site Start Citrix Studio and configure a site. For site configuration, do the following: 1. License the site and specify which edition of XenDesktop to use. 2. Set up the site database using a designated login credential for SQL Server. 3. Provide information about your virtual infrastructure, including the Microsoft SCVMM path that the controller will use to establish a connection to the Hyper-V infrastructure. Adding a second controller After you have configured a site, you can add a second controller to provide high availability. The server-side components of XenDesktop required for the second controller are: Delivery Controller Citrix Studio Citrix Director Citrix StoreFront Do not install the license server component on the second controller because licensing is centrally managed on the first controller. Installing Citrix Studio Install Citrix Studio on the appropriate administrator consoles to manage your XenDesktop deployment remotely. Preparing master virtual machine Optimize the master virtual machine to avoid unnecessary background services that generate extraneous I/O operations and adversely affect the overall performance of the storage array. Complete the following steps to prepare the master virtual machine: 1. Install the Windows 7 guest operating system. 2. Install the appropriate integration tools, such as Hyper-V Integration Services. 3. Optimize the operating system settings by referring to the Citrix Windows 7 Optimization Guide for Desktop Virtualization. 4. Install the Virtual Delivery Agent. 5. Install third-party tools or applications relevant to your environment, such as Microsoft Office.

Provisioning virtual desktops Complete the following steps to deploy virtual desktops using Machine Creation Services (MCS) in Citrix Studio: 1. Create a machine catalog using the master virtual machine as the base image. MCS allows the creation of a machine catalog that contains various types of desktops. The following desktop types were tested in this solution: Windows Desktop OS, Random: users connect to a new (random) desktop each time they log on. Windows Desktop OS, Personal vDisk: users connect to the same (static) desktop each time they log on, and changes are saved on a separate Personal vDisk. Windows Server OS: provides hosted shared desktops for a large-scale deployment of standardized machines. 2. Add the machines created in the catalog to a delivery group so that the virtual desktops are available to the end users.

Installing and configuring Provisioning Services (PVS only)

This section describes how to set up and configure Citrix Provisioning Services for the solution. For a new installation of Provisioning Services, Citrix recommends that you complete the tasks in Table 29 in the order shown.

Table 29. Tasks for Provisioning Services setup (Reference: Citrix website)
- Create virtual machines for PVS servers — Create two virtual machines in Hyper-V Server. These virtual machines are used as PVS servers.
- Install guest operating system for PVS servers — Install the Windows Server 2008 R2 or Windows Server 2012 guest operating system.
- Install server-side components of Provisioning Services — Install the PVS server components and console on the PVS server.
- Configure PVS server farm — Run the Provisioning Services Configuration Wizard to create a PVS server farm.
- Add a second PVS server — Install the PVS server components and console on the second server and join it to the existing server farm.
- Create a PVS store — Specify the store path where the vdisks will reside.
- Configure inbound communication — Adjust the total number of threads that will be used to communicate with each virtual desktop.
- Configure a bootstrap file — Update the bootstrap image to use both PVS servers to provide streaming services.
- Set up TFTP server on VNX — Copy the bootstrap image to the TFTP server hosted on VNX.
- Configure boot options 66 and 67 on DHCP server — Specify the TFTP server IP address and the name of the bootstrap image used for the Preboot Execution Environment (PXE) boot.
- Prepare a master virtual machine — Create a master virtual machine as the base image for the virtual desktops.
- Provision virtual desktops — Provision desktops using PVS.

Configuring a PVS server farm

After the PVS server components are installed on the PVS server, start the Provisioning Services Configuration Wizard and configure a new server farm using the following options:
1. Specify the DHCP service to be run on another computer.
2. Specify the PXE service to be run on another computer.
3. Select Create Farm to create a new PVS server farm using a designated SQL database instance.
4. When creating a new server farm, you must create a site. Provide an appropriate name for the new site and target device collection.
5. Select the license server that is running on the XenDesktop controller.
6. If you choose to run the TFTP service on VNX, do not use the TFTP service hosted on the PVS server: clear the option Use the Provisioning Services TFTP service.

Adding a second PVS server

After you have configured a PVS server farm, you can add a second PVS server to provide high availability. Install the PVS server components and console on the second PVS server and run the Provisioning Services Configuration Wizard to join the second server to the existing server farm.

Creating a PVS store

A PVS store is a logical container for vdisks. PVS supports the use of a CIFS share as the storage target of a store. When creating a PVS store, set the default store path to the universal naming convention (UNC) path of a CIFS share that is hosted on the VNX storage. In the Provisioning Services console, right-click a store, select Properties, and then select Validate to confirm that all PVS servers in the server farm can access the CIFS share.

Configuring inbound communication

Each PVS server maintains a range of User Datagram Protocol (UDP) ports to manage all inbound communications from virtual desktops. Ideally, you should dedicate one thread for each desktop session. The total number of threads supported by a PVS server is calculated as:

Total threads = Number of UDP ports * Threads per port * Number of network adapters

Adjust the thread count accordingly to match the number of deployed virtual desktops.
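The capacity formula can be checked with a short calculation. This is an illustrative sketch only: the port, thread, and adapter counts below are placeholder values, not PVS defaults, so substitute the actual settings from your farm.

```python
def pvs_stream_capacity(udp_ports: int, threads_per_port: int, nics: int) -> int:
    """Total concurrent streaming threads a PVS server supports, per the
    formula: UDP ports * threads per port * number of network adapters."""
    return udp_ports * threads_per_port * nics

# Illustrative values only -- read the real ones from your PVS server settings.
capacity = pvs_stream_capacity(udp_ports=20, threads_per_port=8, nics=4)
desktops = 500
print(f"capacity={capacity}, sufficient={capacity >= desktops}")
```

If the computed capacity falls below the number of deployed desktops, widen the UDP port range or raise the threads per port until one thread per desktop session is available.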

Configuring a bootstrap file

To update the bootstrap file required for the virtual desktops to PXE boot, complete the following steps:
1. In the Provisioning Services console, navigate to Farm > Sites > Site-name > Servers.
2. Right-click a server and select Configure Bootstrap.

Figure 43. Configure Bootstrap dialog box

3. In the Configure Bootstrap dialog box, update the bootstrap image to reflect the IP addresses used for all PVS servers that provide streaming services in a round-robin fashion. Select Read Servers from Database to obtain a list of PVS servers automatically, or select Add to add the server information manually.
4. After modifying the configuration, click OK to update the ARDBP32.BIN bootstrap file, which is located at C:\ProgramData\Citrix\Provisioning Services\Tftpboot.
5. Navigate to the folder and examine the timestamp of the bootstrap file to ensure that it is updated on the intended PVS server.

Setting up a TFTP server on VNX

In addition to serving NFS/CIFS, the VNX platform is also used as a TFTP server that provides the bootstrap image when the virtual desktops PXE boot. To configure the VNX TFTP server, complete the following steps:
1. Enable the TFTP service by using the following command syntax:
   server_tftp <movername> -service -start
2. Use the following command syntax to set the TFTP working directory and enable read/write access for file transfer:
   server_tftp <movername> -set path <pathname> -readaccess all -writeaccess all

3. Use a TFTP client to upload the ARDBP32.BIN bootstrap file from C:\ProgramData\Citrix\Provisioning Services\Tftpboot on the PVS server to the VNX TFTP server.
4. Use the following syntax to set the TFTP working directory access to read-only to prevent accidental modification of the bootstrap file:
   server_tftp <movername> -set path <pathname> -writeaccess none

Configuring boot options 66 and 67 on the DHCP server

To PXE boot the virtual desktops successfully from the bootstrap image supplied by the PVS servers, set boot options 066 and 067 on the DHCP server. Complete the following steps to configure the boot options on the Microsoft DHCP server:
1. From the DHCP management interface of the Microsoft DHCP server, right-click Scope Options, and then select Configure Options.
2. Select 066 Boot Server Host Name.
3. In String Value, type the IP address of the Data Mover configured as the TFTP server.
4. Similarly, select 067 Bootfile Name, and then type ARDBP32.BIN in the String value box. The ARDBP32.BIN bootstrap image is loaded on a virtual desktop before the vdisk image is streamed from the PVS servers.

Preparing the master virtual machine

Optimize the master virtual machine to avoid unnecessary background services that generate inessential I/O operations and adversely affect the overall performance of the storage array. Complete the following steps to prepare the master virtual machine:
1. Install the appropriate integration tools.
2. Optimize the operating system settings by referring to the Citrix Windows 7 Optimization Guide for Desktop Virtualization.
3. Install the Virtual Delivery Agent.
4. Install third-party tools or applications, such as Microsoft Office, relevant to your environment.
5. Install the PVS target device software on the master virtual machine.
6. Modify the BIOS of the master virtual machine so that the network adapter is at the top of the boot order to ensure PXE boot of the PVS bootstrap image.
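The server_tftp command sequence used above follows a fixed pattern: start the service, open the working directory for the one-time bootstrap upload, then lock it back down. The helper below assembles that sequence for review before you run it on the Control Station; the Data Mover name server_2 and the /tftpboot path are assumptions for illustration only.

```python
def vnx_tftp_commands(mover: str, path: str) -> list[str]:
    """Assemble the VNX Control Station server_tftp command sequence:
    start TFTP, enable read/write for the bootstrap upload, then set the
    working directory back to read-only."""
    return [
        f"server_tftp {mover} -service -start",
        f"server_tftp {mover} -set path {path} -readaccess all -writeaccess all",
        # Upload ARDBP32.BIN with any TFTP client between these two steps.
        f"server_tftp {mover} -set path {path} -writeaccess none",
    ]

# Assumed names for illustration: Data Mover "server_2", directory "/tftpboot".
for cmd in vnx_tftp_commands("server_2", "/tftpboot"):
    print(cmd)
```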

Provisioning the virtual desktops

Complete the following steps to deploy PVS-based virtual desktops:
1. Run the PVS Imaging Wizard to clone the master image onto a vdisk.
2. When the cloning is complete, shut down the master virtual machine and modify the following vdisk properties:
   - Access mode: Standard Image
   - Cache type: Cache on device hard drive
3. Prepare a virtual machine template to be used by the XenDesktop Setup Wizard in the next step.
4. Run the XenDesktop Setup Wizard in the PVS console to create a machine catalog that contains the specified number of virtual desktops.
5. Add the virtual desktops created in the catalog to a delivery group so that the virtual desktops are available to the end users.

Setting up EMC Avamar

This section describes the installation and configuration of Avamar required to support in-guest backup of user files. There are other methods for backing up user files with Avamar; however, this method provides end-user restore capabilities using a common GUI. For this configuration, we assume that only a user's files and profile are being backed up. Table 30 describes the tasks you must complete.

Note: Regular backups of the data center infrastructure components required by Citrix XenDesktop virtual desktops should supplement the backups produced by the procedure described here. A full disaster recovery plan requires the ability to restore the Citrix XenDesktop end-user computing infrastructure as well as the ability to restore Citrix XenDesktop desktop user data and files.

Table 30. Tasks for Avamar integration

Microsoft Active Directory preparation
- GPO additions for EMC Avamar — Create and configure a Group Policy Object (GPO) to enable EMC Avamar backups of user files and profiles.

Citrix XenDesktop (master) image preparation
- Master image preparation for EMC Avamar — Install and configure the EMC Avamar Client to run in user mode.

EMC Avamar preparation (References: EMC Avamar 7.0 Administrator Guide; EMC Avamar 7.0 Operational Best Practices)
- Defining datasets — Create and configure EMC Avamar datasets to support user files and profiles.
- Defining schedules — Create and configure the EMC Avamar backup schedule to support virtual desktop backups.
- Adjusting the maintenance window schedule — Modify the maintenance window schedule to support virtual desktop backups.
- Defining retention policies — Create and configure the EMC Avamar retention policy.
- Group and group policy creation — Create and configure the EMC Avamar group and group policy.

Post desktop deployment (Reference: EMC Avamar 7.0 Administrator Guide)
- Activate clients (desktops) — Activate Citrix XenDesktop virtual desktops using EMC Avamar Enterprise Manager.

GPO additions for EMC Avamar

You must use mapped drives to reduce the management burden and because of current EMC Avamar limitations (such as no support for client-side variables like %username%). Configure Windows Folder Redirection to create the UNC paths required for the mapped drives. You must create a new GPO.

Folder redirection

To configure Windows Folder Redirection:
1. Edit the GPO by navigating to User Configuration > Policies > Windows Settings > Folder Redirection.
2. Right-click Documents.
3. Select Properties.
4. In the Settings list, select Basic - Redirect everyone's folder to the same location.
5. In the Root Path box, type \\CIFS_server\folder, as shown in Figure 44, and then click OK.

Figure 44. Configuring Windows Folder Redirection

Mapped drives

Create two mapped drive configurations: one for the user's files and one for the user's profile. Repeat the following procedure twice, changing three variables each time (Location, Label as, and Drive Letter used), to create the two mapped drives.

To configure drive mappings:
1. Edit the GPO, and then navigate to User Configuration > Preferences > Windows Settings > Drive Maps.
2. Right-click the blank/white area in the right pane.
3. In the context menu, select New > Mapped Drive, as shown in Figure 45.

Figure 45. Create a Windows network drive mapping for user files

4. In the mapped drive properties dialog box, set the following items, as shown in Figure 46, to create the User_Files mapped drive:
   a. In the Action list, select Create.
   b. In Location, type \\cifs_server\folder\%username%.
   c. Select Reconnect.
   d. In Label as, type User_Files.
   e. In the Drive Letter box, select Use, and then select U.
   f. In Hide/Show this drive, select Hide this drive.

Figure 46. Configure drive mapping settings

5. At the top of the Properties window, select the Common tab, and then select Run in logged-on user's security context (user policy option).

Figure 47. Configure drive mapping common settings

Repeat steps 1 through 5 to create a User_Profile mapped drive using the following variables:
   a. In Location, type \\cifs_server\folder\%username%.domain.V2, where domain is the Active Directory domain name.
   b. In Label as, type User_Profile.
   c. In the Drive Letter box, select Use, and then select P.

Figure 48 shows a sample configuration.

Figure 48. Create a Windows network drive mapping for user profile data

6. Close the Group Policy Editor to ensure that the changes are saved.
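The two drive-map targets differ only in the Windows 7 roaming-profile suffix appended to the user's folder. The sketch below builds both UNC paths so you can verify them before entering them in the GPO; the server, share, user, and domain names are hypothetical placeholders.

```python
def mapped_drive_paths(cifs_server: str, share: str, username: str, domain: str) -> dict:
    """Build the two GPO drive-map targets described above: the user's
    files folder (U:) and the Windows 7 .V2 roaming profile folder (P:)."""
    base = f"\\\\{cifs_server}\\{share}\\{username}"
    return {
        "U": base,                    # User_Files mapped drive
        "P": f"{base}.{domain}.V2",   # User_Profile mapped drive
    }

# Hypothetical names for illustration only.
paths = mapped_drive_paths("cifs_server", "folder", "jsmith", "example")
print(paths["U"])
print(paths["P"])
```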

132 Chapter 5: VSPEX Configuration Guidelines Figure 49 shows a sample configuration. Figure 48. Create a Windows network drive mapping for user profile data 6. Close the Group Policy Editor to ensure that the changes are saved. Preparing the master image for EMC Avamar This section provides information about using the Avamar Client for Windows to provide backup and restore support for Citrix XenDesktop virtual desktops that store user-generated files in EMC VNX home directories. The Avamar Client for Windows installs and runs as a Windows service called Backup Agent. Backup and restore capabilities are provided by this server service. Windows security limits the access of services logged on using the Local System account to local resources only. In its default configuration, the Backup Agent uses the Local System account to log on. It cannot access network resources, including the Citrix XenDesktop user profiles or data file shares. To access Citrix XenDesktop user profiles and data file shares, the Backup Agent must run as the currently logged on user. You can accomplish this by using a batch file that starts Backup Agent and logs it on as a user when the user logs in. Note: The commands in this batch file assume that the drive letter of the user data disk for the redirected Avamar Client for Windows var directory is D. When a different drive letter is assigned, replace D in all instances of D:\ with the correct letter. Redirection of the var directory is described in Re-direct the Avamar Client for Windows var directory. Replace D with P using the configuration steps in Mapped drives. Modify the vardir path value within the avamar.cmd file located in C:\Program Files\avs\var to --vardir=p:\avs\var. 132 EMC VSPEX End-User Computing

Defining datasets

The following sections assume that the Avamar grid is up and functional and that you have logged in to Avamar Administrator. Refer to the EMC Avamar 7.0 Administration Guide for information on accessing Avamar Administrator.

Avamar datasets are lists of directories and files to back up from a client. Assigning a dataset to a client or group enables you to save backup selections. Refer to the EMC Avamar 7.0 Administration Guide for additional information about datasets.

This section provides procedures to configure the Citrix XenDesktop virtual desktop datasets that are required to ensure successful backups of user files and user profiles. Create two datasets, one for the user files and one for the user profile, using the following procedures.

Creating the User Files dataset

1. In Avamar Administrator, select Tools > Manage Datasets.

Figure 49. Avamar tools menu

2. In the Manage All Datasets window, click New.

Figure 50. Avamar Manage All Datasets dialog box

The New Dataset window appears.

Figure 51. Avamar New Dataset dialog box

3. Select each plug-in and click remove (-) to remove all plug-ins from the list.
4. In the Name field, type View-User-Files.
5. Select Enter Explicitly.
6. In the Select Plug-in Type list, select Windows File System.
7. In Select Files and/or Folders, type U:\, and then click add (+).

Figure 52. Configure Avamar Dataset settings

8. Click OK to save the dataset.

Creating the User Profile dataset

To create a new dataset for user profile data, complete the following steps:
1. Complete the steps in Creating the User Files dataset using the following values:
   - Name: View-User-Profile
   - Select Files and/or Folders: P:\

Figure 53. User Profile data dataset

2. Select the Exclusions tab.
3. In the Select Plug-in Type list, select Windows File System.
4. In Select Files and/or Folders, type P:\avs, and then click add (+).

Figure 54. User Profile data dataset Exclusion settings

5. Click the Options tab.

6. In the Select Plug-in Type list, select Windows File System.
7. Select Show Advanced Options.

Figure 55. User Profile data dataset Options settings

8. Scroll down the list of options until you locate the Volume Freezing Options section.
9. In the Method to freeze volumes list, select None.
10. Click OK to save the dataset.

Figure 56. User Profile data dataset Advanced Options settings

Defining schedules

Avamar schedules are reusable objects that control when group backups and custom notifications occur. Define a recurring schedule that satisfies your recovery point objectives (RPOs). Refer to the EMC Avamar 7.0 Administration Guide for additional information about schedules.

Adjusting the maintenance window schedule

Avamar server maintenance includes three essential activities:
- Checkpoint: a snapshot of the Avamar server taken for the express purpose of facilitating server rollbacks.
- Checkpoint validation: an internal operation that validates the integrity of a specific checkpoint. Once a checkpoint passes validation, it can be considered reliable enough to be used for a server rollback.
- Garbage collection: an internal operation that recovers storage space from deleted or expired backups.

Each 24-hour day is divided into three operational windows during which various system activities are performed:
- Backup window
- Blackout window
- Maintenance window

Figure 57 illustrates the default Avamar backup, blackout, and maintenance windows.

Figure 57. Avamar default Backup/Maintenance Windows schedule

The backup window is the portion of each day reserved for performing normal scheduled backups. No maintenance activities are performed during the backup window.

The blackout window is the portion of each day reserved for server maintenance activities, primarily garbage collection, that require unrestricted access to the server. No backup or administrative activities are allowed during the blackout window; however, you can perform restores.

The maintenance window is the portion of each day reserved for routine server maintenance activities, primarily checkpoint creation and validation.

User files and profile data should not be backed up during the day while the users are logged on to their virtual desktops. Adjust the backup window start time to prevent backups from occurring during that time. Figure 58 illustrates modified backup, blackout, and maintenance windows for backing up Citrix XenDesktop virtual desktops.

Figure 58. Avamar modified Backup/Maintenance Windows schedule

To adjust the schedule as shown, change the Backup Window Start Time from 8:00 PM to 8:00 AM, and then click OK to save the changes. Refer to the EMC Avamar 7.0 Administration Guide for additional information about Avamar server maintenance activities.

Defining retention policies

Avamar backup retention policies enable you to specify how long to keep a backup in the system. A retention policy is assigned to each backup when the backup occurs. Specify a custom retention policy to perform an on-demand backup, or create a retention policy that is assigned automatically to a group of clients during a scheduled backup. When the retention period for a backup expires, the backup is automatically marked for deletion. The deletion occurs in batches during times of low system activity.

Refer to the EMC Avamar 7.0 Administration Guide for additional information on defining retention policies.

Creating groups and group policy

Avamar uses groups to implement various policies to automate backups and enforce consistent rules and system behavior across an entire segment, or group, of the user community. Group members are client machines that have been added to a particular group to perform scheduled backups. In addition to specifying which clients belong to a group, groups also specify:
- Datasets
- Schedules
- Retention policies

These three objects comprise the group policy. Group policy controls backup behavior for all members of the group unless you override these settings at the client level. Refer to the EMC Avamar 7.0 Administration Guide for additional information about groups and group policies.

This section provides the group configuration information required to ensure proper backups of user files and user profiles. Create two groups and their respective group policies: one for the user files and one for the user profile. Repeat the following procedure twice, changing two variables each time (Name and Dataset used).

Creating the User File group

1. From the Actions menu, select New Group.

Figure 59. Create new Avamar backup group

The New Group dialog box appears.

Figure 60. New backup group settings

2. In Name, type View_User_Data.
3. Ensure that Disabled is cleared.
4. Click Next.
5. In the Select An Existing Dataset list, select the View-User-Files dataset created earlier.

Figure 61. Select backup group dataset

6. Click Next.

7. In the Select An Existing Schedule list, select a schedule.

Figure 62. Select backup group schedule

8. Click Next.
9. In the Select An Existing Retention Policy list, select a retention policy.

Figure 63. Select backup group retention policy

10. Click Finish.

Note: If you click Next, you can select the clients to be added to the group. This step is unnecessary because clients are added to the group during activation.

EMC Avamar Enterprise Manager: activating clients

Avamar Enterprise Manager is a web-based, multi-system management console application that provides centralized Avamar system administration capabilities, including the ability to add and activate Avamar clients all at once. This section assumes that you know how to log in to Avamar Enterprise Manager (EM) and that the Citrix XenDesktop desktops have been created. After you log in to Avamar Enterprise Manager, the dashboard appears.

Figure 64. Avamar Enterprise Manager

1. Click Client Manager.
2. In the EMC Avamar Client Manager window, click Activate.

Figure 65. Avamar Client Manager

3. Click the Client Information list arrow.

Figure 66. Avamar Activate Client dialog box

4. From the Client Information list, select Directory Service.

Figure 67. Avamar Activate Client menu

5. In the Directory Service dialog box, enter the required user credentials, as shown in Figure 68. This step assumes that an Active Directory service has been configured in Avamar; refer to the EMC Avamar 7.0 Administration Guide for additional information about enabling LDAP management.
   a. In the User Domain list, select a directory service domain.
   b. In User Name and Password, type the user name and password required for directory service authentication.
   c. In Directory Domain, select a directory domain to query for client information, and then click OK.

Figure 68. Avamar Directory Service configuration

The Active Directory information appears in the left pane of the EMC Avamar Client Manager window.

Figure 69. Avamar Client Manager post configuration

6. In the Client Information directory tree, locate the Citrix XenDesktop virtual desktops.

In this example, an OU named VSPEX was created.

Figure 70. Avamar Client Manager virtual desktop clients

7. Select the virtual machine desktops that you want to add to the Avamar server. Figure 71 shows a selected list in the Client Information pane and the target domain in the Server Information pane.

Figure 71. Avamar Client Manager select virtual desktop clients

8. Drag and drop the selected list to an existing Avamar domain in the Server Information pane.

The Select Groups window appears.

Figure 72. Select Avamar groups

9. Under Group Name, select the groups to which you want to add these desktops, and then click Add. The EMC Avamar Client Manager window reappears.
10. Click the Avamar domain to which you just added the XenDesktop desktops, and then click Activate.

Figure 73. Activate Avamar clients

The Show Clients for Activation window appears.

11. Click Commit.

Figure 74. Commit Avamar client activation

An alert appears, indicating that the client activation will be performed as a background process.

12. Click OK.

Figure 75. Avamar client activation informational prompt one

A second alert indicates that the activation process has been initiated and that you should check the logs for status.

13. Click OK.

Figure 76. Avamar client activation informational prompt two

The EMC Avamar Client Manager window reappears and displays the activated clients.

Figure 77. Avamar Client Manager activated clients

14. Log out of EMC Avamar Enterprise Manager.

Summary

This chapter presented the steps required to deploy and configure the various aspects of the VSPEX solution, including both the physical and logical components. At this point, you should have a fully functional VSPEX solution. The following chapter covers post-installation and validation activities.

Chapter 6: Validating the Solution

This chapter presents the following topics:
- Overview
- Post-installation checklist
- Deploying and testing a single virtual desktop
- Verifying the redundancy of the solution components

Overview

This section provides a list of items to review after the solution has been configured. The goal is to verify the configuration and functionality of specific aspects of the solution and to ensure that the configuration supports core availability requirements. Table 31 describes the tasks that must be completed.

Table 31. Tasks for testing the installation

- Post-install checklist — Verify that sufficient virtual ports exist on each Hyper-V host virtual switch, that each Hyper-V host has access to the required datastores and VLANs, and that interfaces are configured correctly on all Microsoft Windows Hyper-V hosts. (References, truncated in this copy: vinmcshera/archive/2011/03/27/.aspx; com/wiki/contents/articles/151.hyper-v-virtual-networking-survival-guide-en-us.aspx; nts/teched/northamerica/2012/VIR310)
- Deploy and test a single virtual server — Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. (Reference: steps shown below)
- Verify redundancy of the solution components — Perform a reboot of each storage processor in turn and ensure that LUN connectivity is maintained. Disable each of the redundant switches in turn and verify that Hyper-V host, virtual machine, and storage array connectivity remains intact. On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. (References, truncated in this copy: vendor's documentation; n-us/library/gg aspx)

Post-installation checklist

Before deployment into production, verify the following configuration items on each Windows Server because they are critical to the solution functionality:
- The VLAN for virtual machine networking is configured correctly.
- The storage networking is configured correctly.
- Each server can access the required Cluster Shared Volumes/Hyper-V SMB shares.
- A network interface is configured correctly for Live Migration.

Deploying and testing a single virtual desktop

To verify the operation of the solution, deploy a virtual machine and verify that the procedure completes as expected. Verify that the virtual machine joins the applicable domain, has access to the expected networks, and can be logged in to.

Verifying the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failure.

Perform a restart of each VNX storage processor in turn and verify that connectivity to Hyper-V storage is maintained throughout each restart, as follows:
1. Log on to the Control Station with administrator rights.
2. Navigate to /nas/sbin.
3. Restart SP A by using the command ./navicli spa rebootsp.
4. During the restart cycle, check for the presence of storage on the Hyper-V hosts.
5. When the cycle completes, restart SP B: ./navicli spb rebootsp.

Perform a failover of each VNX Data Mover in turn and verify that connectivity to Hyper-V storage is maintained and that connections to CIFS file systems are re-established. For simplicity, use the following approach for each Data Mover (a restart can also be performed through the Unisphere interface): from the Control Station $ prompt, use the command server_cpu <movername> -reboot, where <movername> is the name of the Data Mover.

To verify that network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each switching infrastructure is disabled, verify that all components of the solution maintain connectivity to each other and to any existing client infrastructure.

To verify that high-availability features function as expected for the XenDesktop delivery controllers, StoreFront servers, and Provisioning Services servers, disable each of the redundant servers in turn and verify that the virtual desktops remain accessible.

On a Hyper-V host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.
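During each failure test above, storage connectivity must be confirmed repeatedly, not just once, for the duration of the SP reboot or Data Mover failover. The sketch below shows one generic way to poll a connectivity probe; the probe itself (for example, checking that a CSV path is reachable) is a placeholder you would supply for your environment.

```python
import time

def connectivity_maintained(check, duration_s: float = 300.0, interval_s: float = 10.0) -> bool:
    """Poll a connectivity probe for the duration of a failure test
    (e.g. an SP reboot). Returns True only if every probe succeeded,
    i.e. connectivity was maintained throughout the window."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        if not check():          # probe failed: connectivity was lost
            return False
        time.sleep(interval_s)
    return True

# Placeholder probe for illustration; replace with a real reachability check.
ok = connectivity_maintained(lambda: True, duration_s=0.1, interval_s=0.02)
print("connectivity maintained:", ok)
```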

Appendix A: Bills of Materials

This appendix presents the following topics:
- Bill of materials for 500 virtual desktops
- Bill of materials for 1,000 virtual desktops
- Bill of materials for 2,000 virtual desktops

Bill of materials for 500 virtual desktops

Table 32. List of components used in the VSPEX solution for 500 virtual desktops

Microsoft Hyper-V servers
- CPU:
  - Desktop OS: 1 vCPU per virtual desktop (8 virtual desktops per physical core); 500 vCPUs; minimum of 63 physical cores.
  - Server OS: 0.2 vCPU per virtual desktop (5 virtual desktops per physical core); 100 vCPUs; minimum of 100 physical cores.
- Memory:
  - Desktop OS: 2 GB RAM per desktop; minimum of 1 TB RAM.
  - Server OS: 0.6 GB RAM per desktop; minimum of 300 GB RAM.
- Network (FC option): 2 x 4/8 Gb FC HBAs per server.
- Network (1 Gb option): 6 x 1 GbE NICs per server.

Note: To implement Microsoft Cluster Services functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure
- Fibre Channel: 2 x physical switches; 2 x 1 GbE ports per Hyper-V server; 4 x 4/8 Gb FC ports for the VNX back end (two per SP); 2 x 4/8 Gb FC ports per Hyper-V server.
- 1 Gb network: 2 x physical switches; 1 x 1 GbE port per Control Station for management; 6 x 1 GbE ports per Hyper-V server.
- 10 Gb network: 2 x physical switches; 1 x 1 GbE port per Control Station for management; 2 x 10 GbE ports per Data Mover for data.

Note: When choosing the Fibre Channel option for storage, you still must choose one of the IP network options to have full connectivity.

EMC next-generation backup (Avamar)
- 1 x Gen4 utility node
- 1 x Gen4 3.9 TB spare node
- 3 x Gen4 3.9 TB storage nodes

EMC VNX series storage array
- Common: EMC VNX with Data Movers (active/standby); 600 GB, 15k rpm, 3.5-inch SAS drives for core desktops (drive count depends on the desktop type: PvD, non-PvD, or HSD, provisioned with PVS or MCS); 100 GB, 3.5-inch flash drives for FAST Cache; 17 x 2 TB, 3.5-inch NL-SAS drives for user data (optional)
- FC option: 2 x 8 Gb FC ports per storage processor
- 1 Gb network option: 4 x 1 Gb I/O modules for each Data Mover (each module includes four ports)
- 10 Gb network option: 2 x 10 Gb I/O modules for each Data Mover (each module includes two ports)

Bill of materials for 1,000 virtual desktops

Table 33. List of components used in the VSPEX solution for 1,000 virtual desktops

Microsoft Hyper-V servers
- CPU:
  - Desktop OS: 1 x vCPU per virtual desktop, 8 x virtual desktops per physical core; 1,000 x vCPUs; minimum of 125 physical cores
  - Server OS: 0.2 x vCPU per virtual desktop, 5 x virtual desktops per physical core; 200 x vCPUs; minimum of 200 physical cores
- Memory:
  - Desktop OS: 2 GB RAM per desktop; minimum of 2 TB RAM
  - Server OS: 0.6 GB RAM per desktop; minimum of 600 GB RAM
- Network (FC option): 2 x 4/8 Gb FC HBAs per server
- Network (1 Gb option): 6 x 1 GbE NICs per blade chassis
- Network (10 Gb option): 3 x 10 GbE NICs per blade chassis

Note: To implement Microsoft Cluster Services functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure
- Fibre Channel option: 2 x physical switches; 2 x 1 GbE ports per Hyper-V server; 4 x 4/8 Gb FC ports for the VNX back end (two per SP); 2 x 4/8 Gb FC ports per Hyper-V server

- 1 Gb network option: 2 x physical switches; 1 x 1 GbE port per Control Station for management; 6 x 1 GbE ports per Hyper-V server; 2 x 10 GbE ports per Data Mover for data
- 10 Gb network option: 2 x physical switches; 1 x 1 GbE port per Control Station for management; 3 x 10 GbE ports per blade chassis; 2 x 10 GbE ports per Data Mover for data

Note: When choosing the Fibre Channel option for storage, you still must choose one of the IP network options to have full connectivity.

EMC next-generation backup (Avamar)
- 1 x Gen4 utility node
- 1 x Gen4 3.9 TB spare node
- 3 x Gen4 3.9 TB storage nodes

EMC VNX series storage array
- Common: EMC VNX with Data Movers (active/standby); 600 GB, 15k rpm, 3.5-inch SAS drives for core desktops (drive count depends on the desktop type: PvD, non-PvD, or HSD, provisioned with PVS or MCS); 100 GB, 3.5-inch flash drives for FAST Cache; 25 x 2 TB, 3.5-inch NL-SAS drives for user data (optional)
- FC option: 2 x 8 Gb FC ports per storage processor
- 1 Gb network option: 4 x 1 Gb I/O modules for each Data Mover (each module includes four ports)
- 10 Gb network option: 2 x 10 Gb I/O modules for each Data Mover (each module includes two ports)

Bill of materials for 2,000 virtual desktops

Table 34. List of components used in the VSPEX solution for 2,000 virtual desktops

Microsoft Hyper-V servers
- CPU:
  - Desktop OS: 1 x vCPU per virtual desktop, 8 x virtual desktops per physical core; 2,000 x vCPUs; minimum of 250 physical cores
  - Server OS: 0.2 x vCPU per virtual desktop, 5 x virtual desktops per physical core; 400 x vCPUs; minimum of 400 physical cores
- Memory:
  - Desktop OS: 2 GB RAM per desktop; minimum of 4 TB RAM
  - Server OS: 0.6 GB RAM per desktop; minimum of 1.2 TB RAM
- Network (FC option): 2 x 4/8 Gb FC HBAs per server
- Network (1 Gb option): 6 x 1 GbE NICs per server
- Network (10 Gb option): 3 x 10 GbE NICs per blade chassis

Note: To implement Microsoft Cluster Services functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure
- Fibre Channel option: 2 x physical switches; 2 x 1 GbE ports per Hyper-V server; 4 x 4/8 Gb FC ports for the VNX back end (two per SP); 2 x 4/8 Gb FC ports per Hyper-V server

- 1 Gb network option: 2 x physical switches; 1 x 1 GbE port per Control Station for management; 6 x 1 GbE ports per Hyper-V server; 2 x 10 GbE ports per Data Mover for data
- 10 Gb network option: 2 x physical switches; 1 x 1 GbE port per Control Station for management; 3 x 10 GbE ports per blade chassis; 2 x 10 GbE ports per Data Mover for data

Note: When choosing the Fibre Channel option for storage, you must still choose one of the IP network options to have full connectivity.

EMC next-generation backup (Avamar)
- 1 x Gen4 utility node
- 1 x Gen4 3.9 TB spare node
- 3 x Gen4 3.9 TB storage nodes

EMC VNX series storage array
- Common: EMC VNX with Data Movers (active/standby); 600 GB, 15k rpm, 3.5-inch SAS drives for core desktops (drive count depends on the desktop type: PvD, non-PvD, or HSD, provisioned with PVS or MCS); 100 GB, 3.5-inch flash drives for FAST Cache; 50 x 2 TB, 3.5-inch NL-SAS drives for user data (optional)
- FC option: 2 x 8 Gb FC ports per storage processor
- 1 Gb network option: 4 x 1 Gb I/O modules for each Data Mover (each module includes four ports)
- 10 Gb network option: 2 x 10 Gb I/O modules for each Data Mover (each module includes two ports)

Appendix B Customer Configuration Data Sheet

This appendix presents the following topics:

Customer configuration data sheets

Customer configuration data sheets

Before you start the configuration, gather customer-specific network and host configuration information. The following tables help you assemble the required network and host addresses, numbering, and naming information. The completed worksheet can also serve as a leave-behind document for future reference. Cross-reference it with the VNX File and Unified Worksheet to confirm customer information.

Table 35. Common server information
Record the server name and primary IP for each of the following roles:
- Domain Controller
- DNS Primary
- DNS Secondary
- DHCP
- NTP
- SMTP
- SNMP
- SCVMM Console
- XenDesktop Console
- Provisioning Services Console
- SQL Server

Table 36. Hyper-V server information
Record the server name, purpose, primary IP, and private (storage) network addresses for each host:
- Hyper-V Host 1
- Hyper-V Host 2
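Teams that automate their deployment checks sometimes capture the same worksheet in machine-readable form so that missing entries can be flagged before installation begins. The following Python sketch is purely illustrative; the role list mirrors Table 35, while the sample name and IP are invented values.

```python
from dataclasses import dataclass

@dataclass
class ServerEntry:
    purpose: str
    name: str = ""        # filled in during planning
    primary_ip: str = ""  # filled in during planning

# Roles from the common server information worksheet (Table 35)
COMMON_ROLES = [
    "Domain Controller", "DNS Primary", "DNS Secondary", "DHCP",
    "NTP", "SMTP", "SNMP", "SCVMM Console", "XenDesktop Console",
    "Provisioning Services Console", "SQL Server",
]

worksheet = {role: ServerEntry(purpose=role) for role in COMMON_ROLES}

def incomplete(sheet: dict) -> list:
    """Return the roles that still need a server name or IP."""
    return [r for r, e in sheet.items() if not (e.name and e.primary_ip)]

# Example values only; not taken from the guide.
worksheet["NTP"].name = "ntp01"
worksheet["NTP"].primary_ip = "10.0.0.10"
```

Running `incomplete(worksheet)` at this point lists every role except NTP, giving a quick readiness check against the data sheet.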

Table 37. Array information
Record the following values for the array:
- Array name
- Admin account
- Management IP
- Storage pool name
- Datastore name
- CIFS server IP

Table 38. Network infrastructure information
Record the name, purpose, IP address, subnet mask, and default gateway for:
- Ethernet Switch 1
- Ethernet Switch 2

Table 39. VLAN information
Record the name, purpose, VLAN ID, and allowed subnets for each network:
- Virtual Machine Networking
- Hyper-V Management
- CIFS Storage Network
- Live Migration

Table 40. Service accounts
Record the account name and password (optional; secure appropriately) for each of the following purposes:
- Windows Server administrator
- Hyper-V administrator
- Array root
- Array administrator
- SCVMM administrator
- XenDesktop administrator
- SQL Server administrator

Appendix C References

This appendix presents the following topics:

References

References

The following references provide additional and relevant information.

EMC documentation

The following documents are located on EMC Online Support. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative:

- EMC Infrastructure for Citrix XenDesktop 7: EMC VNX Series (NFS and FC), Citrix XenDesktop 7, VMware vSphere 5.1 Reference Architecture
- EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5 Proven Solution Guide
- EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (NFS), VMware vSphere 4.1, and Citrix XenDesktop 5 Proven Solution Guide
- EMC Performance Optimization for Microsoft Windows XP for the Virtual Desktop Infrastructure Applied Best Practices
- EMC VNX Unified Best Practices for Performance Applied Best Practices Guide
- VNX FAST Cache: A Detailed Review
- Sizing EMC VNX Series for VDI Workload
- EMC Infrastructure for Citrix XenDesktop 5.6: EMC VNX Series (NFS), VMware vSphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Reference Architecture
- EMC Infrastructure for Citrix XenDesktop 5.6: EMC VNX Series (NFS), VMware vSphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide
- EMC Infrastructure for Citrix XenDesktop 5.5 (PVS): EMC VNX Series (NFS), Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6 Reference Architecture
- EMC Infrastructure for Citrix XenDesktop 5.5 (PVS): EMC VNX Series (NFS), Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6 Proven Solution Guide
- EMC Infrastructure for Citrix XenDesktop 5.5: EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6 Reference Architecture
- EMC Infrastructure for Citrix XenDesktop 5.5: EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6 Proven Solution Guide

Other documentation

For Citrix or Microsoft documentation, refer to the Citrix and Microsoft websites.

Appendix D About VSPEX

This appendix presents the following topics:

About VSPEX

About VSPEX

EMC has joined forces with the industry's leading providers of IT infrastructure to create a complete virtualization solution that accelerates deployment of cloud infrastructure. Built with best-in-class technologies, VSPEX enables faster deployment, more simplicity, greater choice, higher efficiency, and lower risk. Validation by EMC ensures predictable performance and enables customers to select technology that uses their existing IT infrastructure, while eliminating planning, sizing, and configuration burdens. VSPEX provides a proven infrastructure for customers who want the simplicity characteristic of truly converged infrastructures along with more choice in individual stack components.

VSPEX solutions are proven by EMC and packaged and sold exclusively by EMC channel partners. For channel partners, VSPEX provides more opportunity, a faster sales cycle, and end-to-end enablement. By working even more closely together, EMC and its channel partners can deliver infrastructures that accelerate the journey to the cloud for even more customers.


More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

Answering the Requirements of Flash-Based SSDs in the Virtualized Data Center

Answering the Requirements of Flash-Based SSDs in the Virtualized Data Center White Paper Answering the Requirements of Flash-Based SSDs in the Virtualized Data Center Provide accelerated data access and an immediate performance boost of businesscritical applications with caching

More information

Infortrend EonNAS 3000 and 5000: Key System Features

Infortrend EonNAS 3000 and 5000: Key System Features Infortrend EonNAS 3000 and 5000: Key System Features White paper Abstract This document introduces Infortrend s EonNAS 3000 and 5000 systems and analyzes key features available on these systems. Table

More information

REDEFINE SIMPLICITY TOP REASONS: EMC VSPEX BLUE FOR VIRTUALIZED ENVIRONMENTS

REDEFINE SIMPLICITY TOP REASONS: EMC VSPEX BLUE FOR VIRTUALIZED ENVIRONMENTS REDEFINE SIMPLICITY AGILE. SCALABLE. TRUSTED. TOP REASONS: EMC VSPEX BLUE FOR VIRTUALIZED ENVIRONMENTS Redefine Simplicity: Agile, Scalable and Trusted. Mid-market and Enterprise customers as well as Managed

More information

EMC VNX-F ALL FLASH ARRAY

EMC VNX-F ALL FLASH ARRAY EMC VNX-F ALL FLASH ARRAY Purpose-built for price, density & speed ESSENTIALS Incredible scale & density with up to 172 TB usable flash capacity in 6U @ 28.63 TB/U Consistent high performance up to 400K

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

Citrix XenDesktop Modular Reference Architecture Version 2.0. Prepared by: Worldwide Consulting Solutions

Citrix XenDesktop Modular Reference Architecture Version 2.0. Prepared by: Worldwide Consulting Solutions Citrix XenDesktop Modular Reference Architecture Version 2.0 Prepared by: Worldwide Consulting Solutions TABLE OF CONTENTS Overview... 2 Conceptual Architecture... 3 Design Planning... 9 Design Examples...

More information

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER STORAGE CENTER DATASHEET STORAGE CENTER Go Beyond the Boundaries of Traditional Storage Systems Today s storage vendors promise to reduce the amount of time and money companies spend on storage but instead

More information

MICROSOFT SHAREPOINT SERVER: BEST PRACTICES AND DESIGN GUIDELINES FOR EMC STORAGE

MICROSOFT SHAREPOINT SERVER: BEST PRACTICES AND DESIGN GUIDELINES FOR EMC STORAGE White Paper MICROSOFT SHAREPOINT SERVER: BEST PRACTICES AND DESIGN GUIDELINES FOR EMC STORAGE EMC VNX Family, EMC Symmetrix VMAX Systems, and EMC Xtrem Server Products Design and sizing best practices

More information

How To Backup With Ec Avamar

How To Backup With Ec Avamar BACKUP AND RECOVERY FOR MICROSOFT-BASED PRIVATE CLOUDS LEVERAGING THE EMC DATA PROTECTION SUITE A Detailed Review ABSTRACT This white paper highlights how IT environments which are increasingly implementing

More information

EMC Integrated Infrastructure for VMware

EMC Integrated Infrastructure for VMware EMC Integrated Infrastructure for VMware Enabled by Celerra Reference Architecture EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com

More information

EMC Data Domain Boost for Oracle Recovery Manager (RMAN)

EMC Data Domain Boost for Oracle Recovery Manager (RMAN) White Paper EMC Data Domain Boost for Oracle Recovery Manager (RMAN) Abstract EMC delivers Database Administrators (DBAs) complete control of Oracle backup, recovery, and offsite disaster recovery with

More information

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products MaxDeploy Ready Hyper- Converged Virtualization Solution With SanDisk Fusion iomemory products MaxDeploy Ready products are configured and tested for support with Maxta software- defined storage and with

More information

EMC Backup and Recovery for Oracle Database 11g Without Hot Backup Mode using DNFS and Automatic Storage Management on Fibre Channel

EMC Backup and Recovery for Oracle Database 11g Without Hot Backup Mode using DNFS and Automatic Storage Management on Fibre Channel EMC Backup and Recovery for Oracle Database 11g Without Hot Backup Mode using DNFS and Automatic Storage Management on Fibre Channel A Detailed Review EMC Information Infrastructure Solutions Abstract

More information

Technology Insight Series

Technology Insight Series Evaluating Storage Technologies for Virtual Server Environments Russ Fellows June, 2010 Technology Insight Series Evaluator Group Copyright 2010 Evaluator Group, Inc. All rights reserved Executive Summary

More information

Luxembourg June 3 2014

Luxembourg June 3 2014 Luxembourg June 3 2014 Said BOUKHIZOU Technical Manager m +33 680 647 866 [email protected] SOFTWARE-DEFINED STORAGE IN ACTION What s new in SANsymphony-V 10 2 Storage Market in Midst of Disruption

More information

Windows Server 2012 2,500-user pooled VDI deployment guide

Windows Server 2012 2,500-user pooled VDI deployment guide Windows Server 2012 2,500-user pooled VDI deployment guide Microsoft Corporation Published: August 2013 Abstract Microsoft Virtual Desktop Infrastructure (VDI) is a centralized desktop delivery solution

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described

More information

SILVER PEAK ACCELERATION WITH EMC VSPEX PRIVATE CLOUD WITH RECOVERPOINT FOR VMWARE VSPHERE

SILVER PEAK ACCELERATION WITH EMC VSPEX PRIVATE CLOUD WITH RECOVERPOINT FOR VMWARE VSPHERE VSPEX IMPLEMENTATION GUIDE SILVER PEAK ACCELERATION WITH EMC VSPEX PRIVATE CLOUD WITH RECOVERPOINT FOR VMWARE VSPHERE Silver Peak Abstract This Implementation Guide describes the deployment of Silver Peak

More information

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server

More information

High Availability with Windows Server 2012 Release Candidate

High Availability with Windows Server 2012 Release Candidate High Availability with Windows Server 2012 Release Candidate Windows Server 2012 Release Candidate (RC) delivers innovative new capabilities that enable you to build dynamic storage and availability solutions

More information

Increasing Storage Performance, Reducing Cost and Simplifying Management for VDI Deployments

Increasing Storage Performance, Reducing Cost and Simplifying Management for VDI Deployments Increasing Storage Performance, Reducing Cost and Simplifying Management for VDI Deployments Table of Contents Introduction.......................................3 Benefits of VDI.....................................4

More information

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution Release 3.0 User Guide P/N 300-999-671 REV 02 Copyright 2007-2013 EMC Corporation. All rights reserved. Published in the USA.

More information

EMC UNISPHERE FOR VNXe: NEXT-GENERATION STORAGE MANAGEMENT A Detailed Review

EMC UNISPHERE FOR VNXe: NEXT-GENERATION STORAGE MANAGEMENT A Detailed Review White Paper EMC UNISPHERE FOR VNXe: NEXT-GENERATION STORAGE MANAGEMENT A Detailed Review Abstract This white paper introduces EMC Unisphere for VNXe, a web-based management environment for creating storage

More information

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can

More information

EMC DATA DOMAIN OPERATING SYSTEM

EMC DATA DOMAIN OPERATING SYSTEM ESSENTIALS HIGH-SPEED, SCALABLE DEDUPLICATION Up to 58.7 TB/hr performance Reduces protection storage requirements by 10 to 30x CPU-centric scalability DATA INVULNERABILITY ARCHITECTURE Inline write/read

More information

CXS-203-1 Citrix XenServer 6.0 Administration

CXS-203-1 Citrix XenServer 6.0 Administration Page1 CXS-203-1 Citrix XenServer 6.0 Administration In the Citrix XenServer 6.0 classroom training course, students are provided with the foundation necessary to effectively install, configure, administer,

More information

605: Design and implement a desktop virtualization solution based on a mock scenario. Hands-on Lab Exercise Guide

605: Design and implement a desktop virtualization solution based on a mock scenario. Hands-on Lab Exercise Guide 605: Design and implement a desktop virtualization solution based on a mock scenario Hands-on Lab Exercise Guide Contents Overview... 2 Scenario... 5 Quick Design Phase...11 Lab Build Out...12 Implementing

More information