EMC VSPEX END-USER COMPUTING


EMC VSPEX END-USER COMPUTING
Citrix XenDesktop 7 and Microsoft Hyper-V Server 2012 for up to 2,000 Virtual Desktops
Enabled by EMC Next-Generation VNX and EMC Backup

Abstract

This document describes the EMC VSPEX end-user computing solution with Citrix XenDesktop, Microsoft Hyper-V Server 2012, and EMC Next-Generation VNX for up to 2,000 virtual desktops.

December 2013

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published December 2013.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX End-User Computing with Citrix XenDesktop 7 and Microsoft Hyper-V Server 2012 for up to 2,000 Virtual Desktops

Part Number H

Contents

Chapter 1: Executive Summary
  Introduction
  Audience
  Purpose of this guide
  Business needs

Chapter 2: Solution Overview
  Solution overview
  Desktop broker
  Virtualization
  Compute
  Network
  Backup
  Storage

Chapter 3: Solution Technology Overview
  Solution technology
  Summary of key components
  Desktop virtualization
    Citrix XenDesktop
    Machine Creation Services
    Citrix Provisioning Services
    Citrix Personal vDisk
    Citrix Profile Management
  Virtualization
    Microsoft Hyper-V Server
    Microsoft System Center Virtual Machine Manager
    Hyper-V High Availability
    EMC Storage Integrator for Windows
  Compute
  Network
  Storage
    EMC VNX Snapshots
    EMC VNX SnapSure
    EMC VNX Virtual Provisioning
    VNX FAST Cache
    VNX FAST VP (optional)
    VNX file shares
    ROBO
  Backup and recovery
    EMC Avamar
  ShareFile
    ShareFile StorageZones
    ShareFile StorageZone architecture
    Using ShareFile StorageZone with VSPEX architectures

Chapter 4: Solution Overview
  Solution overview
  Solution architecture
    Logical architecture
    Key components
    Hardware resources
    Software resources
    Sizing for validated configuration
  Server configuration guidelines
    Microsoft Hyper-V memory virtualization for VSPEX
    Memory configuration guidelines
  Network configuration guidelines
    VLAN
    Enable jumbo frames
    Link aggregation
  Storage configuration guidelines
    Hyper-V storage virtualization for VSPEX
    VSPEX storage building block
    VSPEX end-user computing validated maximums
    Storage layout for 500 virtual desktops
    Storage layout for 1,000 virtual desktops
    Storage layout for 2,000 virtual desktops
  High availability and failover
    Virtualization layer
    Compute layer
    Network layer
    Storage layer
  Validation test profile
  Backup environment configuration guidelines
    Backup characteristics
    Backup layout
  Sizing guidelines
    Reference workload
    Defining the reference workload
    Applying the reference workload
  Implementing the reference architectures
    Resource types
    Backup resources
    Expanding existing VSPEX EUC environments
    Implementation summary
  Quick assessment
    CPU requirements
    Memory requirements
    Storage performance requirements
    Storage capacity requirements
    Determining equivalent reference virtual desktops
    Fine-tuning

Chapter 5: VSPEX Configuration Guidelines
  Overview
  Pre-deployment tasks
    Deployment prerequisites
    Customer configuration data
  Preparing switches, connecting the network, and configuring switches
    Preparing network switches
    Configuring infrastructure network
    Configuring VLANs
    Completing network cabling
  Preparing and configuring the storage array
    Configuring VNX
    Provisioning core data storage
    Provisioning optional storage for user data
    Provisioning optional storage for infrastructure virtual machines
  Installing and configuring Microsoft Hyper-V hosts
    Installing Windows hosts
    Installing Hyper-V and configuring failover clustering
    Configuring Windows host networking
    Installing PowerPath on Windows servers
    Enabling jumbo frames
    Planning virtual machine memory allocations
  Installing and configuring SQL Server database
    Creating a virtual machine for Microsoft SQL Server
    Installing Microsoft Windows on the virtual machine
    Installing SQL Server
    Configuring database for Microsoft SCVMM
  Deploying System Center Virtual Machine Manager server
    Creating a SCVMM host virtual machine
    Installing the SCVMM guest OS
    Installing the SCVMM server
    Installing the SCVMM Management Console
    Installing the SCVMM agent locally on a host
    Adding a Hyper-V cluster into SCVMM
    Adding file share storage to SCVMM (file variant only)
    Creating a virtual machine in SCVMM
    Creating a template virtual machine
    Deploying virtual machines from the template virtual machine
  Installing and configuring XenDesktop controller
    Installing server-side components of XenDesktop
    Configuring a site
    Adding a second controller
    Installing Citrix Studio
    Preparing master virtual machine
    Provisioning virtual desktops
  Installing and configuring Provisioning Services (PVS only)
    Configuring a PVS server farm
    Adding a second PVS server
    Create a PVS store
    Configuring inbound communication
    Configuring a bootstrap file
    Setting up a TFTP server on VNX
    Configuring boot options 66 and 67 on DHCP server
    Preparing the master virtual machine
    Provisioning the virtual desktops
  Setting up EMC Avamar
    GPO additions for EMC Avamar
    Preparing the master image for EMC Avamar
    Defining datasets
    Defining schedules
    Adjusting the maintenance window schedule
    Defining retention policies
    Creating groups and group policy
    EMC Avamar Enterprise Manager: activating clients
  Summary

Chapter 6: Validating the Solution
  Overview
  Post-installation checklist
  Deploying and testing a single virtual desktop
  Verifying the redundancy of the solution components

Appendix A: Bills of Materials
  Bill of materials for 500 virtual desktops
  Bill of materials for 1,000 virtual desktops
  Bill of materials for 2,000 virtual desktops

Appendix B: Customer Configuration Data Sheet
  Customer configuration data sheets

Appendix C: References
  EMC documentation
  Other documentation

Appendix D: About VSPEX
  About VSPEX

Figures

Figure 1. Next-Generation VNX with multicore optimization
Figure 2. Active/active processors increase performance, resiliency, and efficiency
Figure 3. Latest Unisphere Management Suite
Figure 4. Solution components
Figure 5. XenDesktop 7 architecture components
Figure 6. Compute layer flexibility
Figure 7. Example of highly available network design
Figure 8. Storage pool rebalance progress
Figure 9. Thin LUN space utilization
Figure 10. Examining storage pool space utilization
Figure 11. Defining storage pool utilization thresholds
Figure 12. Defining automated notifications for block
Figure 13. ShareFile high-level architecture
Figure 14. Logical architecture: VSPEX end-user computing for Citrix XenDesktop with ShareFile StorageZone
Figure 15. Logical architecture for SMB variant
Figure 16. Logical architecture for FC variant
Figure 17. Hypervisor memory consumption
Figure 18. Required networks
Figure 19. Hyper-V virtual disk types
Figure 20. Core storage layout with PVS provisioning for 500 virtual desktops
Figure 21. Core storage layout with MCS provisioning for 500 virtual desktops
Figure 22. Optional storage layout for 500 virtual desktops
Figure 23. Core storage layout with PVS provisioning for 1,000 virtual desktops
Figure 24. Core storage layout with MCS provisioning for 1,000 virtual desktops
Figure 25. Optional storage layout for 1,000 virtual desktops
Figure 26. Core storage layout with PVS provisioning for 2,000 virtual desktops
Figure 27. Core storage layout with MCS provisioning for 2,000 virtual desktops
Figure 28. Optional storage layout for 2,000 virtual desktops
Figure 29. High availability at the virtualization layer
Figure 30. Redundant power supplies
Figure 31. Network layer high availability
Figure 32. VNX series high availability
Figure 33. Sample network architecture, SMB variant
Figure 34. Sample network architecture, FC variant
Figure 35. Set nthread parameter
Figure 36. Storage System Properties dialog box
Figure 37. Create FAST Cache dialog box
Figure 38. Advanced tab in the Create Storage Pool dialog box
Figure 39. Advanced tab in the Storage Pool Properties dialog box
Figure 40. Storage Pool Properties window
Figure 41. Manage Auto-Tiering dialog box
Figure 42. LUN Properties window
Figure 43. Configure Bootstrap dialog box
Figure 44. Configuring Windows Folder Redirection
Figure 45. Create a Windows network drive mapping for user files
Figure 46. Configure drive mapping settings
Figure 47. Configure drive mapping common settings
Figure 48. Create a Windows network drive mapping for user profile data
Figure 49. Avamar tools menu
Figure 50. Avamar Manage All Datasets dialog box
Figure 51. Avamar New Dataset dialog box
Figure 52. Configure Avamar Dataset settings
Figure 53. User Profile data dataset
Figure 54. User Profile data dataset Exclusion settings
Figure 55. User Profile data dataset Options settings
Figure 56. User Profile data dataset Advanced Options settings
Figure 57. Avamar default Backup/Maintenance Windows schedule
Figure 58. Avamar modified Backup/Maintenance Windows schedule
Figure 59. Create new Avamar backup group
Figure 60. New backup group settings
Figure 61. Select backup group dataset
Figure 62. Select backup group schedule
Figure 63. Select backup group retention policy
Figure 64. Avamar Enterprise Manager
Figure 65. Avamar Client Manager
Figure 66. Avamar Activate Client dialog box
Figure 67. Avamar Activate Client menu
Figure 68. Avamar Directory Service configuration
Figure 69. Avamar Client Manager post configuration
Figure 70. Avamar Client Manager virtual desktop clients
Figure 71. Avamar Client Manager select virtual desktop clients
Figure 72. Select Avamar groups
Figure 73. Activate Avamar clients
Figure 74. Commit Avamar client activation
Figure 75. Avamar client activation informational prompt one
Figure 76. Avamar client activation informational prompt two
Figure 77. Avamar Client Manager activated clients

Tables

Table 1. Thresholds and settings under VNX OE Block Release 33
Table 2. Minimum hardware resources to support ShareFile StorageZone with Storage Center
Table 3. Recommended EMC VNX storage needed for ShareFile StorageZone CIFS share
Table 4. Solution hardware
Table 5. Solution software
Table 6. Configurations that support this solution
Table 7. Server hardware
Table 8. Hardware resources for network
Table 9. Storage hardware
Table 10. Number of disks required for various numbers of virtual desktops
Table 11. Validated environment profile
Table 12. Backup profile characteristics
Table 13. Virtual desktop characteristics
Table 14. Blank worksheet row
Table 15. Reference virtual desktop resources
Table 16. Example worksheet row
Table 17. Example applications
Table 18. Server resource component totals
Table 19. Blank customer worksheet
Table 20. Deployment process overview
Table 21. Tasks for pre-deployment
Table 22. Deployment prerequisites checklist
Table 23. Tasks for switch and network configuration
Table 24. Tasks for storage configuration
Table 25. Tasks for server installation
Table 26. Tasks for SQL Server database setup
Table 27. Tasks for SCVMM configuration
Table 28. Tasks for XenDesktop controller setup
Table 29. Tasks for XenDesktop controller setup
Table 30. Tasks for Avamar integration
Table 31. Tasks for testing the installation
Table 32. List of components used in the VSPEX solution for 500 virtual desktops
Table 33. List of components used in the VSPEX solution for 1,000 virtual desktops
Table 34. List of components used in the VSPEX solution for 2,000 virtual desktops
Table 35. Common server information
Table 36. Hyper-V server information
Table 37. Array information
Table 38. Network infrastructure information
Table 39. VLAN information
Table 40. Service accounts

Chapter 1: Executive Summary

This chapter presents the following topics:

- Introduction
- Audience
- Purpose of this guide
- Business needs

Introduction

EMC VSPEX validated and modular architectures are built with proven technologies to create complete virtualization solutions that enable you to make an informed decision at the hypervisor, compute, and networking layers. VSPEX eliminates server virtualization planning and configuration burdens. Whether you are embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choices, greater efficiency, and lower risk.

This document is a comprehensive guide to the technical aspects of this solution. Server capacity is stated in generic terms for the required minimums of CPU, memory, and network interfaces; customers are free to select any server and networking hardware that meets or exceeds those minimums.

Audience

This guide assumes you have the necessary training and background to install and configure an end-user computing solution based on Citrix XenDesktop with Microsoft Hyper-V as the hypervisor, EMC VNX series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and you should be familiar with these documents. You should also be familiar with the infrastructure and database security policies of the customer installation.

Individuals focused on selling and sizing a VSPEX end-user computing solution for Citrix XenDesktop should pay particular attention to the first four chapters of this document. Implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Purpose of this guide

This guide presents an initial introduction to the VSPEX end-user computing architecture, an explanation of how to modify the architecture for specific engagements, and instructions for effectively deploying the system.

The VSPEX end-user computing architecture provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This solution runs on a Microsoft Hyper-V virtualization layer backed by the highly available VNX storage family and the Citrix XenDesktop desktop broker. The compute and network components, while vendor-definable, are designed to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual machine environment.

The 500, 1,000, and 2,000 virtual desktop environments discussed are based on a defined desktop workload. While not every virtual desktop has the same requirements, this document provides adjustment methods and guidance for deploying a cost-effective system.

An end-user computing or virtual desktop architecture is a complex system offering. This document facilitates setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. Validation tests are provided to ensure that your system is up and running properly after the last component has been installed. Follow the guidelines in this document to ensure an efficient and painless desktop deployment.

Business needs

The use of business applications is becoming more common in the consolidated compute, network, and storage environment. Using Citrix for EMC VSPEX end-user computing reduces the complexity of configuring the components of a traditional deployment model. It simplifies integration management while maintaining the application design and implementation options. Citrix unifies administration while enabling the control and monitoring of process separation.

The VSPEX end-user computing solution for Citrix addresses the following business needs:

- Provides an end-to-end virtualization solution that uses the capabilities of the unified infrastructure components
- Provides a solution for efficiently virtualizing 500, 1,000, or 2,000 virtual desktops for varied customer use cases
- Provides a reliable, flexible, and scalable reference design

Chapter 2: Solution Overview

This chapter presents the following topics:

- Solution overview
- Desktop broker
- Virtualization
- Compute
- Network
- Backup
- Storage

Solution overview

The EMC VSPEX end-user computing solution for Citrix XenDesktop on Microsoft Hyper-V Server 2012 provides a complete system architecture capable of supporting and protecting up to 2,000 virtual desktops with a redundant server and network topology, highly available storage, and trusted EMC backup solutions. The core components of this solution are the desktop broker, virtualization, storage, network, and compute.

Desktop broker

XenDesktop is the virtual desktop solution from Citrix that allows virtual desktops to run on the Microsoft Hyper-V virtualization environment. It centralizes desktop management and provides increased control for IT organizations. XenDesktop allows end users to connect to their desktops from multiple devices across a network connection.

Virtualization

Microsoft Hyper-V is a virtualization platform that provides flexibility and cost savings by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core Microsoft virtualization components are the Microsoft Hyper-V hypervisor and Microsoft System Center Virtual Machine Manager for system management.

The Microsoft Hyper-V hypervisor runs on a dedicated server and allows multiple operating systems to execute on the system simultaneously as virtual machines. Microsoft failover clustering allows multiple Hyper-V servers to operate in a clustered configuration. The Hyper-V cluster is managed as a larger resource pool through Microsoft System Center Virtual Machine Manager, allowing dynamic allocation of CPU, memory, and storage across the cluster.

High-availability features of Microsoft Hyper-V Server 2012, such as Live Migration and Storage Migration, enable seamless migration of virtual machines and stored files from one Hyper-V server to another with minimal or no performance impact.

Compute

VSPEX allows flexibility in the design and implementation of the vendor's choice of server components. The infrastructure must conform to the following attributes:

- Sufficient CPU cores and RAM to support the required number and types of virtual machines
- Sufficient network connections to enable redundant connectivity to the system switches
- Excess capacity to support failover after a server failure in the environment

Network

VSPEX allows flexibility in the design and implementation of the vendor's choice of network components. The infrastructure must conform to the following attributes:

- Redundant network links for the hosts, switches, and storage
- Support for link aggregation
- Traffic isolation based on industry-accepted best practices

Backup

EMC Avamar delivers the protection and efficiency needed to accelerate the deployment of a VSPEX end-user computing solution. Avamar enables administrators to centrally back up and manage the policies and end-user computing infrastructure components, while allowing end users to efficiently recover their own files from a simple and intuitive web-based interface. Avamar moves only new, unique sub-file data segments, resulting in fast daily full backups. This yields up to a 90 percent reduction in backup times, can reduce the required daily network bandwidth by up to 99 percent, and can reduce the required backup storage by 10 to 30 times.

Storage

The EMC Next-Generation VNX storage series provides both file and block access with a broad feature set, making it an ideal choice for any end-user computing implementation. VNX storage includes the following components, sized for the stated reference architecture workload:

- Host adapter ports (for block): Provide host connectivity through fabric to the array
- Data Movers (for file): Front-end appliances that provide file services to hosts (optional if providing CIFS/SMB or NFS services)
- Storage processors (SPs): The compute components of the storage array, used for all aspects of data moving into, out of, and between arrays
- Disk drives: Disk spindles and solid-state drives (SSDs) that contain the host/application data, and their enclosures

Note: The term Data Mover refers to a VNX hardware component that has a CPU, memory, and I/O ports. It enables the CIFS (SMB) and NFS protocols on the VNX.

The desktop solutions described in this document are based on the EMC VNX5400 and EMC VNX5600 storage arrays. The VNX5400 can support a maximum of 250 drives, and the VNX5600 can host up to 500 drives.

The EMC VNX series supports a wide range of business-class features that are ideal for the end-user computing environment, including:

- EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP)
- EMC FAST Cache
- File-level data deduplication and compression
- Block deduplication
- Thin provisioning
- Replication
- Snapshots and checkpoints
- File-level retention
- Quota management

Features and enhancements

The EMC VNX flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments.

The next-generation VNX series includes many features and enhancements designed and built upon the first generation's success, including:

- More capacity with multicore optimization through Multicore Cache, Multicore RAID, and Multicore FAST Cache (MCx)
- Greater efficiency with a flash-optimized hybrid array
- Better protection by increasing application availability with active/active storage processors
- Easier administration and deployment by increasing productivity with the new Unisphere Management Suite

VSPEX is built with the next-generation VNX to deliver even greater efficiency, performance, and scale than ever before.

Flash-optimized hybrid array

VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. A flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high-performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and boosts the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance.

New data tends to be accessed more frequently than older data, so it is stored on flash drives to provide the best performance. As data ages and becomes less active, FAST VP automatically tiers the data from high-performance drives to high-capacity drives, based on customer-defined policies. This functionality has been enhanced to provide four times better efficiency with new FAST VP solid-state disks (SSDs) based on enterprise multi-level cell (eMLC) technology, lowering the cost per gigabyte. FAST Cache dynamically absorbs unpredicted spikes in system workloads. All VSPEX use cases benefit from the increased efficiency.

VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With VNX, customers can realize an even greater return on their investment. VNX also provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.

VNX Intel MCx code path optimization

The advent of flash technology has been a catalyst in significantly changing the requirements of midrange storage systems. EMC redesigned the midrange storage platform to efficiently optimize multicore CPUs and provide the highest-performing storage system at the lowest cost in the market. MCx distributes all VNX data services across all cores, as shown in Figure 1. The VNX series with MCx dramatically improves the file performance for transactional applications, such as databases and virtual machines over network-attached storage (NAS).

Figure 1. Next-Generation VNX with multicore optimization

Multicore Cache

The cache is the most valuable asset in the storage subsystem; its efficient use is fundamental to the overall efficiency of the platform in handling variable and changing workloads. The cache engine has been modularized to take advantage of all the cores available in the system.

Multicore RAID

Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage hard disk drives (HDDs) and SSDs. Greatly increased performance improvements in VNX come from the modularization of the back-end data management processing, which enables MCx to scale seamlessly across all processors.

VNX performance

VNX storage, enabled with the MCx architecture, is optimized for FLASH 1st and provides unprecedented overall performance. It optimizes the system for transaction performance (cost per IOPS) and bandwidth performance (cost per GB/s) with low latency, and provides optimal capacity efficiency (cost per GB). VNX provides the following performance improvements:

- Up to four times more file transactions when compared with dual-controller arrays
- Increased file performance for transactional applications (for example, Microsoft Exchange on VMware over NFS) by up to three times, with a 60 percent better response time
- Up to four times more Oracle and Microsoft SQL Server OLTP transactions
- Up to six times more virtual machines

Active/active array storage processors

The new VNX architecture provides active/active array storage processors, as shown in Figure 2, which eliminate application timeouts during path failover because both paths actively serve I/O. Load balancing is also improved, and applications can achieve up to two times better performance. Active/active for block is ideal for applications that require the highest levels of availability and performance but do not require tiering or efficiency services such as compression, deduplication, or snapshots.

With this VNX release, VSPEX customers can use virtual Data Movers (VDMs) and VNX Replicator to perform automated, high-speed file-system migrations between systems. This process migrates all checkpoints and settings automatically and enables clients to continue operating during the migration.

Figure 2. Active/active processors increase performance, resiliency, and efficiency

Unisphere management

The latest Unisphere Management Suite extends Unisphere's easy-to-use interface to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 3, the suite also includes Unisphere Remote for centrally managing up to thousands of VNX and VNXe systems, with added support for XtremSW Cache.

Figure 3. Latest Unisphere Management Suite

Chapter 3: Solution Technology Overview

This chapter presents the following topics:

- Solution technology
- Summary of key components
- Desktop virtualization
- Virtualization
- Compute
- Network
- Storage
- Backup and recovery
- ShareFile

Solution technology

This VSPEX solution uses EMC VNX5400 (for up to 1,000 virtual desktops) or VNX5600 (for up to 2,000 virtual desktops) storage arrays and Microsoft Hyper-V Server 2012 to provide the storage and compute resources for a Citrix XenDesktop 7 environment of Windows 7 virtual desktops, provisioned by Provisioning Services (PVS) or Machine Creation Services (MCS). Figure 4 shows the components of the solution.

Figure 4. Solution components

Planning and designing the storage infrastructure for Citrix XenDesktop is a critical step, because the shared storage must be able to absorb the large bursts of input/output (I/O) that occur in some use cases, such as when many desktops boot at the beginning of a workday or when required patches are applied. These large I/O bursts can lead to periods of erratic and unpredictable virtual desktop performance. If planning does not take these use cases into account, users can quickly become frustrated by unpredictable performance.

To provide predictable performance for an end-user computing environment, the storage must be able to handle peak I/O loads from clients while still providing fast response times. Typically, the design for this type of workload involves deploying several disks to handle brief periods of extreme I/O pressure, which can be expensive to implement. This solution uses EMC VNX FAST Cache to reduce the number of disks required.

EMC's next-generation backup enables protection of user data and end-user recoverability by using EMC Avamar and its desktop client within the desktop image.
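
Returning to the boot-storm point above, a rough calculation shows why peak bursts, rather than steady-state load, drive disk sizing. The per-desktop IOPS figures below are hypothetical placeholders, not validated VSPEX measurements; a minimal sketch in PowerShell:

    # Hypothetical per-desktop I/O rates; substitute measured values for a real design.
    $desktops        = 2000
    $steadyIopsPerVm = 10    # assumed steady-state IOPS per desktop
    $bootIopsPerVm   = 60    # assumed boot-storm IOPS per desktop

    $steadyTotal = $desktops * $steadyIopsPerVm    # 20,000 IOPS
    $bootTotal   = $desktops * $bootIopsPerVm      # 120,000 IOPS

    "Steady state: {0:N0} IOPS; boot storm: {1:N0} IOPS ({2}x)" -f $steadyTotal, $bootTotal, ($bootTotal / $steadyTotal)

Sizing spinning disks for the peak alone would require several times more spindles than the steady state needs; FAST Cache instead absorbs the burst on flash.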

Summary of key components

This section describes the key components of this solution.

Desktop virtualization: The desktop virtualization broker manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software enables on-demand creation of desktop images, allows maintenance of the image without affecting user productivity, and prevents the environment from growing in an unconstrained way.

Virtualization: The virtualization layer decouples physical resources from the applications that use them. This allows applications to use resources that are not directly tied to hardware, enabling many key features for end-user computing.

Compute: The compute layer provides memory and processing resources for the virtualization layer software and the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resources required and allows the customer to choose any compute hardware that meets the requirements.

Network: The network layer connects the users of the environment to the resources they need and connects the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture. It allows the customer to implement the requirements using any network hardware that meets these requirements.

Storage: The storage layer is a critical resource for the implementation of the end-user computing environment. Because of the way desktops are used, the storage layer must be able to absorb large bursts of transient activity without unduly affecting the user experience. This solution uses EMC VNX FAST Cache to handle this workload efficiently.

Backup and recovery: The optional backup and recovery component of the solution provides data protection in the event that the data in the primary system is deleted, damaged, or otherwise becomes unusable.

ShareFile: ShareFile is a cloud-based file-sharing and storage service that enables users to securely share documents; ShareFile StorageZones allow that data to be kept on onsite storage systems.

Solution architecture provides details about the components that make up the reference architecture.

Desktop virtualization

Desktop virtualization encapsulates and delivers the user desktop to a remote client device, which can be a thin client, zero client, smartphone, or tablet. It allows subscribers in different locations to access virtual desktops hosted on centralized computing resources at remote data centers. In this solution, Citrix XenDesktop is used to provision, manage, broker, and monitor the desktop virtualization environment.

Citrix XenDesktop 7

Under the XenDesktop 7 architecture, management and delivery components are shared between XenDesktop and XenApp to give administrators a unified management experience. Figure 5 shows the XenDesktop 7 architecture components.

Figure 5. XenDesktop 7 architecture components

The XenDesktop 7 architecture components are described as follows:

- Receiver: Installed on user devices, Citrix Receiver provides users with quick, secure, self-service access to documents, applications, and desktops from any of the user's devices, including smartphones, tablets, and PCs. Receiver provides on-demand access to Windows, web, and Software-as-a-Service (SaaS) applications.

- StoreFront: StoreFront authenticates users to sites hosting resources and manages stores of desktops and applications that users access.

- Studio: Studio is the management console that enables you to configure and manage the deployment, eliminating the need for separate management consoles for managing delivery of applications and desktops. Studio provides various wizards to guide you through the process of setting up your environment, creating your workloads to host applications and desktops, and assigning applications and desktops to users.

- Delivery Controller: Installed on servers in the data center, the Delivery Controller consists of services that communicate with the hypervisor to distribute applications and desktops, authenticate and manage user access, and broker connections between users and their virtual desktops and applications. The controller manages the state of the desktops, starting and stopping them based on demand and administrative configuration. In some editions, the controller allows you to install Profile Management to manage user personalization settings in virtualized or physical Windows environments. Each site has one or more Delivery Controllers.

- Virtual Delivery Agent (VDA): Installed on server or workstation operating systems, the VDA enables connections to desktops and applications. For Remote PC access, install the VDA on the office PC.

- Server OS machines: Virtual machines or physical machines based on Windows Server operating systems, used for delivering applications or hosted shared desktops (HSDs) to users.

- Desktop OS machines: Virtual machines or physical machines based on Windows desktop operating systems, used for delivering personalized desktops to users, or applications from desktop operating systems.

- Remote PC access: User devices that are included on a whitelist enable users to access resources on their office PCs remotely, from any device running Citrix Receiver.

Machine Creation Services

Machine Creation Services (MCS) is a provisioning mechanism that is integrated with the XenDesktop management interface, Citrix Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management. MCS allows the management of several types of machines within a catalog in Citrix Studio. Desktop customization is persistent for machines that use a Personal vDisk, while non-Personal vDisk machines are appropriate if desktop changes are to be discarded when the user logs off.

Desktops provisioned with MCS share a common base image within a catalog. Because of this, the base image is accessed often enough to benefit from EMC VNX FAST Cache, which promotes frequently accessed data to flash drives to provide optimal I/O response time with fewer physical disks.

Citrix Provisioning Services

Citrix Provisioning Services (PVS) takes a different approach from traditional desktop imaging solutions by fundamentally changing the relationship between the hardware and the software that runs on it. By streaming a single shared disk image (vDisk) instead of copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage. As the number of machines continues to grow, PVS provides the efficiency of centralized management with the benefits of distributed processing.

Because machines stream the disk data dynamically and in real time from a single shared image, consistency of the machine image is ensured. In addition, the configuration, applications, and even the OS of large pools of machines can change completely during a reboot.

In this solution, PVS provisions 500, 1,000, or 2,000 virtual desktops running Windows 7 or 8. The desktops are deployed from a single vDisk image.

Citrix Personal vDisk

The Citrix Personal vDisk (PvDisk or PvD) feature was introduced in Citrix XenDesktop 5.6. With Personal vDisk, users can preserve customization settings and user-installed applications in a pooled desktop. This capability is accomplished by redirecting the changes from the user's pooled virtual machine to a separate disk called the Personal vDisk. During runtime, the content of the Personal vDisk is blended with the content from the base virtual machine to provide a unified experience to the end user. The Personal vDisk data is preserved during restart and refresh operations.

Citrix Profile Management

Citrix Profile Management preserves user profiles and dynamically synchronizes them with a remote profile repository. Citrix Profile Management ensures that personal settings are applied to desktops and applications regardless of the user's login location or client device. The combination of Citrix Profile Management and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization. Citrix Profile Management dynamically downloads a user's remote profile when the user logs in to Citrix XenDesktop, and downloads user profile information only when the user needs it.

Virtualization

The virtualization layer is a key component of any end-user computing solution. It allows the application resource requirements to be decoupled from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the physical capability of the system to change without affecting the hosted applications.

Microsoft Hyper-V Server 2012

Microsoft Hyper-V Server 2012 is used to build the virtualization layer for this solution. Microsoft Hyper-V transforms a computer's physical resources by virtualizing the CPU, memory, storage, and network. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications just like physical computers. High-availability features of Microsoft Hyper-V, such as Live Migration and Storage Migration, enable seamless migration of virtual machines and stored files from one Hyper-V server to another with minimal or no performance impact.
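
For reference, the Hyper-V role in Windows Server 2012 exposes this virtualization layer through PowerShell. The sketch below creates and starts a single virtual machine; the VM name, paths, and virtual switch name are illustrative examples, not part of the validated configuration:

    # Create a virtual machine with a new VHDX, attach it to a virtual switch, and start it.
    New-VM -Name "Win7-Desktop01" `
           -MemoryStartupBytes 2GB `
           -NewVHDPath "C:\VMs\Win7-Desktop01.vhdx" `
           -NewVHDSizeBytes 40GB `
           -SwitchName "DesktopSwitch"

    Set-VMProcessor -VMName "Win7-Desktop01" -Count 1   # one vCPU for a standard desktop
    Start-VM -Name "Win7-Desktop01"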

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager is a centralized management platform for the Microsoft Hyper-V infrastructure. It provides administrators with a single interface, accessible from multiple devices, for all aspects of monitoring, managing, and maintaining the virtual infrastructure.

Hyper-V High Availability

The Microsoft Hyper-V Cluster High Availability feature allows the virtualization layer to automatically restart virtual machines in various failure conditions. If the physical hardware fails, the affected virtual machines can be restarted automatically on other servers in the cluster.

Note: For Microsoft Hyper-V Cluster High Availability to restart virtual machines on different hardware, those servers must have resources available. The Compute section provides specific recommendations to enable this functionality.

Microsoft Hyper-V Cluster allows you to configure policies that determine which machines are restarted automatically and under what conditions these operations are performed.

EMC Storage Integrator for Windows

EMC Storage Integrator (ESI) 3.0 for Windows is a management interface that provides the ability to view and provision block and file storage for Windows environments. ESI simplifies the steps involved in creating and provisioning storage to Hyper-V servers as a local disk or a mapped share. ESI also supports storage discovery and provisioning through PowerShell. The ESI for Windows product guides, available on EMC Online Support, provide more information.

Compute

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and other factors. For these reasons, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a given number of servers with a specific set of requirements, VSPEX documents the requirements for the number of processor cores and the amount of RAM. The same VSPEX solution can be implemented with 2 servers or with 20.

For example, assume that the compute layer requirements for a given implementation are 25 processor cores and 200 GB of RAM. One customer wants to use white-box servers containing 16 processor cores and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM.

In this example, the first customer needs four servers while the second customer needs two, as shown in Figure 6 (see the sizing sketch later in this section).

Figure 6. Compute layer flexibility

Note: To enable high availability at the compute layer, each customer needs one additional server with sufficient capacity to provide a failover platform in the event of a hardware outage.

In the compute layer, observe the following best practices:

- Use a number of identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that might require similar instruction sets on the underlying physical hardware. Implementing VSPEX on identical server units minimizes compatibility problems in this area.
- If you are implementing hypervisor-layer high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
- Implement the high-availability features available in the virtualization layer to ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This allows you to implement minimal-downtime upgrades and to tolerate single-unit failures.
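
The sizing sketch referenced above generalizes the four-versus-two arithmetic: the server count is driven by whichever resource, cores or RAM, runs out first, plus one host for N+1 failover per the note above. The function name is ours, for illustration only:

    function Get-RequiredServerCount {
        param(
            [int]$RequiredCores, [int]$RequiredRamGB,
            [int]$CoresPerServer, [int]$RamGBPerServer
        )
        # Servers needed is governed by the scarcer resource...
        $byCores = [math]::Ceiling($RequiredCores / $CoresPerServer)
        $byRam   = [math]::Ceiling($RequiredRamGB / $RamGBPerServer)
        # ...plus one spare host for failover, per the note above.
        [math]::Max($byCores, $byRam) + 1
    }

    # The two customers from the example (25 cores, 200 GB of RAM):
    Get-RequiredServerCount 25 200 16 64    # returns 5 (4 servers + 1 spare)
    Get-RequiredServerCount 25 200 20 144   # returns 3 (2 servers + 1 spare)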

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX is flexible enough to meet your specific needs. The key constraint is the provision of sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. It is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. An example of this kind of highly available network topology is depicted in Figure 7.

Note: The example is for IP-based networks, but the same underlying principles regarding multiple connections and elimination of single points of failure also apply to Fibre Channel-based networks.

Figure 7. Example of highly available network design
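
On the Hyper-V hosts, the redundant links shown in Figure 7 are typically combined with the native NIC teaming in Windows Server 2012. The team and adapter names below are examples, and the choice of LACP versus switch-independent teaming depends on the switch configuration; a minimal sketch:

    # Team two physical adapters into one logical link for redundancy and bandwidth.
    New-NetLbfoTeam -Name "HostTeam01" `
                    -TeamMembers "NIC1","NIC2" `
                    -TeamingMode Lacp `
                    -LoadBalancingAlgorithm TransportPorts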

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types, improving throughput, manageability, application separation, high availability, and security.

EMC unified storage platforms provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, the Link Aggregation Control Protocol (LACP) is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.

Storage

The storage layer is a key component of any cloud infrastructure solution, providing storage efficiency, management flexibility, and reduced total cost of ownership. This VSPEX solution uses the EMC VNX series to provide virtualization at the storage layer.

EMC VNX Snapshots

VNX Snapshots is a software feature that creates point-in-time data copies. VNX Snapshots can be used for data backups, software development and testing, repurposing, data validation, and local rapid restores. VNX Snapshots improves on the existing EMC VNX SnapView snapshot functionality by integrating with storage pools.

Note: LUNs created on physical RAID groups, also called RAID LUNs, support only SnapView snapshots. This limitation exists because VNX Snapshots requires pool space as part of its technology.

VNX Snapshots supports 256 writeable snapshots per pool LUN. It supports branching, also called "snap of a snap," as long as the total number of snapshots for any primary LUN is less than 256, which is a hard limit. VNX Snapshots uses redirect-on-write (ROW) technology, which redirects new writes destined for the primary LUN to a new location in the storage pool. This implementation differs from the copy-on-first-write (CoFW) technology used in SnapView, which holds writes to the primary LUN until the original data has been copied to the reserved LUN pool to preserve the snapshot.

VNX Snapshots also supports consistency groups (CGs). Several pool LUNs can be combined into a CG and snapped concurrently. When a snapshot of a CG is initiated, all writes to the member LUNs are held until the snapshots have been created. Typically, CGs are used for LUNs that belong to the same application.

EMC VNX SnapSure

EMC VNX SnapSure is a VNX file software feature that enables you to create and manage checkpoints, which are point-in-time, logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks; when a block within the PFS is modified, a copy containing the block's original contents is saved to a separate volume called the SavVol.

Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks from the PFS that are stored in the SavVol and the unchanged blocks remaining in the PFS according to a bitmap and block-map data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint. A checkpoint reflects the state of the PFS at the time the checkpoint was created. SnapSure supports these types of checkpoints:

- Read-only checkpoints: Read-only file systems created from a PFS
- Writeable checkpoints: Read/write file systems created from a read-only checkpoint

SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.

Note: Each writeable checkpoint is associated with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint.

Using VNX SnapSure provides more detailed information.

EMC VNX Virtual Provisioning

EMC VNX Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide predictable high performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning.

Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and the User Capacity Threshold setting.

Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNX systems can rebalance allocated data elements across all member drives to use new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. You can monitor the progress of a rebalance operation from the General tab of the Pool Properties window in Unisphere, as shown in Figure 8.

Figure 8. Storage pool rebalance progress

LUN expansion

Use pool LUN expansion to increase the capacity of existing LUNs as business needs grow. The VNX family can expand a pool LUN without disrupting user access; you can expand a pool LUN with a few simple clicks, and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data-protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded.

LUN shrink

Use LUN shrink to reduce the capacity of existing thin LUNs. VNX can shrink a pool LUN; this capability is available only for LUNs served to Windows Server 2008 and later. The shrinking process involves these steps:

1. Shrink the file system from Windows Disk Management.
2. Shrink the pool LUN using a command window and the DISKRAID utility. The DISKRAID utility is available through the VDS Provider, which is part of the EMC Solutions Enabler package.

The new LUN size appears as soon as the shrink process is complete. A background task reclaims the deleted or shrunk space and returns it to the storage pool. Once the task is complete, any other LUN in that pool can use the reclaimed space.

For more detailed information on LUN expansion and shrink, refer to the EMC VNX Virtual Provisioning Applied Technology White Paper.
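
Step 1, the file-system shrink, can also be scripted with the Storage module in Windows Server 2012 instead of the Disk Management GUI; the array-side DISKRAID step is still required afterwards. The drive letter below is an example; a minimal sketch:

    # Determine how small the partition can be made, then shrink it to that size.
    $sizes = Get-PartitionSupportedSize -DriveLetter E
    Get-Partition -DriveLetter E | Resize-Partition -Size $sizes.SizeMin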

Alerting the user through the Capacity Threshold setting

You must configure proactive alerts when using a file system or storage pools based on thin pools. Monitor these resources so that storage is available for provisioning when needed and capacity shortages are avoided. Figure 9 demonstrates why provisioning with thin pools requires monitoring.

Figure 9. Thin LUN space utilization

Monitor the following values for thin pool utilization:

- Total capacity: The total physical capacity available to all LUNs in the pool
- Total allocation: The total physical capacity currently assigned to all pool LUNs
- Subscribed capacity: The total host-reported capacity supported by the pool
- Over-subscribed capacity: The amount of user capacity configured for LUNs that exceeds the physical capacity in the pool

Total allocation must never exceed the total capacity; if it nears that point, add storage to the pools proactively before reaching a hard limit.
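
These values reduce to simple ratios that can be checked from a script. The figures below are hypothetical, and a production monitor would read them from the array (for example, through Unisphere alerts or ESI PowerShell) rather than hard-coding them:

    # Hypothetical pool figures, in GB.
    $totalCapacity   = 10000   # physical capacity available to all LUNs in the pool
    $totalAllocation = 7200    # physical capacity currently assigned to pool LUNs
    $subscribed      = 15000   # host-reported capacity supported by the pool

    $percentFull       = $totalAllocation / $totalCapacity * 100       # 72 (Percent Full)
    $percentSubscribed = $subscribed / $totalCapacity * 100            # 150 (Percent Subscribed)
    $oversubscribedBy  = [math]::Max(0, $subscribed - $totalCapacity)  # 5000 GB oversubscribed

    if ($percentFull -ge 70) {   # 70% matches the default user-settable threshold in Table 1
        Write-Warning ("Pool at {0}% full; add capacity before the built-in 85% alert." -f $percentFull)
    }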

Figure 10 shows the Storage Pool Properties dialog box in Unisphere, which displays parameters such as Free, Percent Full, Total Allocation, and Total Subscription of physical capacity, as well as Percent Subscribed and Oversubscribed By of virtual capacity.

Figure 10. Examining storage pool space utilization

When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation, monitor pool utilization so you are alerted when thresholds are reached; set the Percentage Full Threshold to allow enough buffer to correct the situation before an outage occurs. Edit this setting by clicking Advanced in the Storage Pool Properties dialog box, as shown in Figure 11. This alert is active only if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool contains only thick LUNs, the alert is not active because there is no risk of running out of space due to oversubscription. You can also specify the value for Percent Full Threshold, which equals Total Allocation divided by Total Capacity, when a pool is created.

Figure 11. Defining storage pool utilization thresholds

View alerts by clicking Alert in Unisphere. Figure 12 shows the Unisphere Event Monitor wizard, where you can also select the option of receiving alerts through email, a paging service, or an SNMP trap.

Figure 12. Defining automated notifications for block

Table 1 lists the thresholds and their settings under VNX OE for Block Release 33.

Table 1. Thresholds and settings under VNX OE Block Release 33

Threshold type   Threshold range   Threshold default   Alert severity   Side effect
User settable    1%-84%            70%                 Warning          None
Built-in         N/A               85%                 Critical         Clears user-settable alert

Allowing total allocation to exceed 90 percent of total capacity puts you at risk of running out of space and affecting all applications that use thin LUNs in the pool.

VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables the use of flash drives as an expanded cache layer for the array. FAST Cache is an array-wide, non-disruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments. Subsequent reads and writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives, which dramatically improves the response times for the active data and reduces data hot spots that can occur within the LUN. (A toy model of this promotion behavior appears at the end of this section.)

VNX FAST VP (optional)

VNX FAST VP, a part of the VNX FAST Suite, enables you to automatically tier data across multiple types of drives to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is done as part of a regularly scheduled maintenance operation.

VNX file shares

In many environments, it is important to have a common location to store files accessed by many different individuals. This is implemented as CIFS or NFS file shares from a file server. VNX storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency-improvement features. Configuring and Managing CIFS on VNX provides more information.

ROBO

Organizations with remote offices and branch offices (ROBO) often prefer to locate data and applications close to the users in order to provide better performance and lower latency. In these environments, IT departments must balance the benefits of local support with the need to maintain central control. Local systems and storage should be easy for local personnel to administer, but should also support remote management and flexible aggregation tools that minimize the demands on those local resources.

With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices. Customers can also use Unisphere Remote to consolidate the monitoring, system alerts, and reporting of hundreds of locations while maintaining simplicity of operation and unified storage functionality for local managers.
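As promised above, the FAST Cache behavior can be illustrated with a toy Python model in which 64 KB chunks are promoted to flash once they become hot. The promotion threshold of three accesses is an assumption made for the example, not a statement of the array's internal policy.

```python
# Toy model of the FAST Cache behavior described above: frequently
# accessed 64 KB chunks are promoted to flash, after which reads and
# writes to those chunks are serviced from cache.

from collections import defaultdict

CHUNK = 64 * 1024        # FAST Cache tracks data in 64 KB increments
PROMOTION_THRESHOLD = 3  # illustrative assumption, not the array's policy

access_counts = defaultdict(int)
promoted = set()

def access(byte_offset):
    chunk = byte_offset // CHUNK
    if chunk in promoted:
        return "served from FAST Cache"
    access_counts[chunk] += 1
    if access_counts[chunk] >= PROMOTION_THRESHOLD:
        promoted.add(chunk)  # hot chunk copied to flash
    return "served from disk"

for _ in range(4):
    print(access(1_000_000))  # first three hits go to disk; the fourth hits flash
```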

Backup and recovery

Backup and recovery provides data protection by backing up data files or volumes according to defined schedules, and by restoring data from backup if recovery is needed after a disaster. In this VSPEX solution, EMC Avamar provides backup and recovery for the stack, which supports up to 2,000 virtual machines.

EMC Avamar

EMC Avamar provides methods to back up virtual desktops using either image-level or guest-based operations. Avamar runs the deduplication engine at the virtual machine disk (VHDX) level for image backups and at the file level for guest-based backups.

Image-level protection enables backup clients to make a copy of all the virtual disks and configuration files associated with a particular virtual desktop in the event of hardware failure, corruption, or accidental deletion of the virtual desktop. Avamar significantly reduces the backup and recovery time of the virtual desktop by using change block tracking (CBT) on both backup and recovery.

Guest-based protection runs like a traditional backup solution. Guest-based backup can be used on any virtual machine running an operating system for which an Avamar backup client is available. It enables detailed control over the content, with inclusion and exclusion patterns, and can be used to prevent data loss due to user errors, such as accidental file deletion. Installing the desktop/laptop agent on the system to be protected enables self-service recovery of user data.

ShareFile

ShareFile is a cloud-based file-sharing and storage service built for enterprise-class storage and security. ShareFile enables users to securely share documents with other users. ShareFile users include employees and users who are outside of the enterprise directory (referred to as clients).

ShareFile StorageZones

ShareFile StorageZones allows businesses to share files across the company while addressing compliance and regulatory concerns. StorageZones allows customers to keep their data on storage systems that are onsite. It allows for sharing of large files with full encryption and provides the ability to synchronize files with multiple devices. By keeping data onsite and closer to users than data residing on the public cloud, StorageZones can provide improved performance as well as improved security.

ShareFile StorageZones allows you to:

- Use StorageZones with, or instead of, the ShareFile-managed cloud storage.
- Configure Citrix CloudGateway Enterprise to integrate ShareFile services with Citrix Receiver for user authentication and user provisioning.
- Take advantage of automated reconciliation between the ShareFile cloud and a company's StorageZones deployment.
- Enable automated antivirus scans of uploaded files.

- Enable file recovery from the Storage Center backup (the server component of a StorageZone is called a Storage Center). You can browse the file records for a particular date and time and tag any files and folders to restore from the Storage Center backup.

ShareFile StorageZone Architecture

Figure 13 shows the ShareFile high-level architecture.

Figure 13. ShareFile high-level architecture

ShareFile consists of three components:

- Client accesses the ShareFile service through one of the native tools, such as a browser or Citrix Receiver, or directly through the application programming interface (API).
- Control Plane performs functions such as storing files, folders, and account information, access control, reporting, and various other brokering functions. The Control Plane resides in multiple Citrix data centers located worldwide.
- StorageZone defines the locations where data is stored. The server component of a StorageZone is called a Storage Center. High availability requires at least two Storage Centers per StorageZone, and a StorageZone must use a single file share for all of its Storage Centers.

ShareFile Storage Center extends the ShareFile Software-as-a-Service (SaaS) cloud storage by providing the ShareFile account with on-premises private storage, referred to as a StorageZone. The ShareFile on-premises storage differs from cloud storage as follows:

- ShareFile-managed cloud storage is a public multi-tenant storage system maintained by Citrix.
- A ShareFile Storage Center is a private single-tenant storage system maintained by the customer that can be used only by approved customer accounts.

By default, ShareFile stores data in the secure ShareFile-managed cloud storage. The ShareFile Storage Center feature enables you to configure a private, onsite

StorageZone. A StorageZone defines the locations where data is stored and enables performance optimization by locating data storage close to users. Determine the number of StorageZones and their locations based on the organization's performance and compliance requirements. In general, assigning users to the StorageZone that is geographically closest to them is the best practice for optimizing performance.

Storage Center is a web service that handles all HTTPS operations from end users and the ShareFile control subsystem. The ShareFile control subsystem handles all operations not related to file contents, such as authentication, authorization, file browsing, configuration, metadata, sending and requesting files, and load balancing. The control subsystem also performs Storage Center health checks and prevents offline servers from sending requests. The ShareFile control subsystem is maintained in Citrix Online data centers.

The ShareFile storage subsystem handles operations related to file contents, such as uploads, downloads, and antivirus verification. When you create a StorageZone, you are creating a private storage subsystem for your ShareFile data.

For a production deployment of ShareFile, the recommended best practice is to use at least two servers with Storage Center installed for high availability. When you install Storage Center, you create a StorageZone. You can then install Storage Center on another server and join it to the same StorageZone. Storage Centers that belong to the same StorageZone must use the same file share for storage.

Using ShareFile StorageZone with VSPEX architectures

Figure 14 illustrates the VSPEX end-user computing for Citrix XenDesktop environment with added infrastructure to support ShareFile StorageZone with Storage Center. Server capacity is specified in generic terms for required minimums of CPU and memory. The customer is free to select the server and networking hardware that meets or exceeds the stated minimums. The recommended storage delivers a highly available architecture for the ShareFile StorageZone deployment.

Figure 14. Logical architecture: VSPEX end-user computing for Citrix XenDesktop with ShareFile StorageZone

Server

A high-availability production environment requires a minimum of two servers (virtual machines) with Storage Center installed. Table 2 summarizes the CPU and memory requirements for implementing ShareFile StorageZone with Storage Center.

Table 2. Minimum hardware resources to support ShareFile StorageZone with Storage Center

Component        CPU (cores)   Memory (GB)   Reference
Storage Center   2             4             Storage Center system requirements on Citrix eDocs

Network

Provide sufficient network ports to support the two additional Storage Center servers. You can implement the networking components using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Storage

ShareFile StorageZone requires a CIFS share to provide private data storage for Storage Center. The EMC VNX storage family provides both file and block access with a broad feature set, making it an ideal choice for a ShareFile StorageZone storage implementation.

The EMC VNX series supports a wide range of business-class features ideal for ShareFile StorageZone storage, including:

- Fully Automated Storage Tiering for Virtual Pools (FAST VP)
- FAST Cache
- Data compression and file deduplication
- Thin provisioning
- Replication
- Checkpoints
- File-level retention
- Quota management

Table 3 lists the recommended EMC VNX storage for the ShareFile StorageZone CIFS share.

Table 3. Recommended EMC VNX storage needed for ShareFile StorageZone CIFS share

CIFS share configuration (the configuration assumes that each user utilizes 10 GB of private storage space):

- For 500 users: 2 x Data Movers (active/standby, CIFS variant only); 8 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
- For 1,000 users: 2 x Data Movers (active/standby, CIFS variant only); 16 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
- For 2,000 users: 2 x Data Movers (active/standby, CIFS variant only); 24 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
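A quick back-of-the-envelope check, sketched below in Python, shows how the 10 GB-per-user assumption relates to the raw capacity of the recommended NL-SAS configurations. RAID and file-system overhead are deliberately ignored, so the raw figures overstate usable capacity; the point is only that the configurations leave comfortable headroom.

```python
# Capacity sanity check for the CIFS share configurations in Table 3.
# Assumptions: 10 GB per user (as stated above); raw drive capacity only,
# with RAID and file-system overhead not modeled.

CONFIGS = {500: 8, 1000: 16, 2000: 24}  # users -> number of 2 TB NL-SAS drives

for users, drives in CONFIGS.items():
    required_tb = users * 10 / 1024  # 10 GB per user, converted to TB
    raw_tb = drives * 2              # 2 TB per NL-SAS drive
    print(f"{users} users: need ~{required_tb:.1f} TB, raw capacity {raw_tb} TB")
```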

Chapter 4 Solution Overview

This chapter presents the following topics:

- Solution overview
- Solution architecture
- Server configuration guidelines
- Network configuration guidelines
- Storage configuration guidelines
- High availability and failover
- Validation test profile
- Backup environment configuration guidelines
- Sizing guidelines
- Reference workload
- Applying the reference workload
- Implementing the reference architectures
- Quick assessment

Solution overview

This chapter provides a comprehensive guide to the major aspects of this solution. Server capacity is specified in generic terms for required minimums of CPU, memory, and network interfaces. You can select the server and networking hardware that meets or exceeds the stated minimums. EMC has validated the specified storage architecture, along with a system meeting the server and network requirements outlined, to provide high levels of performance while delivering a highly available architecture for your end-user computing deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual desktops and has been validated by EMC. In practice, each virtual desktop type has its own set of requirements that rarely fit a predefined idea of what a virtual desktop should be. In any discussion about end-user computing, a reference workload should first be defined. Not all desktops perform the same tasks, and building a reference that takes into account every possible combination of workload characteristics is impractical.

Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual desktop in an existing environment might not be equal to one virtual desktop in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. Applying the reference workload provides a detailed description.

Solution architecture

We validated the VSPEX end-user computing solution with EMC VNX at three different points of scale. (In this guide, "we" refers to the EMC Solutions engineering team that validated the solution.) These defined configurations form the basis of creating a custom solution, and these points of scale are defined in terms of the reference workload.

Logical architecture

The architecture diagrams in this section show the layout of the major components in the solutions for the two storage variants: SMB and FC.

Figure 15 depicts the logical architecture of the SMB variant, where 10 GbE carries all network traffic.

Figure 15. Logical architecture for SMB variant

Note: You can implement the networking components of the solution using 1 Gb/s or 10 Gb/s IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Figure 16 depicts the logical architecture of the FC variant, wherein an FC SAN carries storage traffic and 10 GbE carries management and application traffic.

Figure 16. Logical architecture for FC variant

Note: You can implement the networking components of the solution using 1 Gb/s or 10 Gb/s IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Key components

Citrix XenDesktop 7 delivery controller

We used two Citrix XenDesktop controllers to provide redundant virtual desktop delivery, authenticate users, manage the assembly of users' virtual desktop environments, and broker connections between users and their virtual desktops. In this reference architecture, the controllers are installed on Windows Server 2012 and hosted as virtual machines on Hyper-V Server 2012.

Citrix Provisioning Services server

We used two Citrix Provisioning Services (PVS) servers to provide redundant stream services to stream desktop images from vdisks, as needed, to target devices. In this reference architecture, vdisks are stored on a CIFS share that is hosted by the VNX storage system.

Virtual desktops

We provisioned virtual desktops running Windows 7 or Windows 8 using MCS and PVS.

Microsoft Hyper-V Server 2012

Microsoft Hyper-V provides a common virtualization layer to host a server environment. Table 13 lists the specific characteristics of the validated environment. Microsoft Hyper-V Server 2012 provides a highly available infrastructure through features such as the following:

- Live Migration: Provides live migration of virtual machines within clustered and non-clustered servers with no virtual machine downtime or service disruption
- Storage Live Migration: Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption

Microsoft System Center Virtual Machine Manager 2012 SP1

Microsoft System Center Virtual Machine Manager provides a scalable and extensible platform that forms the foundation of virtualization management for the Microsoft Hyper-V cluster. Microsoft System Center Virtual Machine Manager manages all Hyper-V hosts and their virtual machines.

SQL Server

Microsoft System Center Virtual Machine Manager and the XenDesktop controllers require a database service to store configuration and monitoring details. A Microsoft SQL Server 2012 instance running on Windows Server 2012 is used for this purpose.

Active Directory server

Active Directory (AD) services are required for the various solution components to function properly. The Microsoft AD Directory Service running on a Windows Server 2012 server is used for this purpose.

DHCP server

The DHCP server centrally manages the IP address scheme for the virtual desktops. This service is hosted on the same virtual machine as the domain controller and DNS server. The Microsoft DHCP Service running on a Windows Server 2012 server is used for this purpose.

DNS server

DNS services are required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows Server 2012 server is used for this purpose.

EMC SMI-S Provider for Microsoft System Center Virtual Machine Manager 2012 SP1

EMC SMI-S Provider for Microsoft System Center Virtual Machine Manager is a plug-in to Microsoft System Center Virtual Machine Manager that provides storage management for EMC arrays directly from the client. EMC SMI-S Provider helps provide a unified management interface.

IP/Storage networks

All network traffic is carried by a standard Ethernet network with redundant cabling and switching. User and management traffic is carried over a shared network, while SMB storage traffic is carried over a private, non-routable subnet.

IP network

The Ethernet network infrastructure provides IP connectivity between virtual desktops, Hyper-V clusters, and VNX storage. For the SMB variant, the IP infrastructure allows Hyper-V servers to access CIFS shares on the VNX, and desktop streaming from PVS servers, with high bandwidth and low latency. It also allows desktop users to redirect their user profiles and home directories to the centrally maintained CIFS shares on the VNX.

Fibre Channel (FC) network

For the FC variant, storage traffic between all Hyper-V hosts and the VNX storage system is carried over an FC network. All other traffic is carried over the IP network.

EMC VNX5400 array

A VNX5400 array provides storage by presenting SMB/FC storage to Hyper-V hosts for up to 1,000 virtual desktops.

EMC VNX5600 array

A VNX5600 array provides storage by presenting SMB/FC storage to Hyper-V hosts for up to 2,000 virtual desktops.

VNX family storage arrays include the following components:

- Storage processors (SPs) support block data with UltraFlex I/O technology that supports Fibre Channel, iSCSI, and Fibre Channel over Ethernet (FCoE) protocols. The SPs provide access for all external hosts and for the file side of the VNX array.
- The Disk-Processor Enclosure (DPE) is 3U in size and houses each storage processor as well as the first tray of disks. This form factor is used in the VNX5400 and VNX5600.
- X-Blades (or Data Movers) access data from the back end and provide host access using the same UltraFlex I/O technology, which supports the NFS, CIFS, MPFS, and pNFS protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists.
- The Data Mover Enclosure (DME) is 2U in size and houses the Data Movers (X-Blades). The DME is similar in form to the SPE and is used on all VNX models that support file protocols.
- Standby power supplies are 1U in size and provide enough power to each storage processor to ensure that any data in flight is de-staged to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes are reconciled and persisted.
- Control Stations are 1U in size and provide management functions to the file-side components referred to as X-Blades. The Control Station is responsible for

X-Blade failover. The Control Station can optionally be configured with a matching secondary Control Station to ensure redundancy on the VNX array.
- Disk-Array Enclosures (DAEs) house the drives used in the array.

EMC Avamar

Avamar software provides the platform for protection of virtual machines. This protection strategy uses persistent virtual desktops. It also enables image protection and end-user recoveries.

Hardware resources

Table 4 lists the hardware used in this solution.

Table 4. Solution hardware

Servers for virtual desktops (total server capacity required to host virtual desktops):

CPU:
- Desktop OS: 1 vCPU per desktop (8 desktops per core); 63 cores across all servers for 500 virtual desktops, 125 cores for 1,000, and 250 cores for 2,000
- Server OS: 0.2 vCPU per desktop (5 desktops per core); 100 cores across all servers for 500 virtual desktops, 200 cores for 1,000, and 400 cores for 2,000

Memory:
- Desktop OS: 2 GB RAM per desktop; 1 TB RAM across all servers for 500 virtual desktops, 2 TB for 1,000, and 4 TB for 2,000
- Server OS: 0.6 GB RAM per desktop; 300 GB RAM across all servers for 500 virtual desktops, 600 GB for 1,000, and 1.2 TB for 2,000

Network:
- 6 x 1 GbE NICs per standalone server for 500 virtual desktops
- 3 x 10 GbE NICs per blade chassis or 6 x 1 GbE NICs per standalone server for 1,000 or 2,000 virtual desktops

Network infrastructure (redundant LAN configuration for the SMB variant; redundant LAN/SAN configuration for the FC variant):

Minimum switching capability for the SMB variant:
- Two physical switches
- 6 x 1 GbE ports per Hyper-V server, or 3 x 10 GbE ports per blade chassis
- 1 x 1 GbE port per Control Station for management
- 2 x 10 GbE ports per Data Mover for data

Minimum switching capability for the FC variant:
- 2 x 1 GbE ports per Hyper-V server
- 4 x 4/8 Gb FC ports for the VNX back end
- 2 x 4/8 Gb FC ports per Hyper-V server

Storage, common to both variants:
- 2 x 10 GbE interfaces per Data Mover
- 2 x 8 Gb FC ports per storage processor (FC variant only)

VNX shared storage for virtual desktops:

For 500 virtual desktops:
- 2 x Data Movers (active/standby, SMB variant only)
- 600 GB 15k rpm 3.5-inch SAS disks; the drive count depends on the provisioning option (PVS or MCS, with or without PvD, or HSD), as listed in Table 10
- 2 x 100 GB 3.5-inch flash drives

For 1,000 virtual desktops:
- 2 x Data Movers (active/standby, SMB variant only)
- 600 GB 15k rpm 3.5-inch SAS disks; the drive count depends on the provisioning option, as listed in Table 10
- 2 x 100 GB 3.5-inch flash drives

For 2,000 virtual desktops:
- 2 x Data Movers (active/standby, SMB variant only)
- 600 GB 15k rpm 3.5-inch SAS disks; the drive count depends on the provisioning option, as listed in Table 10
- 100 GB 3.5-inch flash drives for FAST Cache

Optional for user data:
- For 500 virtual desktops: 16 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
- For 1,000 virtual desktops: 24 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
- For 2,000 virtual desktops: 48 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks

Storage for infrastructure virtual machines (optional):
- For 500 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks
- For 1,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks
- For 2,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks

Shared infrastructure:

In most cases, a customer environment already has infrastructure services such as Active Directory and DNS configured. The setup of these services is beyond the scope of this document. These services can be migrated into VSPEX post-deployment, but they must exist before VSPEX can be deployed. If this solution is being implemented with no existing infrastructure, the following minimum number of additional servers is required:
- 2 x physical servers
- 20 GB RAM per server
- 4 x processor cores per server
- 2 x 1 GbE ports per server

EMC next-generation backup (Avamar):
- 1 x Gen4 utility node
- 1 x Gen4 3.9 TB spare node
- 3 x Gen4 3.9 TB storage nodes

Servers for customer infrastructure (these servers and the roles they fulfill might already exist in the customer environment); minimum number required:
- 2 x physical servers
- 20 GB RAM per server
- 4 x processor cores per server
- 2 x 1 GbE ports per server

Software resources

Table 5 lists the software used in this solution.

Table 5. Solution software

VNX5400 or VNX5600 (shared storage, file systems):
- VNX Operating Environment (OE) for file: Release
- VNX OE for block: Release 33
- ESI for Windows: Version 3.0

XenDesktop desktop virtualization:
- Citrix XenDesktop Controller: Version 7, Platinum Edition
- Operating system for XenDesktop Controller: Windows Server 2012 Standard Edition
- Microsoft SQL Server: Version 2012 Standard Edition

Next-generation backup:
- Avamar: 7.0

Microsoft Hyper-V:
- Hyper-V Server: Hyper-V Server 2012
- System Center Virtual Machine Manager: 2012 SP1
- Operating system for System Center Virtual Machine Manager: Windows Server 2012 Standard Edition
- PowerPath (FC variant only): 5.7

Virtual desktops (other than the base OS, this software was used for solution validation and is not required):
- Base operating system: Microsoft Windows 7 Enterprise (32-bit) SP1; Windows Server 2008 R2 SP1 Standard Edition
- Microsoft Office: Office Enterprise 2007 SP3
- Internet Explorer
- Adobe Reader: 9.1
- Adobe Flash Player
- Bullzip PDF Printer
- FreeMind

Sizing for validated configuration

When selecting servers for this solution, ensure that the processor core meets or exceeds the performance of the Intel Nehalem family at 2.66 GHz. As servers with greater processor speeds, performance, and higher core density become available, you can consolidate servers as long as the required total core and memory count is met and a sufficient number of servers are incorporated to support the necessary level of high availability.

As with servers, you can also consolidate network interface card (NIC) speed and quantity, as long as you maintain the overall bandwidth requirements for this solution and sufficient redundancy to support high availability.

Table 6 shows the configurations of the servers that support this solution. Each server has two four-core sockets and 128 GB of RAM, plus two 10 GbE ports per blade chassis.

Table 6. Configurations that support this solution

Desktop type   No. of servers   No. of virtual desktops   Total cores   Total RAM
Desktop OS     8                500                       64            1 TB
               16               1,000                     128           2 TB
               32               2,000                     256           4 TB
Server OS      13               500                       104           1,664 GB
               25               1,000                     200           3,200 GB
               50               2,000                     400           6.4 TB

As shown in Table 13, to support eight virtual desktops, at least one core is required, with a minimum of 2 GB of RAM per desktop. Consider the correct balance of memory and cores required for the number of virtual desktops to be supported by a server. For example, a server that supports 24 virtual desktops requires a minimum of three cores but also a minimum of 48 GB of RAM.

IP network switches used to implement this reference architecture must have a minimum non-blocking backplane capacity of 96 Gb/s (for 500 virtual desktops), 192 Gb/s (for 1,000 virtual desktops), or 320 Gb/s (for 2,000 virtual desktops), and must support the following features:
- IEEE 802.1x
- Ethernet flow control
- 802.1q VLAN tagging
- Ethernet link aggregation using IEEE 802.1ax (802.3ad) Link Aggregation Control Protocol
- SNMP management capability
- Jumbo frames

Choose the number and type of switches required to support high availability, and choose a network vendor that can provide easily available parts, good service, and optimal support contracts.

The network configuration should include the following:
- A minimum of two switches to support redundancy
- Redundant power supplies
- A minimum of forty 1 GbE ports (for 500 virtual desktops), two 1 GbE and fourteen 10 GbE ports (for 1,000 virtual desktops), or two 1 GbE and twenty-two 10 GbE ports (for 2,000 virtual desktops), distributed for high availability
- The appropriate uplink ports for customer connectivity

While the use of 10 GbE ports should align with those on the server and storage, keep in mind the overall network requirements for the solution and the level of redundancy required to support high availability. Consider additional server NICs and storage connections for specific implementation requirements.

The management infrastructure (Active Directory, DNS, DHCP, and SQL Server) can be supported on two servers similar to those previously defined, but requires a minimum of only 20 GB of RAM instead of 128 GB.

Server configuration guidelines

When you are designing and ordering the compute/server layer of the VSPEX solution, consider several factors that might alter the final purchase. From a virtualization perspective, if you fully understand the system's workload, features like dynamic memory can reduce the aggregate memory requirement. If the virtual desktop pool does not have a high level of peak or concurrent usage, the number of vCPUs can be reduced. Conversely, if the applications being deployed are highly computational in nature, the number of CPUs and the amount of memory purchased might need to be increased.

Table 7 provides configuration details for the virtual desktop servers and network hardware. (A sizing sketch based on these per-desktop ratios follows the table.)

Table 7. Server hardware

Servers for virtual desktops:

CPU:
- Desktop OS: 1 vCPU per desktop (8 desktops per core); 63 cores across all servers for 500 virtual desktops, 125 cores for 1,000, and 250 cores for 2,000
- Server OS: 0.2 vCPU per desktop (5 desktops per core); 100 cores across all servers for 500 virtual desktops, 200 cores for 1,000, and 400 cores for 2,000

Memory:
- Desktop OS: 2 GB RAM per desktop; 1 TB RAM across all servers for 500 virtual desktops, 2 TB for 1,000, and 4 TB for 2,000; plus a 2 GB RAM reservation per Hyper-V host
- Server OS: 0.6 GB RAM per desktop; 300 GB RAM across all servers for 500 virtual desktops, 600 GB for 1,000, and 1.2 TB for 2,000; plus a 2 GB RAM reservation per Hyper-V host

Network:
- 6 x 1 GbE NICs per server for 500 virtual desktops
- 3 x 10 GbE NICs per blade chassis or 6 x 1 GbE NICs per standalone server for 1,000 or 2,000 virtual desktops
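As promised above, the following Python helper applies the per-desktop ratios from Table 7, plus the 2 GB parent-partition reservation discussed later in this chapter, to estimate cores, RAM, and host count. The 8-core/128 GB host profile matches Table 6; the helper is a sizing sketch only, and results should always be checked against the reference workload.

```python
# Sizing sketch using the per-desktop ratios from Table 7.
# Desktop OS: 8 desktops per core, 2 GB RAM per desktop.
# Server OS: 5 desktops per core, 0.6 GB RAM per desktop.
# Each Hyper-V host reserves 2 GB for the parent partition.

import math

def size_compute_layer(desktops, os_type="desktop", host_cores=8, host_ram_gb=128):
    if os_type == "desktop":
        cores = math.ceil(desktops / 8)
        ram_gb = desktops * 2
    else:  # hosted shared desktops on a server OS
        cores = math.ceil(desktops / 5)
        ram_gb = math.ceil(desktops * 0.6)
    # A host must satisfy both the core and the RAM requirement.
    hosts = max(math.ceil(cores / host_cores),
                math.ceil(ram_gb / (host_ram_gb - 2)))  # 2 GB parent reservation
    return cores, ram_gb, hosts

print(size_compute_layer(1000))            # (125, 2000, 16) -- matches Table 6
print(size_compute_layer(1000, "server"))  # (200, 600, 25)  -- matches Table 6
```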

Microsoft Hyper-V memory virtualization for VSPEX

Microsoft Hyper-V has a number of advanced features that help maximize performance and overall resource utilization. The most important of these pertain to memory management. This section describes some of these features and the items you must consider when using them in the environment.

In general, you can consider the virtual machines on a single hypervisor as consuming memory from a pool of resources. Figure 17 shows an example of memory consumption at the hypervisor level.

Figure 17. Hypervisor memory consumption

Dynamic Memory

Dynamic Memory, introduced in Windows Server 2008 R2 SP1, increases physical memory efficiency by treating memory as a shared resource and allocating it to virtual machines dynamically. The actual consumed memory of each virtual machine is adjusted on demand. Dynamic Memory enables more virtual machines to run by reclaiming unused memory from idle virtual machines. In Windows Server 2012, Dynamic Memory also enables a dynamic increase in the maximum memory available to virtual machines.

Non-Uniform Memory Access

Non-Uniform Memory Access (NUMA) is a multi-node computer technology that enables a CPU to access remote-node memory. This type of memory access is costly in terms of performance. However, Windows Server 2012 employs a process affinity that strives to keep threads pinned to a particular CPU to avoid remote-node memory access. In previous versions of Windows, this feature was available only to the host. Windows Server 2012 extends this functionality to virtual machines, where it improves performance.

Smart Paging

With Dynamic Memory, Hyper-V allows virtual machines to exceed the physically available memory, so there can be a gap between a virtual machine's minimum memory and its startup memory. Smart Paging is a memory management technique that uses disk resources as a temporary memory replacement: it swaps out less-used memory to disk storage and swaps it back in when needed. The drawback is that this can degrade performance. Hyper-V continues to use guest paging when the host memory is oversubscribed, because it is more efficient than Smart Paging. (A small sketch of these memory settings appears at the end of this section.)

Memory configuration guidelines

This section provides guidelines for allocating memory to virtual machines. The guidelines outlined here take into account Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V memory overhead

Memory virtualization incurs associated overhead, including the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine. For this solution, leave at least 2 GB of memory for the Hyper-V parent partition.

Allocating memory to virtual machines

The proper sizing of memory for a virtual machine in VSPEX architectures is based on many factors. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments, as discussed later in this paper. Table 13 outlines the resources used by a single virtual machine.
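The following small Python sketch models the three per-VM Dynamic Memory settings and the startup/minimum gap that Smart Paging may have to bridge when a virtual machine restarts under memory pressure. The class and field names are illustrative; they are not Hyper-V API objects or cmdlet parameters.

```python
# Illustrative model of the Dynamic Memory settings discussed above.
# Names are invented for the example, not Hyper-V API identifiers.

from dataclasses import dataclass

@dataclass
class DynamicMemoryConfig:
    minimum_mb: int   # memory Hyper-V may reclaim down to while the VM is idle
    startup_mb: int   # memory the guest needs to boot
    maximum_mb: int   # upper bound for dynamic expansion

    def smart_paging_window_mb(self) -> int:
        # The gap a restart may have to bridge with Smart Paging when the
        # host cannot immediately supply the full startup amount.
        return self.startup_mb - self.minimum_mb

cfg = DynamicMemoryConfig(minimum_mb=512, startup_mb=1024, maximum_mb=2048)
print(cfg.smart_paging_window_mb())  # 512
```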

Network configuration guidelines

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines take into account jumbo frames, VLANs, and LACP on EMC unified storage. Table 8 provides detailed network resource requirements.

Table 8. Hardware resources for network

Network infrastructure, minimum switching capacity:

Block variant:
- 2 physical switches
- 2 x 10 GbE ports per Microsoft Hyper-V server
- 1 x 1 GbE port per Control Station for management
- 2 x FC/CEE/10 GbE ports per Microsoft Hyper-V server, for the storage network
- 2 x FC/CEE/10 GbE ports per SP, for desktop data
- 2 x 10 GbE ports per Data Mover for user data

File variant:
- 2 physical switches
- 4 x 10 GbE ports per Microsoft Hyper-V server
- 1 x 1 GbE port per Control Station for management
- 2 x 10 GbE ports per Data Mover for data

Note: The solution can use 1 Gb network infrastructure as long as the underlying requirements for bandwidth and redundancy are fulfilled.

VLAN

It is a best practice to isolate network traffic so that the traffic between hosts and storage, the traffic between hosts and clients, and management traffic all move over isolated networks. In some cases, physical isolation might be required for regulatory or policy compliance reasons, but in many cases logical isolation using VLANs is sufficient. This solution calls for a minimum of three VLANs:
- Client access
- Storage
- Management

The VLANs are illustrated in Figure 18.

Figure 18. Required networks

Note: The diagram demonstrates the network connectivity requirements for a VNX array using 10 GbE network connections. Create a similar topology for an array using 1 GbE network connections.

The client access network is for users of the system (clients) to communicate with the infrastructure. The storage network is used for communication between the compute layer and the storage layer. The management network gives administrators a dedicated way to access the management connections on the storage array, network switches, and hosts.

Notes:
- Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. These additional networks can be implemented, but they are not required.
- If you choose the Fibre Channel storage network option for the deployment, similar best practices and design principles apply.

Enable jumbo frames

This EMC VSPEX end-user computing solution recommends setting the MTU to 9,000 (jumbo frames) for efficient storage and migration traffic.

Link aggregation

A link aggregation resembles an Ethernet channel but uses the IEEE 802.3ad Link Aggregation Control Protocol (LACP) standard, which supports link aggregations of two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port, and all network traffic is distributed across the active links.

Storage configuration guidelines

Hyper-V allows more than one method of using storage when hosting virtual machines. We tested the solutions described in this section and in Table 9 using SMB, and the storage layout described adheres to all current best practices. Customers and architects can make modifications based on their understanding of the system's usage and load, if required.

This solution used Login VSI to simulate a user load against the desktops. Login VSI provides guidance for gauging the maximum number of users a desktop environment can support. The Login VSI medium workload was selected for this testing. The storage layouts for 500, 1,000, and 2,000 desktops are validated when the Login VSI average response time stays below the dynamically calculated maximum threshold, known as VSImax dynamic. Login VSI has two ways of defining the maximum threshold: classic and dynamic VSImax. The classic VSImax threshold is defined as 4,000 milliseconds, whereas the dynamic VSImax threshold is calculated based on the initial response time of the user activities.

Table 9. Storage hardware

Common to all scales:
- 2 x 10 GbE interfaces per Data Mover
- 2 x 8 Gb FC ports per storage processor (FC variant only)

VNX shared storage for virtual desktops:
- For 500 virtual desktops: 2 x Data Movers (active/standby, SMB variant only); 600 GB 15k rpm 3.5-inch SAS disks, with the drive count depending on the provisioning option (see Table 10); 2 x 100 GB 3.5-inch flash drives
- For 1,000 virtual desktops: 2 x Data Movers (active/standby, SMB variant only); 600 GB 15k rpm 3.5-inch SAS disks, with the drive count depending on the provisioning option (see Table 10); 2 x 100 GB 3.5-inch flash drives
- For 2,000 virtual desktops: 2 x Data Movers (active/standby, SMB variant only); 600 GB 15k rpm 3.5-inch SAS disks, with the drive count depending on the provisioning option (see Table 10); 100 GB 3.5-inch flash drives for FAST Cache

Optional for user data:
- For 500 virtual desktops: 16 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
- For 1,000 virtual desktops: 24 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
- For 2,000 virtual desktops: 48 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks

Optional for infrastructure storage:
- For 500 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks
- For 1,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks
- For 2,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks

Hyper-V storage virtualization for VSPEX

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance.

Windows Server 2012 Hyper-V and Failover Clustering use Cluster Shared Volumes (CSV) v2 and the new virtual hard disk format (VHDX) to virtualize storage presented from an external shared storage system to host virtual machines. In Figure 19, the storage array presents either block-based LUNs (as CSVs) or a file-based CIFS share (as SMB shares) to the Windows hosts that run virtual machines.

Figure 19. Hyper-V virtual disk types

CIFS

Windows Server 2012 supports using CIFS (SMB 3.0) file shares as shared storage for Hyper-V virtual machines.

CSV

A Cluster Shared Volume (CSV) is a shared disk containing an NTFS volume that is made accessible to all nodes of a Windows failover cluster. It can be deployed over any SCSI-based local or network storage.

Pass-through disks

Windows Server 2012 also supports pass-through disks, which allow a virtual machine to access a physical disk that is mapped to the host but does not have a volume configured on it.

SMB 3.0 (file-based storage only)

The SMB protocol is the file-sharing protocol used by default in Windows environments. Windows Server 2012 introduces a vast set of new SMB features with an updated (SMB 3.0) protocol. Some of the key features available with Windows Server 2012 SMB 3.0 are:
- SMB Transparent Failover
- SMB Scale Out
- SMB Multichannel
- SMB Direct
- SMB Encryption
- VSS for SMB file shares
- SMB Directory Leasing
- SMB PowerShell

With these new features, SMB 3.0 offers richer capabilities that, combined, provide organizations with a high-performance storage alternative to traditional Fibre Channel storage solutions, at a lower cost.

Note: SMB is also known as Common Internet File System (CIFS). For more details about SMB 3.0, refer to EMC VNX Series: Introduction to SMB 3.0 Support.

ODX (block-based storage only)

Offloaded Data Transfer (ODX) is a feature of the storage stack in Microsoft Windows Server 2012 that lets you use your investment in external storage arrays to offload data transfers from the server to the storage arrays. When used with storage hardware that supports ODX, file copy operations are initiated by the host but performed by the storage device. ODX eliminates the data transfer between the storage and the Hyper-V hosts by using a token-based mechanism for reading and writing data within or between storage arrays, which reduces the load on your network and hosts.

Using ODX helps enable rapid cloning and migration of virtual machines. Because the file transfer is offloaded to the storage array when using ODX, host resource usage, such as CPU and network, is significantly reduced. By maximizing the use of the storage array, ODX minimizes latencies and improves the transfer speed of large files, such as database or video files. When ODX-supported file operations are performed, data transfers are automatically offloaded to the storage array and are transparent to users. ODX is enabled by default in Windows Server 2012.
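The token-based mechanism that ODX uses can be illustrated with a toy Python model: the host requests a token that represents a byte range, then hands the token back for the write, and the copy itself never leaves the array. This is a conceptual sketch of the idea described above, not the Windows ODX API.

```python
# Toy model of token-based copy offload: the host only ever exchanges a
# token with the array; the data movement stays inside the array.

import uuid

class Array:
    """Stands in for a storage array that supports offloaded copies."""
    def __init__(self):
        self.volumes = {}  # volume name -> bytearray
        self.tokens = {}   # token -> (source volume, offset, length)

    def populate_token(self, volume, offset, length):
        token = uuid.uuid4().hex
        self.tokens[token] = (volume, offset, length)
        return token  # the only thing the host holds

    def write_using_token(self, token, dst_volume, dst_offset):
        src_volume, offset, length = self.tokens.pop(token)
        data = self.volumes[src_volume][offset:offset + length]
        dst = self.volumes[dst_volume]
        dst[dst_offset:dst_offset + length] = data  # copy stays on the array

array = Array()
array.volumes["src"] = bytearray(b"golden image contents")
array.volumes["dst"] = bytearray(32)

token = array.populate_token("src", 0, 21)  # host asks the array for a token
array.write_using_token(token, "dst", 0)    # host hands the token back
print(array.volumes["dst"][:21])            # b'golden image contents'
```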

New virtual hard disk format

Hyper-V in Windows Server 2012 contains an update to the VHD format, called VHDX, which has a much larger capacity and built-in resiliency. The main new features of the VHDX format are:
- Support for virtual hard disk storage with a capacity of up to 64 TB
- Additional protection against data corruption during power failures, by logging updates to the VHDX metadata structures
- Optimal structure alignment of the virtual hard disk format to suit large-sector disks

The VHDX format also has the following features:
- Larger block sizes for dynamic and differencing disks, which enables the disks to meet the needs of the workload
- A 4 KB logical-sector virtual disk that enables increased performance when used by applications and workloads designed for 4 KB sectors
- The ability to store custom metadata about the file that the user might want to record, such as the operating system version or applied updates
- Space reclamation features that can result in smaller file sizes and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware)

VSPEX storage building block

Sizing the storage system to meet virtual server IOPS is a complicated process. When I/O reaches the storage array, several components serve that I/O, such as the Data Mover (for file-based storage), the SPs, the back-end dynamic random access memory (DRAM) cache, FAST Cache (if used), and the disks. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.

VSPEX uses a building block approach to reduce complexity. A building block is a set of disk spindles that can support a certain number of virtual desktops in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the end-user computing environment. Three building blocks (500, 1,000, and 2,000 desktops) are currently verified on the VNX series and provide a flexible solution for VSPEX sizing. Table 10 lists the disks required to support each scale of configuration, excluding hot spares. (A small sketch of this building-block selection logic follows.)

Note: If a configuration is started with the 500-desktop building block for MCS, it can be expanded to the 1,000-desktop building block by adding ten matching SAS drives and allowing the pool to restripe. For details about pool expansion and restriping, refer to the EMC VNX Virtual Provisioning Applied Technology White Paper.
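As promised above, the building-block selection logic reduces to a small lookup, sketched below in Python. The points of scale and platform maximums reflect Table 10 and the validated maximums discussed in the next section; desktop counts beyond 2,000 fall outside this solution's validation.

```python
# Sketch of the building-block selection described above. The validated
# points of scale and platform maximums come from Table 10 below
# (VNX5400 up to 1,000 desktops, VNX5600 up to 2,000).

def select_building_block(desktops):
    for block, platform in ((500, "VNX5400"), (1000, "VNX5400"), (2000, "VNX5600")):
        if desktops <= block:
            return block, platform
    raise ValueError("more than 2,000 desktops exceeds the validated maximums")

print(select_building_block(800))   # (1000, 'VNX5400')
print(select_building_block(1500))  # (2000, 'VNX5600')
```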

Table 10. Number of disks required for various numbers of virtual desktops (excluding hot spares)

- 500 virtual desktops: VNX5400; 2 flash drives (FAST Cache); 8 SAS drives (PVS/non-PvD); 10 SAS drives (MCS/non-PvD); 5 SAS drives (MCS/PvD)
- 1,000 virtual desktops: VNX5400; 2 flash drives (FAST Cache); 16 SAS drives (PVS/non-PvD); 20 SAS drives (MCS/non-PvD); 10 SAS drives (MCS/PvD)
- 2,000 virtual desktops: VNX5600; drive counts for each provisioning option are given in the storage layout for 2,000 virtual desktops

VSPEX end-user computing validated maximums

VSPEX end-user computing configurations are validated on the VNX5400 and VNX5600 platforms. Each platform has different capabilities in terms of processors, memory, and disks. For each array, there is a recommended maximum VSPEX end-user computing configuration. As outlined in Table 10, the recommended maximum for a VNX5400 is 1,000 desktops and the recommended maximum for a VNX5600 is 2,000 desktops.

Storage layout for 500 virtual desktops

Core storage layout with PVS provisioning

Figure 20 illustrates the layout of the disks that are required to store 500 virtual desktops with PVS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 20. Core storage layout with PVS provisioning for 500 virtual desktops

Core storage layout with PVS provisioning overview

The following core configuration is used in the reference architecture for 500 desktop virtual machines:

- Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX Operating Environment (OE).

- The EMC VNX series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_1_5 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.
- Eight SAS disks (shown here as 1_0_7 to 1_0_14) in the RAID 10 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 200 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems. The file systems are presented to the Hyper-V servers as two SMB shares. For FC, two LUNs of 1 TB each are provisioned from the pool to present to the Hyper-V servers as two CSVs.
- Two flash drives (shown here as 1_0_5 and 1_0_6) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.
- Five SAS disks (1_1_0 to 1_1_4) in the RAID 5 storage pool 2 are used to store PVS vdisks and TFTP images. FAST Cache is enabled for the entire pool.
- Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 1_1_6 to 1_1_14 are unused and were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.

Core storage layout with MCS provisioning

Figure 21 illustrates the layout of the disks that are required to store 500 virtual desktops with MCS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 21. Core storage layout with MCS provisioning for 500 virtual desktops

Core storage layout with MCS provisioning overview

The following core configuration is used in the reference architecture for 500 desktop virtual machines:

- Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.
- The EMC VNX series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_1_2 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.
- Ten SAS disks (shown here as 1_0_5 to 1_0_14) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 200 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems. The file systems are presented to the Hyper-V servers as two SMB shares. For FC, two LUNs of 1 TB each are provisioned from the pool to present to the Hyper-V servers as two CSVs.

Note: If personal vdisk is implemented, half the drives (five SAS disks for 500 desktops) are sufficient to satisfy the performance requirement. However, the desktop capacity will be reduced by 50 percent. If your environment's capacity requirement is met, implement personal vdisk with MCS provisioning with five SAS drives for 500 desktops.

- Two flash drives (shown here as 1_1_0 and 1_1_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.
- Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 1_1_3 to 1_1_14 are unused and were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.

Optional storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 22. This storage is in addition to the core storage shown in Figure 21. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 22. Optional storage layout for 500 virtual desktops

Optional storage layout overview

The optional storage layout is used to store the infrastructure servers, user profiles and home directories, and personal vdisks. The following optional configuration is used in the reference architecture for 500 virtual desktops:

- The EMC VNX series does not require a dedicated hot spare drive. The disk shown here as 0_2_14 is an unbound disk that can be used as a hot spare when needed. This disk is marked as a hot spare in the storage layout diagram.
- Five SAS disks (shown here as 0_2_0 to 0_2_4) in the RAID 5 storage pool 6 are used to store the infrastructure virtual machines. A 1 TB LUN is provisioned from the pool to present to the Hyper-V servers as a CSV.
- Sixteen NL-SAS disks (shown here as 0_2_5 to 0_2_13 and 1_2_0 to 1_2_6) in the RAID 6 storage pool 4 are used to store user data and roaming profiles. Ten LUNs of 500 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems.

If multiple drive types have been implemented, FAST VP can be enabled to automatically tier data to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, occurs as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop storage, but it can provide performance improvements when implemented for user data and roaming profiles.

- Eight SAS disks (1_2_7 to 1_2_14) in the RAID 10 storage pool 5 are used to store the personal vdisks. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 200 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems. The file systems are presented to the Hyper-V servers as two SMB shares. For FC, two LUNs of 1 TB each are provisioned from the pool to present to the Hyper-V servers as two CSVs.

Storage layout for 1,000 virtual desktops

Core storage layout with PVS provisioning

Figure 23 illustrates the layout of the disks that are required to store 1,000 virtual desktops with PVS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 23. Core storage layout with PVS provisioning for 1,000 virtual desktops

Core storage layout with PVS provisioning overview

The following core configuration is used in the reference architecture for 1,000 virtual desktops:

- Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.
- The EMC VNX series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_0_7 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.
- Sixteen SAS disks (shown here as 1_0_8 to 1_0_14 and 1_1_0 to 1_1_8) in the RAID 10 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

- For NAS, ten LUNs of 400 GB each are provisioned from the pool to provide the storage required to create four CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, four LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.
- Two Flash drives (shown here as 1_0_5 and 1_0_6) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.
- Five SAS disks (shown here as 1_1_9 to 1_1_13) on the RAID 5 storage pool 2 are used to store PVS vdisks and TFTP images. FAST Cache is enabled for the entire pool.
- The disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 1_1_14 are unbound. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.

Core storage layout with MCS provisioning

Figure 24 illustrates the layout of the disks that are required to store 1,000 virtual desktops with MCS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 24. Core storage layout with MCS provisioning for 1,000 virtual desktops

Core storage layout with MCS provisioning overview

The following core configuration is used in the reference architecture for 1,000 virtual desktops:

- Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.
- The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_1_2 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.
- Twenty SAS disks (shown here as 1_0_5 to 1_0_14 and 1_1_3 to 1_1_12) on the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 800 GB each are provisioned from the pool to provide the storage required to create four CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, four LUNs of 2 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Note: If personal vdisk is implemented, half the drives (ten SAS disks for 1,000 desktops) are sufficient to satisfy the performance requirement. However, the desktop capacity will be reduced by 50 percent. If your environment's capacity requirement is still met, implement personal vdisk with MCS provisioning with ten SAS drives for 1,000 desktops.

- Two Flash drives (shown here as 1_1_0 and 1_1_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.
- The disks shown here as 0_0_4 to 0_0_24 and 1_1_13 to 1_1_14 are unbound. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.
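The per-desktop capacity implied by these core layouts can be checked with simple arithmetic. The following Python sketch is illustrative only: the LUN counts and sizes are taken from the layouts above, and the helper function is not part of any EMC or Citrix tooling.

```python
# Sanity-check the per-desktop capacity implied by the 1,000-desktop core layouts.
# LUN figures are taken from the layout descriptions above; illustrative only.

def capacity_per_desktop_gb(lun_count, lun_size_gb, desktops):
    """Return the usable pool capacity each desktop receives, in GB."""
    return (lun_count * lun_size_gb) / desktops

# PVS: write cache only (ten 400 GB LUNs shared by 1,000 desktops)
pvs_gb = capacity_per_desktop_gb(10, 400, 1000)   # -> 4.0 GB write cache per desktop

# MCS: desktop images (ten 800 GB LUNs shared by 1,000 desktops)
mcs_gb = capacity_per_desktop_gb(10, 800, 1000)   # -> 8.0 GB per desktop

print(f"PVS write cache per desktop: {pvs_gb:.1f} GB")
print(f"MCS capacity per desktop:    {mcs_gb:.1f} GB")
```

The doubled per-desktop figure for MCS reflects that MCS keeps each desktop's differencing disk in storage pool 1, whereas PVS keeps only the write cache there and streams the master vDisk from storage pool 2.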

Optional storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 25. This storage is in addition to the core storage shown in Figure 24. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 25. Optional storage layout for 1,000 virtual desktops

Optional storage layout overview

The optional storage layout is used to store the infrastructure servers, user profiles and home directories, and Personal vdisks. The following optional configuration is used in the reference architecture for 1,000 virtual desktops:

- The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 0_2_14 and 0_3_14 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.
- Five SAS disks (shown here as 0_2_0 to 0_2_4) on the RAID 5 storage pool 6 are used to store the infrastructure virtual machines. A 1 TB LUN is provisioned from the pool and presented to the Hyper-V servers as a CSV.
- Twenty-four NL-SAS disks (shown here as 0_2_5 to 0_2_13 and 1_2_0 to 1_2_14) on the RAID 6 storage pool 4 are used to store user data and roaming profiles. Ten LUNs of 1 TB each are provisioned from the pool to provide the storage required to create two CIFS file systems.

If you have implemented multiple drive types, you can enable FAST VP to automatically tier data to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is done as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop storage, but it can provide performance improvements when implemented for user data and roaming profiles.

- Sixteen SAS disks (shown here as 0_3_0 to 0_3_13 and 1_3_0 to 1_3_1) in the RAID 10 storage pool 5 are used to store the Personal vdisks. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 400 GB each are provisioned from the pool to provide the storage required to create four CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares. For FC, four LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.
- The disks shown here as 1_3_2 to 1_3_14 are unbound. They were not used for testing this solution.
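Because FAST VP relocates data in fixed 256 MB slices, the tiering granularity on these user-data LUNs is easy to quantify. The sketch below is arithmetic only (the 1 TB LUN size comes from the layout above; the helper is not an EMC tool):

```python
# FAST VP relocates data in fixed 256 MB slices. This sketch shows how many
# slices make up one of the 1 TB user-data LUNs described above, i.e. the
# granularity at which FAST VP can promote or demote data. Illustrative only.

SLICE_MB = 256

def slices_per_lun(lun_size_gb):
    """Number of 256 MB FAST VP slices in a LUN of the given size."""
    return (lun_size_gb * 1024) // SLICE_MB

print(slices_per_lun(1024))  # 1 TB LUN -> 4096 independently tierable slices
```

Each of those 4,096 slices can land on a different tier, which is one reason this guide recommends FAST VP for the skewed access patterns of user data and roaming profiles rather than for virtual desktop storage.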

Storage layout for 2,000 virtual desktops

Core storage layout with PVS provisioning

Figure 26 illustrates the layout of the disks that are required to store 2,000 virtual desktops with PVS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 26. Core storage layout with PVS provisioning for 2,000 virtual desktops

Core storage layout with PVS provisioning overview

The following core configuration is used in the reference architecture for 2,000 virtual desktops:

- Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.
- The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4, 1_1_14, and 0_2_2 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.
- Thirty-two SAS disks (shown here as 1_0_5 to 1_0_14, 0_1_0 to 0_1_14, and 1_1_0 to 1_1_6) on the RAID 10 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

- For NAS, ten LUNs of 800 GB each are provisioned from the pool to provide the storage required to create eight CIFS file systems. The file systems are presented to the Hyper-V servers as eight SMB shares. For FC, eight LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as eight CSVs.
- Four Flash drives (shown here as 1_1_12 to 1_1_13 and 0_2_0 to 0_2_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.
- Five SAS disks (shown here as 1_1_7 to 1_1_11) on the RAID 5 storage pool 2 are used to store PVS vdisks and TFTP images. FAST Cache is enabled for the entire pool.
- The disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 0_2_3 to 0_2_14 are unbound. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.
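The pool sizes in these layouts are not arbitrary: VNX block pools are built from private RAID groups, and the drive counts here match the commonly preferred group sizes (4+1 for RAID 5, 6+2 for RAID 6, 4+4 for RAID 10). The sketch below is an informal check of that pattern, not an EMC tool; the preferred multiples reflect general VNX best practice rather than figures stated in this guide:

```python
# Informal check that each storage pool's drive count is a whole multiple of
# a preferred VNX private RAID group size (4+1 RAID 5, 6+2 RAID 6, 4+4 RAID 10).
# Pool figures are taken from the 2,000-desktop layouts; arithmetic only.

PREFERRED_GROUP = {"RAID5": 5, "RAID6": 8, "RAID10": 8}

pools = {
    "Pool 1 (desktops, PVS)":  ("RAID10", 32),
    "Pool 2 (vdisks/TFTP)":    ("RAID5", 5),
    "Pool 4 (user data)":      ("RAID6", 48),
    "Pool 5 (personal vdisk)": ("RAID10", 32),
    "Pool 6 (infrastructure)": ("RAID5", 5),
}

for name, (raid, drives) in pools.items():
    groups, remainder = divmod(drives, PREFERRED_GROUP[raid])
    status = "OK" if remainder == 0 else "check layout"
    print(f"{name}: {drives} drives = {groups} x {raid} groups ({status})")
```

If you substitute different drive counts when adapting this architecture, keeping pools at whole multiples of these group sizes preserves the even performance the validated layouts assume.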

Core storage layout with MCS provisioning

Figure 27 illustrates the layout of the disks that are required to store 2,000 virtual desktops with MCS provisioning. This layout can be used with the random, static, personal vdisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 27. Core storage layout with MCS provisioning for 2,000 virtual desktops

Core storage layout with MCS provisioning overview

The following core configuration is used in the reference architecture for 2,000 desktop virtual machines:

- Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.
- The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4, 0_1_2, and 0_2_5 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.
- Forty SAS disks (shown here as 1_0_5 to 1_0_14, 0_1_3 to 0_1_14, 1_1_2 to 1_1_14, and 0_2_0 to 0_2_4) on the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 1,600 GB each are provisioned from the pool to provide the storage required to create eight CIFS file systems. The file systems are presented to the Hyper-V servers as eight SMB shares.

- For FC, eight LUNs of 2 TB each are provisioned from the pool and presented to the Hyper-V servers as eight CSVs.

Note: If personal vdisk is implemented, half the drives (twenty SAS disks for 2,000 desktops) are sufficient to satisfy the performance requirement. However, the desktop capacity will be reduced by 50 percent. If your environment's capacity requirement is still met, implement personal vdisk with MCS provisioning with twenty SAS drives for 2,000 desktops.

- Four Flash drives (shown here as 0_1_0 to 0_1_1 and 1_1_0 to 1_1_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.
- The disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 0_2_6 to 0_2_14 are unbound. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.
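A quick way to validate a layout like Figure 27 before ordering hardware is to confirm that the per-enclosure drive assignments add up to the enclosure's slot count. The sketch below does this for the MCS core layout; it assumes Bus 0 Enclosure 0 is a 25-slot enclosure (it holds drives 0_0_0 through 0_0_24) and that the remaining DAEs have 15 slots, and the role counts are read from the bullets above:

```python
# Slot accounting for the 2,000-desktop MCS core layout (Figure 27).
# Drive ranges come from the bullets above; enclosure slot counts are
# assumptions inferred from the drive indices. Illustrative only.

ENCLOSURE_SLOTS = {"0_0": 25, "1_0": 15, "0_1": 15, "1_1": 15, "0_2": 15}

assignments = {  # bus_enclosure -> list of (role, drive count)
    "0_0": [("VNX OE", 4), ("unbound", 21)],
    "1_0": [("unbound", 4), ("hot spare", 1), ("pool 1", 10)],
    "0_1": [("FAST Cache", 2), ("hot spare", 1), ("pool 1", 12)],
    "1_1": [("FAST Cache", 2), ("pool 1", 13)],
    "0_2": [("pool 1", 5), ("hot spare", 1), ("unbound", 9)],
}

for enclosure, roles in assignments.items():
    used = sum(count for _, count in roles)
    assert used == ENCLOSURE_SLOTS[enclosure], f"{enclosure} over/under-subscribed"
    print(f"Bus/Enclosure {enclosure}: {used}/{ENCLOSURE_SLOTS[enclosure]} slots accounted for")
```

Every slot in the validated configuration is accounted for as OE, pool, FAST Cache, hot spare, or unbound capacity, which is worth re-checking whenever you substitute larger drives or rebalance pools.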

Optional storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 28. This storage is in addition to the core storage shown in Figure 27. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 28. Optional storage layout for 2,000 virtual desktops

Optional storage layout overview

The optional storage layout is used to store the infrastructure servers, user profiles and home directories, and Personal vdisks. The following optional configuration is used in the reference architecture for 2,000 virtual desktops:

- The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_2_14, 0_4_9, 0_5_12, and 0_5_13 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

- Five SAS disks (shown here as 1_2_0 to 1_2_4) on the RAID 5 storage pool 6 are used to store the infrastructure virtual machines. A 1 TB LUN is provisioned from the pool and presented to the Hyper-V servers as a CSV.
- Forty-eight NL-SAS disks (shown here as 1_2_5 to 1_2_13, 0_3_0 to 0_3_14, 1_3_0 to 1_3_14, and 0_4_0 to 0_4_8) on the RAID 6 storage pool 4 are used to store user data and roaming profiles. Ten LUNs of 2 TB each are provisioned from the pool to provide the storage required to create two CIFS file systems.

If multiple drive types have been implemented, FAST VP can be enabled to automatically tier data to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is done as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop storage, but it can provide performance improvements when implemented for user data and roaming profiles.

- Thirty-two SAS disks (shown here as 0_4_10 to 0_4_14, 1_4_0 to 1_4_14, and 0_5_0 to 0_5_11) in the RAID 10 storage pool 5 are used to store the Personal vdisks. FAST Cache is enabled for the entire pool. For NAS, ten LUNs of 800 GB each are provisioned from the pool to provide the storage required to create eight CIFS file systems. The file systems are presented to the Hyper-V servers as eight SMB shares. For FC, eight LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as eight CSVs.
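Across all three scale points, the optional pools are sized to keep the per-user allowances roughly constant. The sketch below makes the pattern explicit; it is arithmetic on the LUN figures quoted in the three optional layouts, not an EMC sizing tool:

```python
# Per-user capacity implied by the optional layouts at each scale point.
# LUN counts and sizes come from the 500-, 1,000-, and 2,000-desktop sections.

layouts = {  # users -> (profile/home GB total, personal vdisk GB total)
    500:  (10 * 500,  10 * 200),    # 10 x 500 GB and 10 x 200 GB LUNs
    1000: (10 * 1024, 10 * 400),    # 10 x 1 TB  and 10 x 400 GB LUNs
    2000: (10 * 2048, 10 * 800),    # 10 x 2 TB  and 10 x 800 GB LUNs
}

for users, (user_data_gb, pvd_gb) in layouts.items():
    print(f"{users} users: ~{user_data_gb / users:.1f} GB profile/home data, "
          f"{pvd_gb / users:.1f} GB Personal vdisk per user")
```

Each user receives roughly 10 GB of profile and home-directory space and 4 GB of Personal vdisk space regardless of scale; if your per-user data requirements differ, resize storage pools 4 and 5 proportionally.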

High availability and failover

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it can survive most single-unit failures with minimal or no impact to business operations.

Virtualization layer

As indicated earlier, we recommend configuring high availability in the virtualization layer and allowing the hypervisor to automatically restart virtual machines that fail. Figure 29 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 29. High availability at the virtualization layer

Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.

Compute layer

While this solution offers flexibility in the type of servers used in the compute layer, we recommend enterprise-class servers designed for the data center. Connect these servers, with redundant power supplies, to separate Power Distribution Units (PDUs) in accordance with your server vendor's best practices.

Figure 30. Redundant power supplies

Configuring high availability in the virtualization layer is also recommended. This means that the compute layer must be configured with enough resources to meet the needs of the environment even when a server fails, as demonstrated in Figure 30.

Network layer

The advanced networking features of the VNX family provide protection against network connection failures at the array. Each Hyper-V host has multiple connections to the user and storage Ethernet networks to guard against link failures. Spread these connections across multiple Ethernet switches to guard against component failure in the network, as shown in Figure 31.

Figure 31. Network layer high availability

By designing the network with no single points of failure, you can ensure that the compute layer can access storage and communicate with users even if a component fails.
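The N+1 sizing rule described above for the compute layer can be expressed concretely. The sketch below is illustrative: the desktops-per-server density is a placeholder assumption, not a validated figure from this guide, so substitute the density you derive from your own server configuration:

```python
# N+1 compute sizing: provision enough hosts that the full desktop load still
# fits with one server failed. The desktops-per-server density is a
# placeholder assumption, not a validated figure from this guide.

import math

def servers_required(desktops, desktops_per_server):
    """Hosts needed to carry the load, plus one spare to absorb a host failure."""
    return math.ceil(desktops / desktops_per_server) + 1

print(servers_required(1000, 125))  # e.g. 8 hosts for the load + 1 spare = 9
```

The same reasoning applies at every scale point: size the cluster so the surviving hosts, not the full cluster, meet the environment's resource needs.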
