EMC VSPEX END-USER COMPUTING


VSPEX Proven Infrastructure

EMC VSPEX END-USER COMPUTING
Citrix XenDesktop 5.6 and VMware vSphere 5.1 for up to 2,000 Virtual Desktops
Enabled by EMC VNX and EMC Next-Generation Backup

EMC VSPEX

Abstract

This document describes the EMC VSPEX End-User Computing solution with Citrix XenDesktop and EMC VNX for up to 2,000 virtual desktops.

January 2013

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published January 2013.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website.

Part Number H


Contents

Chapter 1  Executive Summary  15
  Introduction
  Target audience
  Document purpose
  Business needs

Chapter 2  Solution Overview  19
  Solution overview: Desktop broker; Virtualization; Storage; Network; Compute

Chapter 3  Solution Technology Overview  23
  Solution technology
  Summary of key components
  Desktop broker: Overview; Citrix XenDesktop 5.6; Machine Creation Services; Citrix Personal vDisk; Citrix Profile Manager
  Virtualization: Overview; VMware vSphere; VMware vCenter; VMware vSphere High Availability; EMC Virtual Storage Integrator for VMware; VNX VMware vStorage API for Array Integration support
  Compute
  Network
  Storage: Overview; EMC VNX series
  Backup and recovery: Overview; EMC Avamar
  Security: RSA SecurID two-factor authentication; SecurID authentication in the VSPEX End-User Computing for Citrix XenDesktop environment; Required components; Compute, memory and storage resources

Chapter 4  Solution Architectural Overview  41
  Solution overview
  Solution architecture: Architecture for up to 500 virtual desktops; Architecture for up to 1,000 virtual desktops; Architecture for up to 2,000 virtual desktops; Key components; Hardware resources; Software resources; Sizing for validated configuration
  Server configuration guidelines: Overview; VMware vSphere memory virtualization for VSPEX; Memory configuration guidelines
  Network configuration guidelines: Overview; VLAN; Enable jumbo frames; Link aggregation
  Storage configuration guidelines: Overview; VMware vSphere storage virtualization for VSPEX; Storage layout for 500 virtual desktops; Storage layout for 1,000 virtual desktops; Storage layout for 2,000 virtual desktops
  High availability and failover: Introduction; Virtualization layer; Compute layer; Network layer; Storage layer
  Validation test profile: Profile characteristics
  Backup environment configuration guidelines: Overview; Backup characteristics; Backup layout
  Sizing guidelines
  Reference workload: Defining the reference workload
  Applying the reference workload
  Implementing the reference architectures: Resource types; CPU resources; Memory resources; Network resources; Storage resources; Implementation summary
  Quick assessment: CPU requirements; Memory requirements; Storage performance requirements; Storage capacity requirements; Determining equivalent reference virtual desktops; Fine-tuning hardware resources

Chapter 5  VSPEX Configuration Guidelines  83
  Overview
  Pre-deployment tasks: Overview; Deployment prerequisites; Customer configuration data
  Prepare switches, connect network, and configure switches: Overview; Prepare network switches; Configure infrastructure network; Configure VLANs; Complete network cabling
  Prepare and configure storage array: VNX configuration; Provision core data storage; Provision optional storage for user data; Provision optional storage for infrastructure virtual machines
  Install and configure VMware vSphere hosts: Overview; Install ESXi; Configure ESXi networking; Jumbo frames; Connect VMware datastores; Plan virtual machine memory allocations
  Install and configure SQL Server database: Overview; Create a virtual machine for Microsoft SQL Server; Install Microsoft Windows on the virtual machine; Install SQL Server; Configure database for VMware vCenter; Configure database for VMware Update Manager
  Install and configure VMware vCenter Server: Overview; Create the vCenter host virtual machine; Install vCenter guest operating system; Create vCenter ODBC connections; Install vCenter Server; Apply vSphere license keys; Deploy the VNX VAAI for NFS plug-in (NFS variant); Install the EMC VSI Unified Storage Management feature
  Install and configure XenDesktop controller: Overview; Install server-side components of XenDesktop; Configure a site; Add a second controller; Install Desktop Studio; Prepare master virtual machine; Provision virtual desktops
  Summary

Chapter 6  Validating the Solution  113
  Overview
  Post-install checklist
  Deploy and test a single virtual desktop
  Verify the redundancy of the solution components

Appendix A  Bills of Materials  117
  Bill of materials for 500 virtual desktops
  Bill of materials for 1,000 virtual desktops
  Bill of materials for 2,000 virtual desktops

Appendix B  Customer Configuration Data Sheet  123
  Customer configuration data sheets

Appendix C  References  127
  References: EMC documentation; Other documentation

Appendix D  About VSPEX  131
  About VSPEX

Figures

Figure 1. Solution components
Figure 2. Compute layer flexibility
Figure 3. Example of highly-available network design
Figure 4. Authentication control flow for XenDesktop access requests originating on an external network
Figure 5. Authentication control flow for XenDesktop requests originating on local network
Figure 6. Logical architecture: VSPEX End-User Computing for Citrix XenDesktop with RSA
Figure 7. Logical architecture for 500 virtual desktops NFS variant
Figure 8. Logical architecture for 500 virtual desktops FC variant
Figure 9. Logical architecture for 1,000 virtual desktops NFS variant
Figure 10. Logical architecture for 1,000 virtual desktops FC variant
Figure 11. Logical architecture for 2,000 virtual desktops NFS variant
Figure 12. Logical architecture for 2,000 virtual desktops FC variant
Figure 13. Hypervisor memory consumption
Figure 14. Required networks
Figure 15. VMware virtual disk types
Figure 16. Core storage layout for 500 virtual desktops
Figure 17. Optional storage layout for 500 virtual desktops
Figure 18. Core storage layout for 1,000 virtual desktops
Figure 19. Optional storage layout for 1,000 virtual desktops
Figure 20. Core storage layout for 2,000 virtual desktops
Figure 21. Optional storage layout for 2,000 virtual desktops
Figure 22. High availability at the virtualization layer
Figure 23. Redundant power supplies
Figure 24. Network layer high availability
Figure 25. VNX series high availability
Figure 26. Sample Ethernet network architecture for 500 and 1,000 virtual desktops
Figure 27. Sample Ethernet network architecture for 2,000 virtual desktops
Figure 28. Set Direct Writes Enabled checkbox
Figure 29. View all Data Mover parameters
Figure 30. Set nthread parameter
Figure 31. Storage System Properties dialog box
Figure 32. Create FAST Cache dialog box
Figure 33. Advanced tab in the Create Storage Pool dialog box
Figure 34. Advanced tab in the Storage Pool Properties dialog box
Figure 35. Storage Pool Properties window
Figure 36. Manage Auto-Tiering window
Figure 37. LUN Properties window
Figure 38. Virtual machine memory settings

Tables

Table 1. VNX customer benefits
Table 2. Minimum hardware resources to support SecurID
Table 3. Solution hardware
Table 4. Solution software
Table 5. Configurations that support this solution
Table 6. Server hardware
Table 7. Storage hardware
Table 8. Validated environment profile
Table 9. Backup profile characteristics
Table 10. Virtual desktop characteristics
Table 11. Blank worksheet row
Table 12. Reference virtual desktop resources
Table 13. Example worksheet row
Table 14. Example applications
Table 15. Server resource component totals
Table 16. Blank customer worksheet
Table 17. Deployment process overview
Table 18. Tasks for pre-deployment
Table 19. Deployment prerequisites checklist
Table 20. Tasks for switch and network configuration
Table 21. Tasks for storage configuration
Table 22. Tasks for server installation
Table 23. Tasks for SQL Server database setup
Table 24. Tasks for vCenter configuration
Table 25. Tasks for XenDesktop controller setup
Table 26. Tasks for testing the installation
Table 27. List of components used in the VSPEX solution for 500 virtual desktops
Table 28. List of components used in the VSPEX solution for 1,000 virtual desktops
Table 29. List of components used in the VSPEX solution for 2,000 virtual desktops
Table 30. Common server information
Table 31. ESXi server information
Table 32. Array information
Table 33. Network infrastructure information
Table 34. VLAN information
Table 35. Service accounts


Chapter 1    Executive Summary

This chapter presents the following topics:

Introduction
Target audience
Document purpose
Business needs

Introduction

VSPEX validated and modular architectures are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, compute, and networking layers. VSPEX eliminates server virtualization planning and configuration burdens. When you are embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.

This document is intended to be a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; customers are free to select the server and networking hardware of their choice that meets or exceeds the stated minimums.

Target audience

The reader of this document is expected to have the necessary training and background to install and configure an end-user computing solution based on Citrix XenDesktop with VMware vSphere as the hypervisor, EMC VNX series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and EMC recommends that the reader be familiar with these documents. Readers are also expected to be familiar with the infrastructure and database security policies of the customer installation.

Individuals focused on selling and sizing a VSPEX End-User Computing solution for Citrix XenDesktop should pay particular attention to the first four chapters of this document. Implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This document presents an initial introduction to the VSPEX End-User Computing architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy the system.

The VSPEX End-User Computing architecture provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This solution runs on the VMware vSphere virtualization layer, backed by the highly available EMC VNX storage family and the Citrix XenDesktop desktop broker. The compute and network components, while vendor-definable, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual machine environment.

The 500, 1,000, and 2,000 virtual desktop environments discussed are based on a defined desktop workload. While not every virtual desktop has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective when deployed. A smaller 250 virtual desktop environment based on the VNXe3300 is described in EMC VSPEX End-User Computing Citrix XenDesktop 5.6 with VMware vSphere 5.1 for up to 250 Virtual Desktops.

An end-user computing or virtual desktop architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. Validation tests are provided to ensure that your system is up and running properly after the last component has been installed. Following the guidance provided in this document ensures an efficient and painless desktop deployment.

Business needs

VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, server, and networking layers. VSPEX solutions accelerate your IT transformation by enabling faster deployments, choice, efficiency, and lower risk.

Business applications are moving into the consolidated compute, network, and storage environment. EMC VSPEX End-User Computing using Citrix reduces the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design and implementation options. Administration is unified, while process separation can be adequately controlled and monitored.

The following are the business needs addressed by the VSPEX End-User Computing solution for Citrix architecture:

Provides an end-to-end virtualization solution to utilize the capabilities of the unified infrastructure components
Provides a solution for efficiently virtualizing 500, 1,000, or 2,000 virtual desktops for varied customer use cases
Provides a reliable, flexible, and scalable reference design


Chapter 2    Solution Overview

This chapter presents the following topic:

Solution overview

Solution overview

The EMC VSPEX End-User Computing solution for Citrix XenDesktop on VMware vSphere 5.1 provides a complete system architecture capable of supporting up to 2,000 virtual desktops with a redundant server/network topology and highly available storage. The core components that make up this particular solution are the desktop broker, virtualization, storage, network, and compute.

Desktop broker

XenDesktop is the virtual desktop solution from Citrix that allows virtual desktops to run on the VMware vSphere virtualization environment. It centralizes desktop management and provides increased control for IT organizations. XenDesktop allows end users to connect to their desktops from multiple devices across a network connection.

Virtualization

VMware vSphere is the leading virtualization platform in the industry, providing flexibility and cost savings to end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core VMware vSphere components are the VMware vSphere hypervisor and the VMware vCenter control server for system management.

The VMware hypervisor runs on a dedicated server and allows multiple operating systems to execute on the system simultaneously as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration, and the clusters can be managed as a larger resource pool through vCenter, allowing dynamic allocation of CPU, memory, and storage across the cluster. Features like vMotion, which allows a virtual machine to move among different servers with no disruption to the operating system, and Distributed Resource Scheduler (DRS), which performs vMotion automatically to balance load, make vSphere a solid business choice.

With the release of vSphere 5.1, a VMware virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual RAM.

Storage

The EMC VNX storage family is the number one shared storage platform in the industry. Its ability to provide both file and block access with a broad feature set makes it an ideal choice for any end-user computing implementation.

The VNX storage includes the following components, which are sized for the stated architecture workloads:

Host adapter ports - Provide host connectivity via fabric into the array
Data Movers - Front-end components that provide file services to hosts (optional if providing CIFS/SMB or NFS services)
Storage Processors - Compute components of the storage array, responsible for all aspects of data moving into, out of, and between arrays
Disk Array Enclosures - Contain the actual disk drives that record the host/application data

The End-User Computing solutions for Citrix XenDesktop discussed in this document are based on the VNX5300 (500, 1,000 desktops) and VNX5500 (2,000 desktops) storage arrays. The VNX5300 can support a maximum of 125 drives, while the VNX5500 can host up to 250 drives.

The EMC VNX series supports a wide range of business-class features ideal for the end-user computing environment, including:

Fully Automated Storage Tiering for Virtual Pools (FAST VP)
FAST Cache
Data deduplication
Thin provisioning
Replication
Snapshots/checkpoints
File-level retention
Quota management

Network

VSPEX allows the flexibility of designing and implementing the vendor's choice of network components. The infrastructure must conform to the following attributes:

Redundant network links for the hosts, switches, and storage
Support for link aggregation
Traffic isolation based on industry-accepted best practices

Compute

VSPEX allows the flexibility of designing and implementing the vendor's choice of server components. The infrastructure must conform to the following attributes:

Sufficient RAM, CPU cores, and memory to support the required number and types of virtual machines
Sufficient network connections to enable redundant connectivity to the system switches
Excess capacity to support failover after a server failure in the environment


Chapter 3    Solution Technology Overview

This chapter presents the following topics:

Solution technology
Summary of key components
Desktop broker
Virtualization
Compute
Network
Storage
Backup and recovery
Security

Solution technology

This VSPEX solution uses EMC VNX5300 (for up to 1,000 virtual desktops) or VNX5500 (for up to 2,000 virtual desktops) storage arrays and VMware vSphere 5.1 to provide the storage and compute resources for a Citrix XenDesktop 5.6 environment of Windows 7 virtual desktops provisioned by Machine Creation Services (MCS). Figure 1 shows the components of the solution.

Figure 1. Solution components

In particular, planning and designing the storage infrastructure for the Citrix XenDesktop environment is a critical step, because the shared storage must be able to absorb the large bursts of input/output (I/O) that occur over the course of a workday. These bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users may adapt to slow performance, but unpredictable performance causes frustration and reduces efficiency.

To provide predictable performance for end-user computing, the storage system must be able to handle the peak I/O load from the clients while keeping response time to a minimum. Designing for this workload typically involves deploying many disks to handle brief periods of extreme I/O pressure, which is expensive to implement. This solution uses EMC VNX FAST Cache to reduce the number of disks required.

EMC next-generation backup enables protection of user data and end-user recoverability. This is accomplished by leveraging EMC Avamar and its desktop client within the desktop image.

Summary of key components

This section describes the key components of this solution.

Desktop broker

The desktop virtualization broker manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software is critical to enable on-demand creation of desktop images, to allow maintenance to the image without affecting user productivity, and to prevent the environment from growing in an unconstrained way.

Virtualization

The virtualization layer allows the physical implementation of resources to be decoupled from the applications that use them. In other words, the application's view of the resources available to it is no longer directly tied to the hardware. This enables many key features in the end-user computing concept.

Compute

The compute layer provides memory and processing resources for the virtualization layer software as well as for the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resources required, but allows the customer to implement the requirements using any compute hardware that meets these requirements.

Network

The network layer connects the users of the environment to the resources they need, and connects the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but allows the customer to implement the requirements using any network hardware that meets these requirements.

Storage

The storage layer is a critical resource for the implementation of the end-user computing environment. Because of the way desktops are used, the storage layer must be able to absorb large bursts of transient activity without an undue impact on the user experience. This solution uses EMC VNX FAST Cache to handle this workload efficiently.

Backup and recovery

The optional backup and recovery component of the solution provides data protection in the event that the data in the primary system is deleted, damaged, or otherwise becomes unusable.

Security

Security components from RSA provide customers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system.

Solution architecture provides details on all the components that make up the reference architecture.

Desktop broker

Overview

Desktop virtualization encapsulates and delivers the user desktop to a remote client device, which can be a thin client, zero client, smartphone, or tablet. It allows subscribers from different locations to access virtual desktops hosted on centralized computing resources at remote data centers. In this solution, Citrix XenDesktop is used to provision, manage, broker, and monitor the desktop virtualization environment.

Citrix XenDesktop 5.6

Citrix XenDesktop transforms Windows desktops into an on-demand service for any user, any device, anywhere. XenDesktop quickly and securely delivers any type of virtual desktop, or any type of Windows, web, or SaaS application, to all the latest PCs, Macs, tablets, smartphones, laptops, and thin clients, and does so with a high-definition HDX user experience.

Citrix FlexCast delivery technology enables IT to optimize the performance, security, and cost of virtual desktops for any type of user, including task workers, mobile workers, power users, and contractors. XenDesktop helps IT rapidly adapt to business initiatives by simplifying desktop delivery and enabling user self-service. The open, scalable, and proven architecture simplifies management, support, and integration.

Machine Creation Services

Machine Creation Services (MCS) is a provisioning mechanism introduced in XenDesktop 5.0. It is integrated with the XenDesktop management interface, Desktop Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management.

MCS allows several types of machines, including dedicated and pooled machines, to be managed within a catalog in Desktop Studio. Desktop customization is persistent for dedicated machines, while a pooled machine is required if a non-persistent desktop is appropriate.

In this solution, persistent virtual desktops running Windows 7 are provisioned using MCS. Desktops provisioned using MCS share a common base image within a catalog. Because of this, the base image typically is accessed with sufficient frequency to naturally leverage EMC VNX FAST Cache, where frequently accessed data is promoted to flash drives to provide optimal I/O response time with fewer physical disks.

Citrix Personal vDisk

The Citrix Personal vDisk feature was introduced in Citrix XenDesktop 5.6. With Personal vDisk, users can preserve customization settings and user-installed applications in a pooled desktop.

This capability is accomplished by redirecting the changes from the user's pooled virtual machine to a separate disk called the Personal vDisk. During runtime, the content of the Personal vDisk is blended with the content from the base virtual machine to provide a unified experience for the end user. The Personal vDisk data is preserved during reboot and refresh operations.

Citrix Profile Manager 4.1

Citrix Profile Manager 4.1 preserves user profiles and dynamically synchronizes them with a remote profile repository. Citrix Profile Manager ensures that personal settings are applied to desktops and applications regardless of the user's login location or client device. The combination of Citrix Profile Manager and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization.

With Citrix Profile Manager, a user's remote profile is downloaded dynamically when the user logs in to a Citrix XenDesktop. Profile Manager downloads user profile information only when the user needs it.

Virtualization

Overview

The virtualization layer is a key component of any end-user computing solution. It allows the application resource requirements to be decoupled from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and even allows the physical capability of the system to change without affecting the hosted applications. VMware vSphere 5.1 is used to build the virtualization layer for this solution.

VMware vSphere 5.1

VMware vSphere 5.1 transforms a computer's physical resources by virtualizing the CPU, memory, storage, and network. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications just like physical computers.

High-availability features of VMware vSphere 5.1 such as vMotion and Storage vMotion enable seamless migration of virtual machines and stored files from one vSphere server to another with minimal or no performance impact. Coupled with vSphere Distributed Resource Scheduling (DRS) and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.

VMware vCenter

VMware vCenter is a centralized management platform for the VMware virtual infrastructure. It provides administrators with a single interface, accessible from multiple devices, for all aspects of monitoring, managing, and maintaining the virtual infrastructure. VMware vCenter also manages some of the more advanced features of the VMware virtual infrastructure, such as VMware vSphere High Availability and Distributed Resource Scheduling (DRS), along with vMotion and Update Manager.

VMware vSphere High Availability

The VMware vSphere High Availability feature allows the virtualization layer to restart virtual machines in various failure conditions automatically.

Note: If the virtual machine operating system has an error, the virtual machine can be restarted automatically on the same hardware. If the physical hardware has an error, the impacted virtual machines can be restarted automatically on other servers in the cluster.

For VMware vSphere High Availability to restart virtual machines on different hardware, those servers must have resources available. There are specific recommendations in the Compute section to enable this functionality.

VMware vSphere High Availability allows you to configure policies to determine which machines are restarted automatically and under what conditions these operations should be attempted.

EMC Virtual Storage Integrator for VMware

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere client that provides a single interface for managing EMC storage within the vSphere environment. Features can be added and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience, which allows new features to be introduced rapidly in response to changing customer requirements.

The following VSI features were used during the validation testing:

Storage Viewer - Extends the vSphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware vSphere hosts and virtual machines. Storage Viewer presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.
Unified Storage Management - Simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision new Network File System (NFS) datastores, Virtual Machine File System (VMFS) datastores, and Raw Device Mapping (RDM) volumes seamlessly within the vSphere client.

Refer to the product guides for EMC VSI for VMware vSphere, available on the EMC Online Support website, for more information.

VNX VMware vStorage API for Array Integration support

Hardware acceleration with VMware vStorage API for Array Integration (VAAI) is a storage enhancement in vSphere 5.1 that enables vSphere to offload specific storage operations to compatible storage hardware such as the VNX series platforms. With storage hardware assistance, vSphere performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.
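The HA and DRS behaviors described above are cluster-level settings that are normally configured through the vSphere Client. Purely as a hedged illustration, and not as part of the validated VSPEX procedure, a pyVmomi sketch along the following lines could enable both on an existing cluster; the vCenter address, credentials, and cluster name are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with values from your environment.
VCENTER = "vcenter.example.local"
USERNAME = "administrator@vsphere.local"
PASSWORD = "changeme"
CLUSTER_NAME = "VSPEX-EUC"

context = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USERNAME, pwd=PASSWORD, sslContext=context)
try:
    content = si.RetrieveContent()

    # Locate the cluster object by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == CLUSTER_NAME)
    view.Destroy()

    # Enable vSphere HA (with admission control) and fully automated DRS.
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(enabled=True,
                                            admissionControlEnabled=True),
        drsConfig=vim.cluster.DrsConfigInfo(enabled=True,
                                            defaultVmBehavior="fullyAutomated"))
    task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
    # In a real script, wait for the task to complete before continuing.
finally:
    Disconnect(si)
```

The specific admission control and automation policy values should come from your own design, as discussed above.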

Compute

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For these reasons, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a given number of servers with a specific set of requirements, VSPEX documents a number of processor cores and an amount of RAM that must be provided. This can be implemented with 2 servers or 20 and still be considered the same VSPEX solution.

For example, let us assume that the compute layer requirements for a given implementation are 25 processor cores and 200 GB of RAM. One customer might want to use white-box servers containing 16 processor cores and 64 GB of RAM, while a second customer might choose a higher-end server with 20 processor cores and 144 GB of RAM.
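The server-count arithmetic behind this example is easy to verify. The following Python fragment uses only the hypothetical figures quoted above; it is an illustration of the sizing logic, not a VSPEX sizing tool:

```python
import math

# Compute-layer requirement from the example above.
required_cores = 25
required_ram_gb = 200

# Two hypothetical server choices.
server_options = {
    "white-box server (16 cores, 64 GB)": {"cores": 16, "ram_gb": 64},
    "higher-end server (20 cores, 144 GB)": {"cores": 20, "ram_gb": 144},
}

for name, spec in server_options.items():
    # Enough servers are needed to satisfy both the core and the RAM minimums.
    by_cores = math.ceil(required_cores / spec["cores"])
    by_ram = math.ceil(required_ram_gb / spec["ram_gb"])
    servers_needed = max(by_cores, by_ram)
    print(f"{name}: {servers_needed} servers")
```

Either option would also need the additional failover server called out in the note that follows Figure 2.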

Figure 2. Compute layer flexibility

The first customer needs four of the servers, while the second customer needs two, as shown in Figure 2.

Note: To enable high availability at the compute layer, each customer will need one additional server with sufficient capacity to provide a failover platform in the event of a hardware outage.

The following best practices should be observed in the compute layer:

Use a number of identical or, at least, compatible servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

If you are implementing hypervisor-layer high availability, then the largest virtual machine you can create is constrained by the smallest physical server in the environment.
Implement the high-availability features available in the virtualization layer to ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This allows you to implement minimal-downtime upgrades and tolerate single-unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be quite flexible to meet your specific needs. The key constraint is the provision of sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. An example of this kind of highly available network topology is depicted in Figure 3.

Note: The example is for IP-based networks, but the same underlying principles of multiple connections and elimination of single points of failure also apply to Fibre Channel-based networks.

Figure 3. Example of highly-available network design

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

EMC unified storage platforms provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.
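On the vSphere hosts, these redundant links typically appear as multiple physical uplinks on the virtual switch that carries storage and desktop traffic. As a hedged sketch only (the switch name, vmnic names, and MTU below are assumptions rather than validated values from this solution), a standard vSwitch with two uplinks could be created through pyVmomi as follows:

```python
from pyVmomi import vim

def add_redundant_vswitch(host, name="vSwitch1", uplinks=("vmnic2", "vmnic3")):
    """Create a standard vSwitch with two physical uplinks on one ESXi host.

    `host` is a vim.HostSystem object already retrieved from vCenter, for
    example with a container view as in the earlier cluster sketch.
    """
    spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        mtu=9000,  # jumbo frames, as discussed in the network configuration guidelines
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=list(uplinks)))
    host.configManager.networkSystem.AddVirtualSwitch(vswitchName=name, spec=spec)
```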

Storage

Overview

The storage layer is also a key component of any cloud infrastructure solution, providing storage efficiency, management flexibility, and reduced total cost of ownership. This VSPEX solution uses the EMC VNX series to provide virtualization at the storage layer.

EMC VNX series

The EMC VNX family is optimized for virtual applications, delivering industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises.

The VNX series is powered by Intel Xeon processors, for intelligent storage that automatically and efficiently scales in performance while ensuring data integrity and security. Table 1 identifies the VNX customer benefits.

Table 1. VNX customer benefits

Features:
  Next-generation unified storage, optimized for virtualized applications
  Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies
  High availability, designed to deliver five 9s availability
  Automated tiering with FAST VP (Fully Automated Storage Tiering for Virtual Pools) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
  Simplified management with EMC Unisphere for a single management interface for all NAS, SAN, and replication needs
  Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for Flash

Software suites available:
  FAST Suite - Automatically optimizes for the highest system performance and the lowest storage cost simultaneously
  Local Protection Suite - Practices safe data protection and repurposing
  Remote Protection Suite - Protects data against localized failures, outages, and disasters
  Application Protection Suite - Automates application copies and proves compliance
  Security and Compliance Suite - Keeps data safe from changes, deletions, and malicious activity

Software packs available:
  Total Efficiency Pack - Includes all five of the preceding software suites
  Total Protection Pack - Includes the Local, Remote, and Application Protection Suites

VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables flash drives to be used as an expanded cache layer for the array. FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to the FAST Cache, and subsequent accesses to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives, which dramatically improves the response times for the active data and reduces the data hot spots that can occur within the LUN.

FAST Cache enables XenDesktop to deliver consistent performance at flash-drive speeds by absorbing read-heavy activities such as boot storms and antivirus scans, and write-heavy workloads such as operating system patches and application updates. This extended read/write cache is an ideal caching mechanism for MCS in XenDesktop because the base desktop image and other active user data are accessed so frequently that the data is serviced directly from the flash drives without having to access the slower drives at the lower storage tier.

VNX FAST VP (optional)

VNX FAST VP, a part of the VNX FAST Suite, enables you to automatically tier data across multiple types of drives to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation.

Backup and recovery

Overview

Backup and recovery is another important component in this VSPEX solution, providing data protection by backing up data files or volumes on a defined schedule and restoring data lost by accident or disaster. In this VSPEX solution, EMC Avamar software provides backup and recovery services for up to 2,000 virtual desktops.

EMC Avamar

Avamar software provides rapid backup and restoration capabilities in the virtualized environment. Performance is greatly enhanced by the Avamar software's seamless integration of deduplication technology, which results in vastly less data traversing the network and greatly reduced amounts of data being backed up and stored, resulting in storage and bandwidth operational savings.

Two of the most common recovery requests made to backup administrators are the following:

File-level recovery - Object-level recoveries account for the vast majority of user support requests. Common actions requiring file-level recovery are individual users deleting files, applications requiring recoveries, and batch process-related erasures.
System recovery - Although complete system recovery requests occur less frequently than file-level recovery requests, this bare-metal restore capability is vital to the enterprise. Common root causes for full system recovery requests include viral infestation, registry corruption, and unidentifiable unrecoverable issues.

In both of these scenarios, Avamar functionality in conjunction with VMware implementations adds new capabilities for backup and recovery. Key capabilities added in VMware, such as vStorage API integration and Changed Block Tracking (CBT), enable the Avamar software to protect the virtual environment more efficiently. Leveraging CBT for both backup and recovery with virtual proxy server pools minimizes management needs. Coupled with Data Domain as the storage platform for image data, this solution enables the most efficient integration with two of the industry-leading next-generation backup appliances.

Security

RSA SecurID two-factor authentication

RSA SecurID two-factor authentication can provide enhanced security for the VSPEX end-user computing environment by requiring the user to authenticate with two pieces of information, collectively called a passphrase, consisting of:

Something the user knows: a PIN, which is used like any other PIN or password
Something the user has: a token code, provided by a physical or software token, which changes every 60 seconds

The typical use case deploys SecurID to authenticate users accessing protected resources from an external or public network. Access requests originating from within a secure network are authenticated by traditional mechanisms involving Active Directory or LDAP. SecurID functionality is managed through RSA Authentication Manager, which also controls administrative functions such as token assignment to users, user management, and high availability.

The Citrix NetScaler network appliance and Citrix Storefront enable streamlined integration of SecurID into the XenDesktop environment (as well as XenApp and other Citrix virtualization product environments).

SecurID authentication in the VSPEX End-User Computing for Citrix XenDesktop environment

For external access requests into the VSPEX End-User Computing with Citrix XenDesktop environment, the user is challenged for a user ID, SecurID passphrase, and Active Directory password in a single dialog. Upon successful authentication, the user is logged in directly to his or her virtual desktop. Internal request authentication is carried out against Active Directory only.

Figure 4 describes the authentication flow for an external access request to the XenDesktop environment.

Figure 4. Authentication control flow for XenDesktop access requests originating on an external network

Note: Authentication policies set on NetScaler's Access Gateway Enterprise Edition (AGEE) control authentication against SecurID and Active Directory.

Figure 5 depicts the internal access authentication flow. Active Directory authentication is initiated from within Citrix Storefront.

Figure 5. Authentication control flow for XenDesktop requests originating on local network

Note: Users are authenticated against Active Directory only.

Required components

Enablement of SecurID for VSPEX solutions is described in Securing VSPEX Citrix XenDesktop 5.6 End-User Computing Solutions with RSA Design Guide. The following components are required:

RSA SecurID Authentication Manager (version 7.1 SP4) - Used to configure and manage the SecurID environment and assign tokens to users. Authentication Manager 7.1 SP4 is available as an appliance or as an installable application on a Windows Server 2008 R2 instance. Future versions of Authentication Manager will be available as a physical or virtual appliance only.
SecurID tokens for all users - SecurID requires something the user knows (a PIN) combined with a constantly changing code from a token the user possesses. SecurID tokens may be physical, displaying at 60-second intervals a new code that the user must then enter with a PIN, or software-based, wherein the user supplies a PIN and the token code is supplied programmatically. Hardware and software tokens are registered with Authentication Manager through token records supplied on a CD or other media.
Citrix NetScaler network appliance (version 10 or higher) - NetScaler's Access Gateway functionality manages RSA SecurID (primary) and Active Directory (secondary) authentication of access requests originating on public or external networks. NetScaler also provides load-balancer capability supporting high availability of Authentication Manager and Citrix Storefront servers.
Citrix Storefront (version 1.2 or higher) - Storefront, also known as CloudGateway Express, provides authentication and other services and presents users' desktops to browser-based or mobile Citrix clients.
Citrix Receiver - Receiver provides an interface through which the user interacts with the virtual desktop or other Citrix virtual environments such as XenApp or XenServer. In the context of this solution, the user client is considered a generic user endpoint, so versions of the Receiver client, and options and optimizations for them, are not addressed.

Compute, memory and storage resources

Figure 6 depicts the VSPEX End-User Computing for Citrix XenDesktop environment with added infrastructure to support SecurID. All necessary components can run in a redundant, high-availability configuration on two or more VMware ESXi hosts with a minimum of 12 CPU cores (16 recommended) and 16 GB of RAM. Table 2 on page 39 summarizes these requirements.

Figure 6. Logical architecture: VSPEX End-User Computing for Citrix XenDesktop with RSA

Table 2. Minimum hardware resources to support SecurID
(columns: CPU (cores), Memory (GB), Storage (GB), SQL database*, Reference)

RSA Authentication Manager
  CPU (cores): 2
  Memory (GB): 8**
  Storage (GB): 60
  SQL database*: n/a
  Reference: RSA Authentication Manager 7.1 Performance and Scalability Guide

Citrix NetScaler VPX
  Reference: Citrix NetScaler VPX Getting Started Guide

Citrix Storefront
  SQL database*: MB per 100 users

* It is expected that this capacity can be drawn from pre-existing SQL Server infrastructure.
** RSA recommends an 8 GB minimum for VMware-based deployments. A 4 GB or even 2 GB configuration is acceptable on standalone servers.


Chapter 4    Solution Architectural Overview

This chapter presents the following topics:

Solution overview
Solution architecture
Server configuration guidelines
Network configuration guidelines
Storage configuration guidelines
High availability and failover
Validation test profile
Backup environment configuration guidelines
Sizing guidelines
Reference workload
Applying the reference workload
Implementing the reference architectures
Quick assessment

Solution overview

VSPEX Proven Infrastructure solutions are built with proven best-of-breed technologies to create a complete virtualization solution that enables you to make an informed decision when choosing and sizing the hypervisor, compute, and networking layers. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC. VSPEX accelerates your IT transformation to cloud-based computing by enabling faster deployment, more choice, higher efficiency, and lower risk.

This section is intended to be a comprehensive guide to the major aspects of this solution. Server capacity is specified in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meets or exceeds the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your End-User Computing deployment.

Solution architecture

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual desktops that have been validated by EMC. In practice, each virtual desktop type has its own set of requirements that rarely fit a predefined idea of what a virtual desktop should be. In any discussion about end-user computing, a reference workload should first be defined. Not all servers perform the same tasks, and building a reference that takes into account every possible combination of workload characteristics is impractical.

The VSPEX End-User Computing solution with EMC VNX is validated at three different points of scale. These defined configurations form the basis of creating a custom solution. These points of scale are defined in terms of the reference workload.

Note: VSPEX uses the concept of a Reference Workload to describe and define a virtual machine. Therefore, one physical or virtual desktop in an existing environment may not be equal to one virtual desktop in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. Applying the reference workload provides a detailed description.

Architecture for up to 500 virtual desktops

The architecture diagrams in this section show the layout of major components comprising the solutions. Two storage variants, NFS and FC, are shown in the following diagrams.

Figure 7 depicts the logical architecture of the NFS variant for 500 virtual desktops, wherein 10 GbE carries storage traffic for servers hosting virtual desktops and 1 GbE carries all other traffic.

Figure 7. Logical architecture for 500 virtual desktops NFS variant

Note: The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Figure 8 depicts the logical architecture of the FC variant for 500 virtual desktops, wherein an FC SAN carries storage traffic and 1 GbE carries management and application traffic.

Figure 8. Logical architecture for 500 virtual desktops FC variant

Note: The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Architecture for up to 1,000 virtual desktops

The architecture diagrams in this section show the layout of major components comprising the solutions. Two storage variants, NFS and FC, are shown in the following diagrams.

Figure 9 depicts the logical architecture of the NFS variant for 1,000 virtual desktops, wherein 10 GbE carries storage traffic for servers hosting virtual desktops and 1 GbE carries all other traffic.

Figure 9. Logical architecture for 1,000 virtual desktops NFS variant

Note: The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Figure 10 depicts the logical architecture of the FC variant for 1,000 virtual desktops, wherein an FC SAN carries storage traffic and 1 GbE carries management and application traffic.

Figure 10. Logical architecture for 1,000 virtual desktops FC variant

Note: The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Architecture for up to 2,000 virtual desktops

The architecture diagrams in this section show the layout of major components comprising the solutions. Two storage variants, NFS and FC, are shown in the following diagrams.

Figure 11 depicts the logical architecture of the NFS variant for 2,000 virtual desktops, wherein 10 GbE carries storage traffic for servers hosting virtual desktops and 1 GbE carries all other traffic.

Figure 11. Logical architecture for 2,000 virtual desktops NFS variant

Note: The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Figure 12 depicts the logical architecture of the FC variant for 2,000 virtual desktops, wherein an FC SAN carries storage traffic and 1 GbE carries management and application traffic.

Figure 12. Logical architecture for 2,000 virtual desktops FC variant

Note: The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Key components

Citrix XenDesktop 5.6 controller - Two Citrix XenDesktop controllers are used to provide redundant virtual desktop delivery, authenticate users, manage the assembly of users' virtual desktop environments, and broker connections between users and their virtual desktops. In this reference architecture, the controllers are installed on Windows Server 2008 R2 and hosted as virtual machines on VMware vSphere 5.1 servers.

Virtual desktops - Persistent virtual desktops running Windows 7 are provisioned using MCS, a provisioning mechanism introduced in XenDesktop 5.0.

VMware vSphere 5.1 - VMware vSphere provides a common virtualization layer to host a server environment. Table 10 on page 74 lists the specifics of the validated environment. VMware vSphere 5.1 provides a highly available infrastructure through features such as the following:

vMotion - Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption
Storage vMotion - Provides live migration of virtual machine disk files within and across storage arrays, with no virtual machine downtime or service disruption

vSphere High Availability (HA) - Detects and provides rapid recovery for a failed virtual machine in a cluster
Distributed Resource Scheduler (DRS) - Provides load balancing of computing capacity in a cluster
Storage Distributed Resource Scheduler (SDRS) - Provides load balancing across multiple datastores, based on space use and I/O latency

VMware vCenter Server 5.1 - vCenter Server provides a scalable and extensible platform that forms the foundation for virtualization management for the VMware vSphere 5.1 cluster. All vSphere hosts and their virtual machines are managed through vCenter.

Active Directory server - Active Directory services are required for the various solution components to function properly. The Microsoft AD Directory Service running on a Windows Server 2012 server is used for this purpose.

DHCP server - The DHCP server centrally manages the IP address scheme for the virtual desktops. This service is hosted on the same virtual machine as the domain controller and DNS server. The Microsoft DHCP Service running on a Windows 2012 server is used for this purpose.

DNS server - DNS services are required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows 2012 server is used for this purpose.

VSI for VMware vSphere - EMC VSI for VMware vSphere is a plug-in to the vSphere client that provides storage management for EMC arrays directly from the client. VSI is highly customizable and helps provide a unified management interface.

IP/Storage Networks - All network traffic is carried by a standard Ethernet network with redundant cabling and switching. User and management traffic is carried over a shared network, while NFS storage traffic is carried over a private, non-routable subnet.

Mixed 10 GbE and 1 GbE IP network - The Ethernet network infrastructure provides 10 GbE connectivity between virtual desktops, vSphere clusters, and VNX storage. For the NFS variant, the 10 GbE infrastructure allows vSphere servers to access NFS datastores on the VNX with high bandwidth and low latency. It also allows desktop users to redirect their roaming profiles and home directories to the centrally maintained CIFS shares on the VNX. The desktop clients, XenDesktop management components, and Windows server infrastructure can reside on a 1 GbE network.

Fibre Channel network - For the FC variant, storage traffic between all vSphere hosts and the VNX storage system is carried over an FC network. All other traffic is carried over 1 GbE.

EMC VNX5300 array - A VNX5300 array provides storage by presenting NFS/FC datastores to vSphere hosts for up to 1,000 virtual desktops.

EMC VNX5500 array - A VNX5500 array provides storage by presenting NFS/FC datastores to vSphere hosts for up to 2,000 virtual desktops.

VNX family storage arrays include the following components:

50 Solution Architectural Overview Storage processors (SPs) support block data with UltraFlex I/O technology that supports Fibre Channel, iSCSI, and FCoE protocols. The SPs provide access for all external hosts and for the file side of the VNX array. The Disk-Processor Enclosure (DPE) is 3U in size and houses each storage processor as well as the first tray of disks. This form factor is used in the VNX5300 and VNX5500. X-Blades (or Data Movers) access data from the back end and provide host access using the same UltraFlex I/O technology that supports the NFS, CIFS, MPFS, and pNFS protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists. The Data Mover Enclosure (DME) is 2U in size and houses the Data Movers (X-Blades). The DME is similar in form to the Storage Processor Enclosure (SPE) and is used on all VNX models that support file. Standby power supplies are 1U in size and provide enough power to each storage processor to ensure that any data in flight is de-staged to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes are reconciled and persisted. Control Stations are 1U in size and provide management functions to the file-side components referred to as X-Blades. The Control Station is responsible for X-Blade failover. The Control Station may optionally be configured with a matching secondary Control Station to ensure redundancy on the VNX array. Disk-Array Enclosures (DAEs) house the drives used in the array. EMC Avamar Avamar software provides the platform for protection of virtual machines. The protection strategy applies to the persistent virtual desktops in this solution and provides both image-level protection and end-user recoveries. 50

51 Solution Architectural Overview Hardware resources Table 3 lists the hardware used in this solution. Table 3. Solution hardware
Servers for virtual desktops (Notes: total server capacity required to host virtual desktops):
- Memory: 2 GB RAM per desktop; 1 TB RAM across all servers for 500 virtual desktops; 2 TB RAM across all servers for 1,000 virtual desktops; 4 TB RAM across all servers for 2,000 virtual desktops
- CPU: 1 vCPU per desktop (8 desktops per core); 63 cores across all servers for 500 virtual desktops; 125 cores across all servers for 1,000 virtual desktops; 250 cores across all servers for 2,000 virtual desktops
- Network: six 1 GbE NICs per standalone server for 500 virtual desktops; three 10 GbE NICs per blade chassis or six 1 GbE NICs per standalone server for 1,000/2,000 desktops
Network infrastructure:
- Minimum switching capability for NFS variant (Notes: redundant LAN configuration): two physical switches; six 1 GbE ports per vsphere server or three 10 GbE ports per blade chassis; one 1 GbE port per Control Station for management; two 10 GbE ports per Data Mover for data
- Minimum switching capability for FC variant (Notes: redundant LAN/SAN configuration): two 1 GbE ports per vsphere server; four 4/8 Gb FC ports for the VNX back end; two 4/8 Gb FC ports per vsphere server
Storage:
- Common: two 10 GbE interfaces per Data Mover; two 8 Gb FC ports per storage processor (FC variant only) 51

52 Solution Architectural Overview Table 3. Solution hardware (continued)
Storage, continued (Notes: VNX shared storage for virtual desktops):
- For 500 virtual desktops: two Data Movers (active/standby, NFS variant only); fifteen 300 GB, 15 k rpm 3.5-inch SAS disks; three 100 GB, 3.5-inch flash drives
- For 1,000 virtual desktops: two Data Movers (active/standby, NFS variant only); twenty-six 300 GB, 15 k rpm 3.5-inch SAS disks; three 100 GB, 3.5-inch flash drives
- For 2,000 virtual desktops: three Data Movers (2 active/1 standby, NFS variant only); forty-six 300 GB, 15 k rpm 3.5-inch SAS disks; five 100 GB, 3.5-inch flash drives
Storage, continued (Notes: optional for user data):
- For 500 virtual desktops: nine 2 TB, 7,200 rpm 3.5-inch NL-SAS disks
- For 1,000 virtual desktops: seventeen 2 TB, 7,200 rpm 3.5-inch NL-SAS disks
- For 2,000 virtual desktops: thirty-four 2 TB, 7,200 rpm 3.5-inch NL-SAS disks
Storage, continued (Notes: optional for infrastructure storage):
- For 500 virtual desktops: five 300 GB, 15 k rpm 3.5-inch SAS disks
- For 1,000 virtual desktops: five 300 GB, 15 k rpm 3.5-inch SAS disks
- For 2,000 virtual desktops: five 300 GB, 15 k rpm 3.5-inch SAS disks 52

53 Solution Architectural Overview Table 3. Solution hardware (continued)
Shared infrastructure: In most cases, a customer environment will already have infrastructure services such as Active Directory and DNS configured. The setup of these services is beyond the scope of this document. If this solution is being implemented with no existing infrastructure, a minimum number of additional servers is required: two physical servers; 20 GB RAM per server; four processor cores per server; two 1 GbE ports per server. Notes: Services can be migrated into VSPEX post-deployment, but they must exist before VSPEX can be deployed.
EMC next-generation backup: Avamar; one Gen4 utility node; one Gen4 3.9 TB spare node; three Gen4 3.9 TB storage nodes.
Servers for customer infrastructure: Minimum number required: two physical servers; 20 GB RAM per server; four processor cores per server; two 1 GbE ports per server. Notes: Servers and the roles they fulfill may already exist in the customer environment.
Software resources Table 4 lists the software used in this solution. Table 4. Solution software
VNX5300 (shared storage, file systems): VNX OE for file Release ; VNX OE for block Release 32 ( )
EMC VSI for VMware vsphere: Unified Storage Management: Version 5.3
EMC VSI for VMware vsphere: Storage Viewer: Version 5.3
EMC PowerPath Viewer (FC variant only): Version 1.0.SP2.b019
XenDesktop Desktop Virtualization, Citrix XenDesktop Controller: Version 5.6 Platinum Edition 53

54 Solution Architectural Overview Table 4. Solution software (continued)
Operating system for XenDesktop Controller: Windows Server 2008 R2 Standard Edition
Microsoft SQL Server: Version 2008 R2 Standard Edition
Next-generation backup: Avamar 6.1 SP1
VMware vsphere: vsphere Server 5.1; vcenter Server 5.1
Operating system for vcenter Server: Windows Server 2008 R2 Standard Edition
vstorage API for Array Integration Plugin (VAAI) (NFS variant only)
PowerPath Virtual Edition (FC variant only)
Virtual desktops (Note: software beyond the base operating system was used for solution validation and is not required):
Base operating system: Microsoft Windows 7 Enterprise (32-bit) SP1
Microsoft Office: Office Enterprise 2007 SP3
Internet Explorer
Adobe Reader 9.1
McAfee Virus Scan 8.7.0i Enterprise
Adobe Flash Player 11
Bullzip PDF Printer
FreeMind
Sizing for validated configuration When selecting servers for this solution, ensure that the processor core meets or exceeds the performance of the Intel Nehalem family at 2.66 GHz. As servers with greater processor speeds, performance, and higher core density become available, servers may be consolidated as long as the required total core and memory count is met and a sufficient number of servers are incorporated to support the necessary level of high availability. As with servers, you may also consolidate network interface card (NIC) speed and quantity as long as you maintain the overall bandwidth requirements for this solution and sufficient redundancy to support high availability. 54

55 Solution Architectural Overview Table 5 shows the configurations that support this solution, assuming that each server has two four-core sockets and 128 GB of RAM, with one 10 GbE connection per four blades plus one 10 GbE connection per blade chassis: Table 5. Configurations that support this solution
Number of servers: 8; number of virtual desktops: 500; total cores: 64; total RAM: 1 TB
Number of servers: 16; number of virtual desktops: 1,000; total cores: 128; total RAM: 2 TB
Number of servers: 32; number of virtual desktops: 2,000; total cores: 256; total RAM: 4 TB
As shown in Table 10 on page 74, a minimum of one core is required for every eight virtual desktops, along with a minimum of 2 GB of RAM for each desktop. The correct balance of memory and cores for the expected number of virtual desktops to be supported by a server must also be taken into account. For example, a server expected to support 24 virtual desktops requires a minimum of three cores but also a minimum of 48 GB of RAM. IP network switches used to implement this reference architecture must have a minimum non-blocking backplane capacity of 96 Gb/s (for 500 virtual desktops), 192 Gb/s (for 1,000 virtual desktops), or 320 Gb/s (for 2,000 virtual desktops) and support the following features: IEEE 802.3x Ethernet flow control 802.1q VLAN tagging Ethernet link aggregation using IEEE 802.1ax (802.3ad) Link Aggregation Control Protocol SNMP management capability Jumbo frames The quantity and type of switches chosen should support high availability; choosing a network vendor based on the availability of parts, service, and support contracts is also recommended. In addition to the above features, the network configuration should include the following: A minimum of two switches to support redundancy Redundant power supplies A minimum of 40 1 GbE ports (for 500 virtual desktops), two 1 GbE and fourteen 10 GbE ports (for 1,000 virtual desktops), or two 1 GbE and twenty-two 10 GbE ports (for 2,000 virtual desktops), distributed for high availability The appropriate uplink ports for customer connectivity Use of 10 GbE ports should align with those on the server and storage while keeping in mind the overall network requirements for this solution and a level of redundancy to support high availability. Additional server NICs and storage connections should also be considered based on customer or specific implementation requirements. 55
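For readers who want to script this sizing check, the following minimal Python sketch reproduces the arithmetic behind Table 5 using the per-desktop assumptions stated above (1 vCPU at an 8:1 vCPU-to-core ratio and 2 GB of RAM per desktop). The function name and per-server figures are illustrative only and are not part of the validated solution; substitute your own server model.

```python
import math

# Per-desktop assumptions from the reference workload (Table 10):
# 1 vCPU at an 8:1 vCPU-to-core ratio and 2 GB of RAM per desktop.
DESKTOPS_PER_CORE = 8
RAM_GB_PER_DESKTOP = 2

def servers_required(desktops, cores_per_server=8, ram_gb_per_server=128):
    """Return the server count needed to satisfy both the core and RAM totals."""
    cores_needed = math.ceil(desktops / DESKTOPS_PER_CORE)
    ram_needed_gb = desktops * RAM_GB_PER_DESKTOP
    by_cores = math.ceil(cores_needed / cores_per_server)
    by_ram = math.ceil(ram_needed_gb / ram_gb_per_server)
    return max(by_cores, by_ram)

if __name__ == "__main__":
    # Two four-core sockets (8 cores) and 128 GB of RAM per server, as in Table 5.
    for desktops in (500, 1000, 2000):
        n = servers_required(desktops)
        print(f"{desktops:>5} desktops: {n} servers "
              f"({n * 8} cores, {n * 128 // 1024} TB RAM)")
```

Run as written, the sketch yields 8, 16, and 32 servers for 500, 1,000, and 2,000 desktops respectively, matching the Table 5 configurations; changing the per-server core count or RAM lets you evaluate the consolidation options described in the preceding section.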

56 Solution Architectural Overview The management infrastructure (Active Directory, DNS, DHCP, and SQL Server) can be supported on two servers similar to those previously defined, but they require a minimum of only 20 GB of RAM instead of 128 GB. Storage configuration guidelines describes the disk storage layout. Server configuration guidelines Overview When you are designing and ordering the compute/server layer of the VSPEX solution, you should consider several factors that may alter the final purchase. From a virtualization perspective, if a system's workload is well understood, features like Memory Ballooning and Transparent Page Sharing can reduce the aggregate memory requirement. If the virtual desktop pool does not have a high level of peak or concurrent usage, the number of vCPUs may be reduced. Conversely, if the applications being deployed are highly computational in nature, the number of CPUs and the amount of memory purchased may need to be increased. Table 6 provides configuration details for the virtual desktop servers and network hardware. Table 6. Server hardware
Servers for virtual desktops (Notes: total server capacity required to host virtual desktops):
- Memory: 2 GB RAM per desktop; 1 TB RAM across all servers for 500 virtual desktops; 2 TB RAM across all servers for 1,000 virtual desktops; 4 TB RAM across all servers for 2,000 virtual desktops
- CPU: 1 vCPU per desktop (8 desktops per core); 63 cores across all servers for 500 virtual desktops; 125 cores across all servers for 1,000 virtual desktops; 250 cores across all servers for 2,000 virtual desktops
- Network: six 1 GbE NICs per standalone server for 500 virtual desktops; three 10 GbE NICs per blade chassis or six 1 GbE NICs per standalone server for 1,000/2,000 desktops 56

57 Solution Architectural Overview VMware vsphere memory virtualization for VSPEX VMware vsphere 5 has a number of advanced features that help to maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section describes some of these features and the items you need to consider when using them in the environment. In general, you can consider virtual machines on a single hypervisor consuming memory as a pool of resources. Figure 13 shows an example of memory consumption at the hypervisor level. Figure 13. Hypervisor memory consumption Memory over-commitment Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vsphere host. Using sophisticated techniques, such as ballooning and transparent page sharing, vsphere can handle memory over-commitment without any performance degradation. However, if the virtual machines are actively using more memory than is physically present on the server, vsphere might resort to swapping out portions of a virtual machine's memory. 57
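As a minimal sketch of the over-commitment concept described above, the following Python snippet computes the ratio of memory allocated to virtual machines versus the physical memory of a host; the function name and the example host configuration are illustrative assumptions, not values mandated by this solution.

```python
def overcommitment_ratio(vm_count, ram_gb_per_vm, host_physical_ram_gb):
    """Ratio of RAM allocated to virtual machines versus physical host RAM.

    A value above 1.0 means the host relies on techniques such as transparent
    page sharing and ballooning; values well above 1.0 risk hypervisor swapping.
    """
    return (vm_count * ram_gb_per_vm) / host_physical_ram_gb

# Example: 64 desktops at 2 GB each on a hypothetical 128 GB host gives 1.0
# (no over-commitment), which is how this solution was validated.
print(overcommitment_ratio(64, 2, 128))   # 1.0
print(overcommitment_ratio(80, 2, 128))   # 1.25 (over-committed)
```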

58 Solution Architectural Overview Non-Uniform Memory Access vsphere uses a Non-Uniform Memory Access (NUMA) load-balancer to assign a home node to a virtual machine. Because memory for the virtual machine is allocated from the home node, memory access is local and provides the best performance possible. Applications that do not directly support NUMA also benefit from this feature. Transparent page sharing Virtual machines running similar operating systems and applications typically have identical sets of memory content. Page sharing allows the hypervisor to reclaim the redundant copies and keep only one copy, which reduces total host memory consumption. If most of your application virtual machines run the same operating system and application binaries, then total memory usage can be reduced to increase consolidation ratios. Memory ballooning By using a balloon driver loaded in the guest operating system, the hypervisor can reclaim host physical memory if memory resources are under contention. This is accomplished with little or no impact on the performance of the application. Memory configuration guidelines This section provides guidelines for allocating memory to virtual machines. The guidelines outlined here take into account vsphere memory overhead and the virtual machine memory settings. vsphere memory overhead The virtualization of memory resources has associated overhead. The memory space overhead has two components: a fixed system overhead for the VMkernel, and additional overhead for each virtual machine. The VMkernel overhead is fixed, while the additional overhead for each virtual machine depends on the number of virtual CPUs and the amount of memory configured for the guest operating system. Allocating memory to virtual machines The proper sizing of memory for a virtual machine in VSPEX architectures is based on many factors. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments, as discussed later in this paper. Table 10 outlines the resources used by a single virtual machine. Network configuration guidelines Overview This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines outlined here take into account jumbo frames, VLAN, and Link Aggregation Control Protocol (LACP) on EMC unified storage. Table 3 on page 51 provides detailed network resource requirements. 58

59 Solution Architectural Overview VLAN It is a best practice to isolate network traffic so that traffic between hosts and storage, traffic between hosts and clients, and management traffic all move over isolated networks. In some cases physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation using VLANs is sufficient. This solution calls for a minimum of three VLANs: Client access Storage Management These VLANs are illustrated in Figure 14. Figure 14. Required networks Note The diagram demonstrates the network connectivity requirements for a VNX array using 10 GbE network connections. A similar topology should be created for an array using 1 GbE network connections. The client access network is for users of the system, or clients, to communicate with the infrastructure. The storage network is used for communication between the compute layer and the storage layer. The management network gives administrators a dedicated way to access the management connections on the storage array, network switches, and hosts. 59

60 Solution Architectural Overview Note Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. These additional networks may be implemented if desired, but they are not required. Note If the Fibre Channel storage network option is chosen for the deployment, similar best practices and design principles apply. Enable jumbo frames This EMC VSPEX End-User Computing solution recommends that the MTU be set to 9,000 (jumbo frames) for efficient storage and migration traffic. Link aggregation A link aggregation resembles an Ethernet channel but uses the Link Aggregation Control Protocol (LACP) IEEE 802.3ad standard. The IEEE 802.3ad standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost in the Ethernet port, the link fails over to another port. All network traffic is distributed across the active links. Storage configuration guidelines Overview vsphere allows more than one method of using storage when hosting virtual machines. The solutions described in this section and in Table 7 were tested using NFS, and the storage layout described adheres to all current best practices. Educated customers and architects can make modifications based on their understanding of the system's usage and load, if required. Table 7. Storage hardware
Storage:
- Common: two 10 GbE interfaces per Data Mover; two 8 Gb FC ports per storage processor (FC variant only) 60

61 Solution Architectural Overview Table 7. Storage hardware (continued)
Storage, continued (Notes: VNX shared storage for virtual desktops):
- For 500 virtual desktops: two Data Movers (active/standby, NFS variant only); fifteen 300 GB, 15 k rpm 3.5-inch SAS disks; three 100 GB, 3.5-inch flash drives
- For 1,000 virtual desktops: two Data Movers (active/standby, NFS variant only); twenty-six 300 GB, 15 k rpm 3.5-inch SAS disks; three 100 GB, 3.5-inch flash drives
- For 2,000 virtual desktops: three Data Movers (2 active/1 standby, NFS variant only); forty-six 300 GB, 15 k rpm 3.5-inch SAS disks; five 100 GB, 3.5-inch flash drives
Storage, continued (Notes: optional for user data):
- For 500 virtual desktops: nine 2 TB, 7,200 rpm 3.5-inch NL-SAS disks
- For 1,000 virtual desktops: seventeen 2 TB, 7,200 rpm 3.5-inch NL-SAS disks
- For 2,000 virtual desktops: thirty-four 2 TB, 7,200 rpm 3.5-inch NL-SAS disks
Storage, continued (Notes: optional for infrastructure storage):
- For 500 virtual desktops: five 300 GB, 15 k rpm 3.5-inch SAS disks
- For 1,000 virtual desktops: five 300 GB, 15 k rpm 3.5-inch SAS disks
- For 2,000 virtual desktops: five 300 GB, 15 k rpm 3.5-inch SAS disks
VMware vsphere storage virtualization for VSPEX VMware ESXi provides host-level storage virtualization. It virtualizes the physical storage and presents the virtualized storage to the virtual machine. A virtual machine stores its operating system, and all other files that are related to the virtual machine activities, in a virtual disk. The virtual disk itself is one file or multiple files. VMware uses a virtual SCSI controller to present the virtual disk to the guest operating system running inside the virtual machine. 61

62 Solution Architectural Overview The virtual disk resides in a datastore. Depending on the type used, the virtual disk can reside in either a VMware Virtual Machine File System (VMFS) datastore or an NFS datastore. Figure 15 shows the details. Figure 15. VMware virtual disk types VMFS VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or network storage. Raw Device Mapping In addition, VMware provides a mechanism named Raw Device Mapping (RDM). RDM allows a virtual machine to directly access a volume on the physical storage and can be used only with Fibre Channel or iSCSI. NFS VMware also supports use of NFS file systems from external NAS storage systems or devices as virtual machine datastores. In this VSPEX solution, VMFS is used for the FC variant and NFS for the NFS variant. Storage layout for 500 virtual desktops Core storage layout Figure 16 on page 63 illustrates the layout of the disks that are required to store 500 virtual desktops. This layout does not include space for user profile data. 62

63 Solution Architectural Overview Figure 16. Core storage layout for 500 virtual desktops Core storage layout overview The following core configuration is used in the reference architecture for 500 desktop virtual machines: Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE. The disks shown here as 0_0_4 and 1_0_0 are hot spares. These disks are marked as hot spares in the storage layout diagram. Ten SAS disks (shown here as 0_0_5 to 0_0_14) on the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. Note For NAS, ten LUNs of 200 GB each are carved out of the pool to provide the storage required to create four NFS file systems. The file systems are presented to the vsphere servers as four NFS datastores. For FC, four LUNs of 500 GB each are carved out of the pool to present to the vsphere servers as four VMFS datastores. Two flash drives (shown here as 1_0_1 and 1_0_2) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives. Disks shown here as 1_0_3 to 1_0_14 are unbound. They were not used for testing this solution. Larger drives may be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms may give sub-optimal results. 63
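As a rough sanity check of the core pool described above, the following Python sketch compares the approximate usable pool capacity with the capacity actually carved out for datastores. It assumes the ten SAS disks form RAID 5 (4+1) groups (the document specifies the disk count but not the group size) and ignores drive-formatting and file-system overhead, so the figures are indicative only.

```python
# Core pool for 500 desktops: ten 300 GB SAS disks in RAID 5, from which
# ten 200 GB LUNs (NFS variant) or four 500 GB LUNs (FC variant) are carved.
disks = 10
disk_gb = 300
raid5_usable_fraction = 4 / 5            # assumed 4+1 groups: one parity disk per five

pool_usable_gb = disks * disk_gb * raid5_usable_fraction   # ~2,400 GB usable
carved_nfs_gb = 10 * 200                 # ten 200 GB LUNs backing four NFS file systems
carved_fc_gb = 4 * 500                   # four 500 GB LUNs presented as VMFS datastores
desktops = 500

print(pool_usable_gb, carved_nfs_gb, carved_fc_gb)   # 2400.0 2000 2000
print(carved_nfs_gb / desktops)                      # 4.0 GB per desktop (VMDK and vswap)
```

The 4 GB-per-desktop result is consistent with the available storage capacity per reference virtual desktop listed in Table 10 later in this chapter.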

64 Solution Architectural Overview Optional user data storage layout In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 17. This storage is in addition to the core storage shown above. If storage for user data exists elsewhere in the production environment, this storage is not required. Figure 17. Optional storage layout for 500 virtual desktops Optional storage layout overview The virtual desktops use two shared file systems: one for user profiles, and the other to redirect user storage that resides in home directories. In general, redirecting users' data out of the base image onto VNX for File enables centralized administration, backup, and recovery, and makes the desktops more stateless. Each file system is exported to the environment through a CIFS share. The following optional configuration is used in the reference architecture for 500 virtual desktops: The disk shown here as 0_1_13 is a hot spare. This disk is marked as a hot spare in the storage layout diagram. Five SAS disks (shown here as 0_1_0 to 0_1_4) on the RAID 5 storage pool 2 are used to store the infrastructure virtual machines. A 1 TB LUN or NFS file system is carved out of the pool to present to the vsphere servers as a VMFS or an NFS datastore. Eight NL-SAS disks (shown here as 0_1_5 to 0_1_12) on the RAID 6 storage pool 3 are used to store user data and roaming profiles. Ten LUNs of 1 TB each are carved out of the pool to provide the storage required to create two CIFS file systems. The disk shown here as 0_1_14 is unbound. It was not used for testing this solution. If multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop datastores, but it can provide performance improvements when implemented for user data and roaming profiles. Storage layout for 1,000 virtual desktops Core storage layout Figure 18 on page 65 illustrates the layout of the disks that are required to store 1,000 desktop virtual machines. This layout does not include space for user profile data. 64

65 Solution Architectural Overview Figure 18. Core storage layout for 1,000 virtual desktops Core storage layout overview The following core configuration is used in the reference architecture for 1,000 virtual desktops: Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE. The disks shown here as 0_0_4 and 1_0_10 are hot spares. These disks are marked as hot spares in the storage layout diagram. Twenty SAS disks (shown here as 0_0_5 to 0_0_14 and 1_0_0 to 1_0_9) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. Note For NAS, ten LUNs of 400 GB each are carved out of the pool to provide the storage required to create eight NFS file systems. The file systems are presented to the vsphere servers as eight NFS datastores. For FC, eight LUNs of 500 GB each are carved out of the pool to present to the vsphere servers as eight VMFS datastores. Two flash drives (shown here as 1_0_11 and 1_0_12) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives. The disks shown here as 1_0_13 and 1_0_14 are unbound. They were not used for testing this solution. Larger drives may be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms may give sub-optimal results. Optional user data storage layout In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 19 on page 66. This storage is in addition to the core storage shown above. If storage for user data exists elsewhere in the production environment, this storage is not required. 65

66 Solution Architectural Overview Figure 19. Optional storage layout for 1,000 virtual desktops Optional storage layout overview The virtual desktops use two shared file systems: one for user profiles and the other to redirect user storage that resides in home directories. In general, redirecting users' data out of the base image onto VNX for File enables centralized administration, backup, and recovery, and makes the desktops more stateless. Each file system is exported to the environment through a CIFS share. The following optional configuration is used in the solution stack architecture: The disk shown here as 1_1_6 is a hot spare. This disk is marked as a hot spare in the storage layout diagram. Five SAS disks (shown here as 0_1_0 to 0_1_4) in the RAID 5 storage pool 2 are used to store the infrastructure virtual machines. A 1 TB LUN or NFS file system is carved out of the pool to present to the vsphere servers as a VMFS or NFS datastore. Sixteen NL-SAS disks (shown here as 0_1_5 to 0_1_14, and 1_1_0 to 1_1_5) in the RAID 6 storage pool 3 are used to store user data and roaming profiles. Ten LUNs of 1.5 TB each are carved out of the pool to provide the storage required to create two CIFS file systems. The disks shown here as 1_1_7 to 1_1_14 are unbound. They were not used for testing this solution. If multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop datastores, but it can provide performance improvements when implemented for user data and roaming profiles. 66

67 Solution Architectural Overview Storage layout for 2,000 virtual desktops Core storage layout Figure 20 illustrates the layout of the disks that are required to store 2,000 desktop virtual machines. This layout does not include space for user profile data. Figure 20. Core storage layout for 2,000 virtual desktops Core storage layout overview The following core configuration is used in the reference architecture for 2,000 virtual desktops: Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE. The disks shown here as 0_0_4, 1_0_12, and 1_1_5 are hot spares. These disks are marked as hot spare in the storage layout diagram. Forty SAS disks (shown here as 0_0_5 to 0_0_14, 1_0_0 to 1_0_11, 0_1_0 to 0_1_12, and 1_1_0 to 1_1_4) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. For NAS, 10 LUNs of 800 GB each are carved out of the pool to provide the storage required to create 16 NFS file systems. The file systems are presented to the vsphere servers as 16 NFS datastores. For FC, 16 LUNs of 500GB each are carved out of the pool to present to the vsphere servers as 16 VMFS datastores. Four Flash drives (shown here as 1_0_13 to 1_0_14 and 0_1_13 to 0_1_14) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives. The disks shown here as 1_1_6 to 1_1_14 are unbound. They were not used for testing this solution. 67

68 Solution Architectural Overview Note Larger drives may be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms may give sub-optimal results. Optional user data storage layout In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 21. This storage is in addition to the core storage shown above. If storage for user data exists elsewhere in the production environment, this storage is not required. Figure 21. Optional storage layout for 2,000 virtual desktops Optional storage layout overview The virtual desktops use two shared file systems: one for user profiles, and the other to redirect user storage that resides in home directories. In general, redirecting users' data out of the base image onto VNX for File enables centralized administration, backup, and recovery, and makes the desktops more stateless. Each file system is exported to the environment through a CIFS share. The following optional configuration is used in the solution stack architecture: The disks shown here as 1_2_14 and 0_3_8 are hot spares. These disks are marked as hot spares in the storage layout diagram. Five SAS disks (shown here as 0_2_0 to 0_2_4) in the RAID 5 storage pool 2 are used to store the infrastructure virtual machines. A 1 TB LUN or NFS file system is carved out of the pool to present to the vsphere servers as a VMFS or NFS datastore. Thirty-two NL-SAS disks (shown here as 0_2_5 to 0_2_14, 1_2_0 to 1_2_13, and 0_3_0 to 0_3_7) in the RAID 6 storage pool 3 are used to store user data 68

69 Solution Architectural Overview and roaming profiles. Ten LUNs of 3 TB each are carved out of the pool to provide the storage required to create two CIFS file systems. The disks shown here as 0_3_9 to 0_3_14 are unbound. They were not used for testing this solution. If multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop datastores, but it can provide performance improvements when implemented for user data and roaming profiles. High availability and failover Introduction This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive most single-unit failures with minimal to no impact on business operations. Virtualization layer As indicated earlier, configuring high availability in the virtualization layer and allowing the hypervisor to automatically restart virtual machines that fail is recommended. Figure 22 illustrates the hypervisor layer responding to a failure in the compute layer. Figure 22. High availability at the virtualization layer Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure will attempt to keep as many services running as possible. Compute layer While this solution offers flexibility in the type of servers to be used in the compute layer, using enterprise-class servers designed for the datacenter is recommended. These servers, with redundant power supplies, should be connected to separate Power Distribution Units (PDUs) in accordance with your server vendor's best practices. 69

70 Solution Architectural Overview Figure 23. Redundant power supplies Configuring high availability in the virtualization layer is also recommended. This means that the compute layer must be configured with enough resources so that the total number of available resources meets the needs of the environment, even with a server failure, as demonstrated in Figure 22. Network layer The advanced networking features of the VNX family provide protection against network connection failures at the array. Each vsphere host has multiple connections to user and storage Ethernet networks to guard against link failures. These connections should be spread across multiple Ethernet switches to guard against component failure in the network. Figure 24. Network layer high availability 70

71 Solution Architectural Overview By designing the network with no single points of failure, you can ensure that the compute layer will be able to access storage and communicate with users even if a component fails. Storage layer The VNX family is designed for five 9s availability by using redundant components throughout the array. All of the array components are capable of continued operation in the event of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can be allocated dynamically to replace a failing disk. This is shown in Figure 25. Figure 25. VNX series high availability EMC storage arrays are designed to be highly available by default. When they are configured according to the directions in their installation guides, no single-unit failures result in data loss or unavailability. Validation test profile Profile characteristics The VSPEX solution was validated with the following environment profile, shown in Table 8. Table 8. Validated environment profile
Number of virtual desktops: 500 for 500 virtual desktops; 1,000 for 1,000 virtual desktops; 2,000 for 2,000 virtual desktops 71

72 Solution Architectural Overview Table 8. Validated environment profile (continued)
Virtual desktop OS: Windows 7 Enterprise (32-bit) SP1
CPU per virtual desktop: 1 vCPU
Number of virtual desktops per CPU core: 8
RAM per virtual desktop: 2 GB
Desktop provisioning method: Machine Creation Services (MCS)
Average storage available for each virtual desktop: 4.8 GB (VMDK and VSwap)
Average IOPS per virtual desktop at steady state: 8 IOPS
Average peak IOPS per virtual desktop during boot storm: 65 IOPS (NFS variant); 84 IOPS (FC variant)
Number of datastores to store virtual desktops: 4 for 500 virtual desktops; 8 for 1,000 virtual desktops; 16 for 2,000 virtual desktops
Number of virtual desktops per datastore: 125
Disk and RAID type for datastores: RAID 5, 300 GB, 15k rpm, 3.5-inch SAS disks
Disk and RAID type for CIFS shares to host roaming user profiles and home directories (optional for user data): RAID 6, 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks
Backup environment configuration guidelines Overview This section provides guidelines to set up the backup and recovery environment for this VSPEX solution. Backup characteristics Table 9 shows how the backup environment profiles of the three stacks in this VSPEX solution were sized. Table 9. Backup profile characteristics
Number of virtual machines: 500 for 500 virtual desktops; 1,000 for 1,000 virtual desktops; 2,000 for 2,000 virtual desktops
User data: 5 TB for 500 virtual desktops; 10 TB for 1,000 virtual desktops; 20 TB for 2,000 virtual desktops (Note: 10.0 GB per desktop)
Daily change rate for the applications: 72

73 Solution Architectural Overview Table 9. Backup profile characteristics (continued)
Daily change rate for the applications: user data 2%
Retention per data type: 30 daily; 4 weekly; 1 monthly
Backup layout Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this case, the solution is deployed with an Avamar Data Store. This enables the unstructured user data to be backed up directly to the Avamar system for simple file-level recovery. This backup solution unifies the backup process with industry-leading deduplication backup software and system, and achieves the highest levels of performance and efficiency. Sizing guidelines Reference workload The following sections provide definitions of the reference workload used to size and implement the VSPEX architectures discussed in this document. They provide guidance on how to correlate those reference workloads to actual customer workloads and how that may change the end delivery from the server and network perspective. You can modify the storage definition by adding drives for greater capacity and performance, as well as by adding features like FAST Cache for desktops and FAST VP for improved user data performance. The disk layouts were created to provide support for the appropriate number of virtual desktops at the defined performance level. Decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per desktop and a reduced user experience because of higher response time. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines that have been validated by EMC. In practice, each virtual machine has its own set of requirements, which rarely fit a pre-defined idea of what a virtual machine should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. Defining the reference workload To simplify the discussion, we have defined a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can extrapolate which reference architecture to choose. 73

74 Solution Architectural Overview For the VSPEX End-User Computing solution, the reference workload is defined as a single virtual desktop. Table 10 shows the characteristics of the reference virtual machine. Table 10. Virtual desktop characteristics
Virtual desktop operating system: Microsoft Windows 7 Enterprise Edition (32-bit) SP1
Virtual processors per virtual desktop: 1
RAM per virtual desktop: 2 GB
Available storage capacity per virtual desktop: 4 GB (VMDK and VSwap)
Average IOPS per virtual desktop at steady state: 8
Average peak IOPS per virtual desktop during boot storm: 65 IOPS (NFS variant); 84 IOPS (FC variant)
This desktop definition is based on user data that resides on shared storage. The I/O profile is defined by using a test framework that runs all desktops concurrently, with a steady load generated by the constant use of office-based applications like browsers, office productivity software, and other standard task worker utilities. Applying the reference workload You may need to consider other factors, in addition to the supported desktop numbers (500, 1,000, and 2,000), when deciding which end-user computing solution to deploy. Concurrency The workloads used to validate VSPEX solutions assume that all desktop users will be active at all times. In other words, the 1,000-desktop architecture was tested with 1,000 desktops, all generating workload in parallel, all booted at the same time, and so on. If your customer expects to have 1,200 users, but only 50 percent of them will be logged on at any given time because of time zone differences or alternate shifts, the 600 active users out of the total 1,200 users can be supported by the 1,000-desktop architecture. Heavier desktop workloads The workload defined in Table 10 and used to test these VSPEX end-user computing configurations is considered a typical office worker load. However, some customers may think that their users have a more active profile. If a company has 800 users, and because of custom corporate applications each user generates 12 IOPS as compared to 8 IOPS used in the VSPEX workload, it will need 74

75 Solution Architectural Overview 9,600 IOPS (800 users * 12 IOPS per desktop). The 1,000-desktop configuration would be underpowered in this case because it has been rated to 8,000 IOPS (1,000 desktops * 8 IOPS per desktop). This customer should move up to the 2,000-desktop solution. Implementing the reference architectures The reference architectures require a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are presented as general requirements that are independent of any particular implementation. This section describes some considerations for implementing the requirements. Resource types The reference architectures define the hardware requirements for the solution in terms of four basic types of resources: CPU resources Memory resources Network resources Storage resources This section describes the resource types, how they are used in the reference architectures, and key considerations for implementing them in a customer environment. CPU resources The architectures define the number of CPU cores that are required, but not a specific type or configuration. It is assumed that new deployments use recent revisions of common processor technologies, and it is assumed that these will perform as well as, or better than, the systems used to validate the solution. In any running system, it is important to monitor the utilization of resources and adapt as needed. The reference virtual desktop and required hardware resources in the reference architectures assume that there will be no more than eight virtual CPUs for each physical processor core (8:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual desktops. However, this ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required. Memory resources Each virtual desktop in the reference architecture is defined as having 2 GB of memory. In a virtual environment, it is not uncommon to provision virtual desktops with more memory than the hypervisor physically has, because of budget constraints. The memory over-commitment technique takes advantage of the fact that each virtual desktop does not fully utilize the amount of memory allocated to it. To oversubscribe the memory usage to some degree makes business sense. The administrator has the responsibility to proactively monitor the oversubscription rate such that it does not shift the bottleneck away from the server and become a burden to the storage subsystem. If VMware vsphere runs out of memory for the guest operating systems, paging will begin to take place, resulting in extra I/O activity going to the VSwap files. If the 75

76 Solution Architectural Overview storage subsystem is sized correctly, occasional spikes because of VSwap activity may not cause performance issues, as transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of VSwap activity, more disks must be added, not because of capacity requirement but because of the demand of increased performance. It is then up to the administrator to decide whether it is more cost effective to add more physical memory to the server or to increase the amount of storage. With memory modules being a commodity, it is likely less expensive to choose the former option. This solution was validated with statically assigned memory and no over-commitment of memory resources. If memory over-commitment is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results. Network resources The reference architectures outline the minimum needs of the system. If additional bandwidth is needed, it is important to add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports and provide the option of adding ports using EMC FLEX I/O modules. For reference purposes in the validated environment, EMC assumes that each virtual desktop generates 8 IOs per second with an average size of 4 KB. Each virtual desktop is generating at least 32 KB/s of traffic on the storage network. For an environment rated for 500 virtual desktops, this equates to a minimum of approximately 16 MB/sec. This is well within the bounds of modern networks. However, this does not take into account other operations. For example, additional bandwidth is needed for: User network traffic Virtual desktop migration Administrative and management operations The requirements for each of these vary depending on how the environment is being used, so it is not practical to provide concrete numbers in this context. However, the network described in the reference architecture for each solution should be sufficient to handle average workloads for the described use cases. Regardless of the network traffic requirements, always have at least two physical network connections that are shared for a logical network so that a single link failure does not affect the availability of the system. The network should be designed so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload. Storage resources The reference architectures contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. There are a few layers to consider when examining storage sizing. Specifically, the array has a collection of disks that are assigned to a storage pool. From that storage pool, you can provision datastores to the VMware vsphere 76

77 Solution Architectural Overview cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5. It is generally acceptable to replace drive types with a type that has more capacity and the same performance characteristics or with ones that have higher performance characteristics and the same capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements. In other cases where there is a need to deviate from the proposed number and type of drives specified, or the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system. Implementation summary The requirements stated in the reference architectures are what EMC considers the minimum set of resources to handle the workloads required based on the stated definition of a reference virtual desktop. In any customer implementation, the load of a system will vary over time as users interact with the system. However, if the customer virtual desktops differ significantly from the reference definition, and vary in the same resource group, then you may need to add more of that resource to the system. Quick assessment An assessment of the customer environment will help ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment. First, summarize the user types that you plan to migrate into the VSPEX End-User Computing environment. For each group, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual desktops required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as shown in Table 11. Table 11. Blank worksheet row Application CPU (Virtual CPUs) Memory (GB) IOPS Equivalent Reference Virtual Desktops Number of Users Total Reference Desktops Example User Type Resource Requirements Equivalent Reference Desktops Fill out the resource requirements for the User Type. The row requires inputs on three different resources: CPU, Memory, and IOPS. 77

78 Solution Architectural Overview CPU requirements The reference virtual desktop assumes most desktop applications are optimized for a single CPU. If one type of user requires a desktop with multiple virtual CPUs, modify the proposed virtual desktop count to account for the additional resources. For example, if you virtualize 100 desktops, but 20 users require two CPUs instead of one, then consider that your pool needs to provide 120 virtual desktops of capability. Memory requirements Memory plays a key role in ensuring application functionality and performance. Therefore, each group of desktops will have different targets for the acceptable amount of available memory. Like the CPU calculation, if a group of users requires additional memory resources, simply adjust the number of desktops you are planning for to accommodate the additional resource requirements. For example, if you have 200 desktops that will be virtualized but each one needs 4 GB of memory, instead of the 2 GB that is provided in the reference virtual desktop, plan for 400 reference virtual desktops. Storage performance requirements The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications that should be representative of the majority of virtual desktop implementations. Storage capacity requirements The storage capacity requirements for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops presented in this solution rely on additional shared storage for user profile data and user documents. This requirement is covered as an optional component that can be met with the addition of specific storage hardware from the reference architecture or with existing file shares in the environment. Determining equivalent reference virtual desktops With all of the resources defined, determine an appropriate value for the Equivalent Reference Virtual Desktops row in Table 11 by using the relationships in Table 12. Round all values up to the nearest whole number. Table 12. Reference virtual desktop resources
CPU: value for reference virtual desktop = 1; Equivalent Reference Virtual Desktops = Resource Requirements
Memory: value for reference virtual desktop = 2; Equivalent Reference Virtual Desktops = (Resource Requirements)/2 78

79 Solution Architectural Overview Table 12. Reference virtual desktop resources (continued)
IOPS: value for reference virtual desktop = 8; Equivalent Reference Virtual Desktops = (Resource Requirements)/8
For example, if a group of 100 users needs the two virtual CPUs and 12 IOPS per desktop described earlier, along with 8 GB of memory, describe them as needing two reference desktops of CPU, four reference desktops of memory, and two reference desktops of IOPS, based on the virtual desktop characteristics in Table 10 on page 74. These figures go in the Equivalent Reference Virtual Desktops row, as shown in Table 13. Use the maximum value in the row to complete the Equivalent Reference Virtual Desktops column. Multiply the number of equivalent reference virtual desktops by the number of users to arrive at the total resource needs for that type of user. Table 13. Example worksheet row (Heavy Users): resource requirements of 2 virtual CPUs, 8 GB of memory, and 12 IOPS translate to 2, 4, and 2 equivalent reference virtual desktops respectively; the maximum value (4) multiplied by 100 users gives 400 total reference desktops. After completing the worksheet for each user type to be migrated into the virtual infrastructure, compute the total number of reference virtual desktops that are required in the pool by computing the sum of the Total column on the right side of the worksheet, as shown in Table 14. Table 14. Example applications (worksheet columns: User Type; Resource Requirements; Equivalent Reference Virtual Desktops; CPU (Virtual CPUs); Memory (GB); IOPS; Equivalent Reference Virtual Desktops; Number of Users; Total Reference Desktops), beginning with the Heavy Users row 79

80 Solution Architectural Overview Table 14. Example applications (continued): the Moderate Users and Typical Users rows follow the same worksheet columns (Resource Requirements; Equivalent Reference Virtual Desktops; CPU (Virtual CPUs); Memory (GB); IOPS; Equivalent Reference Virtual Desktops; Number of Users; Total Reference Desktops); the Total for all user types is 900. The VSPEX End-User Computing solutions define discrete resource pool sizes. For this solution set, the pool sizes are 500, 1,000, and 2,000. In the case of Table 14, the customer requires 900 virtual desktops of capability from the pool. Therefore, the resource pool of 1,000 virtual desktops provides sufficient resources for the current needs as well as room for growth. Fine-tuning hardware resources In most cases, the recommended hardware for servers and storage can be sized appropriately based on the process described. However, in some cases further customization of available hardware resources may be desired. A complete description of system architecture is beyond the scope of this document; however, additional customization can be done at this point. Storage resources In some applications, separating some storage workloads from other workloads may be necessary. The storage layouts in the VSPEX architectures put all of the virtual desktops in a single resource pool. To achieve workload separation, purchase additional disk drives for each group that needs workload isolation, and add them to a dedicated pool. It is not appropriate to reduce the size of the main storage resource pool in order to support isolation, or to reduce the capability of the pool, without additional guidance beyond this document. The storage layouts presented in this paper are designed to balance many different factors in terms of high availability, performance, and data protection. Changing the components of the pool can have significant and difficult-to-predict impacts on other areas of the system. 80
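For readers who prefer to automate the worksheet calculation described above, the following Python sketch applies the Table 12 relationships (1 vCPU, 2 GB of RAM, and 8 IOPS per reference desktop) and then selects the smallest pool that covers the total. The function names are illustrative only; the numbers reproduce the heavy-user example from the text and the 800-user, 12-IOPS scenario discussed earlier in this section.

```python
import math

POOL_SIZES = (500, 1000, 2000)

def equivalent_reference_desktops(vcpus, memory_gb, iops):
    """Table 12 relationships: 1 vCPU, 2 GB RAM, and 8 IOPS per reference desktop."""
    return max(math.ceil(vcpus / 1),
               math.ceil(memory_gb / 2),
               math.ceil(iops / 8))

def size_pool(user_groups):
    """user_groups: list of (users, vcpus, memory_gb, iops) rows from the worksheet."""
    total = sum(users * equivalent_reference_desktops(v, m, i)
                for users, v, m, i in user_groups)
    pool = next((p for p in POOL_SIZES if p >= total), None)
    return total, pool

# Heavy-user example from Table 13: 100 users, 2 vCPUs, 8 GB RAM, 12 IOPS
# -> max(2, 4, 2) = 4 equivalent reference desktops per user, 400 in total.
print(equivalent_reference_desktops(2, 8, 12))   # 4
print(size_pool([(100, 2, 8, 12)]))              # (400, 500)

# 800 users at 12 IOPS each: the per-user rounding of the worksheet method
# lands on the 2,000-desktop pool, consistent with the document's direct
# comparison of 9,600 required IOPS against the 8,000 IOPS rating of the
# 1,000-desktop configuration.
print(size_pool([(800, 1, 2, 12)]))              # (1600, 2000)
```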

81 Server resources Solution Architectural Overview In the VSPEX End-User Computing solution, it is possible to customize the server hardware resources more effectively. To do this, first total the resource requirements for the server components as shown in Table 15. Note the addition of the Total CPU Resources and Total Memory Resources columns on the right side of the table. Table 15. Server resource component totals User Type CPU (Virtual CPUs) Memory (GB) Number of Users Total CPU Resources Total Memory Resources Heavy Users Moderate Users Typical Users Resource Requirements Resource Requirements Resource Requirements Total In this example, the target architecture required 700 virtual CPUs and 1800 GB of memory. With the stated assumptions of eight desktops per physical processor core, and no memory over-provisioning, this translates to 88 physical processor cores and 1800 GB of memory. In contrast, the 1,000 virtual-desktop resource pool as documented in the reference architecture calls for 2000 GB of memory and at least 125 physical processor cores. In this environment, the solution can be implemented effectively with fewer server resources. Note Keep high availability requirements in mind when customizing the resource pool hardware. Table 16 is a blank worksheet. 81

Table 16. Blank customer worksheet
Columns: CPU (Virtual CPUs), Memory (GB), IOPS, Equivalent Reference Virtual Desktops, Number of Users, Total Reference Desktops.
Rows: a Resource Requirements row and an Equivalent Reference Virtual Desktops row for each user type (the worksheet provides space for five user types), followed by a Total row.
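As a companion to Table 15 and Table 16, the short Python sketch below (with hypothetical helper names) totals per-user-type CPU and memory and converts them into physical cores and memory, using the assumptions stated in the Server resources discussion above: eight virtual CPUs per physical processor core and no memory over-provisioning.

```python
import math

VCPUS_PER_PHYSICAL_CORE = 8   # stated sizing assumption; adjust to your hardware
MEMORY_OVERPROVISION = 1.0    # no memory over-provisioning assumed

def server_totals(user_types):
    """Return (total vCPUs, total memory in GB) across all user types,
    mirroring the Total CPU Resources / Total Memory Resources columns of Table 15."""
    total_vcpus = sum(u["cpu"] * u["users"] for u in user_types)
    total_mem_gb = sum(u["memory_gb"] * u["users"] for u in user_types)
    return total_vcpus, total_mem_gb

def physical_requirements(total_vcpus, total_mem_gb):
    """Translate virtual totals into physical cores and physical memory."""
    cores = math.ceil(total_vcpus / VCPUS_PER_PHYSICAL_CORE)
    memory_gb = math.ceil(total_mem_gb / MEMORY_OVERPROVISION)
    return cores, memory_gb

# Heavy-user group from the earlier example: 100 users at 2 vCPUs and 8 GB each.
heavy = {"cpu": 2, "memory_gb": 8, "users": 100}
print(server_totals([heavy]))            # (200, 800) for this group alone
# Figures quoted in the text: 700 vCPUs and 1,800 GB in total translate to
# 88 physical cores and 1,800 GB of physical memory.
print(physical_requirements(700, 1800))  # (88, 1800)
```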

Chapter 5  VSPEX Configuration Guidelines

This chapter presents the following topics:
Overview
Pre-deployment tasks
Customer configuration data
Prepare switches, connect network, and configure switches
Prepare and configure storage array
Install and configure VMware vSphere hosts
Install and configure SQL Server database
Install and configure VMware vCenter Server
Install and configure XenDesktop controller
Summary

Overview
Table 17 describes the stages of the solution deployment process. When the deployment is complete, the VSPEX infrastructure is ready for integration with the existing customer network and server infrastructure.

Table 17. Deployment process overview
Stage 1: Verify prerequisites (Pre-deployment tasks)
Stage 2: Obtain the deployment tools (Pre-deployment tasks)
Stage 3: Gather customer configuration data (Pre-deployment tasks)
Stage 4: Rack and cable the components (vendor's documentation)
Stage 5: Configure the switches and networks; connect to the customer network (Prepare switches, connect network, and configure switches)
Stage 6: Install and configure the VNX (Prepare and configure storage array)
Stage 7: Configure virtual machine datastores (Prepare and configure storage array)
Stage 8: Install and configure the servers (Install and configure VMware vSphere hosts)
Stage 9: Set up SQL Server, used by VMware vCenter and XenDesktop (Install and configure SQL Server database)
Stage 10: Install and configure vCenter and virtual machine networking (Install and configure VMware vCenter Server)
Stage 11: Set up the XenDesktop Controller (Install and configure XenDesktop controller)
Stage 12: Test and install (Validating the solution)

Pre-deployment tasks
Overview
Pre-deployment tasks include procedures that do not directly relate to environment installation and configuration, but whose results are needed at the time of installation. Examples of pre-deployment tasks are collection of hostnames, IP addresses, VLAN IDs, license keys, installation media, and so on. Perform these tasks, shown in Table 18, before the customer visit to decrease the time required onsite.

Table 18. Tasks for pre-deployment
Gather documents: Gather the related documents listed in the references (EMC documentation and other documentation). These are used throughout this document to provide detail on setup procedures and deployment best practices for the various components of the solution.
Gather tools: Gather the required and optional tools for the deployment. Use Table 19 to confirm that all equipment, software, and appropriate licenses are available before the deployment process.
Gather data: Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information on the customer configuration data worksheet (Appendix B) for reference during the deployment process.

Deployment prerequisites
Complete the VNX Block Configuration Worksheet (for the Fibre Channel variant) or the VNX File and Unified Worksheet (for the NFS variant), available on the EMC Online Support website, to provide the most comprehensive array-specific information.

Table 19 itemizes the hardware, software, and license requirements for the solution. For additional information, refer to the hardware and software tables in this guide.

Table 19. Deployment prerequisites checklist
Hardware:
  Physical servers to host virtual desktops: sufficient physical server capacity to host the desktops.
  VMware vSphere 5.1 servers to host the virtual infrastructure servers. Note: This requirement may be covered by existing infrastructure.
  Networking: switch port capacity and capabilities as required by the end-user computing solution.
  EMC VNX: multiprotocol storage array with the required disk layout.
Software:
  VMware ESXi 5.1 installation media
  VMware vCenter Server 5.1 installation media
  Citrix XenDesktop 5.6 installation media
  EMC VSI for VMware vSphere: Unified Storage Management (EMC Online Support)
  EMC VSI for VMware vSphere: Storage Viewer (EMC Online Support)
  Microsoft Windows Server 2008 R2 installation media (suggested OS for VMware vCenter and the Citrix Desktop Controller)
  Microsoft Windows Server 2012 installation media (AD/DHCP/DNS)
  Microsoft Windows 7 SP1 installation media
  Microsoft SQL Server 2008 or newer installation media. Note: This requirement may be covered by the existing infrastructure.
Software, FC variant only:
  EMC PowerPath Viewer (EMC Online Support)
  EMC PowerPath Virtual Edition (EMC Online Support)
Software, NFS variant only:
  EMC vStorage API for Array Integration plug-in (EMC Online Support)
Licenses:
  VMware vCenter 5.1 license key
  VMware vSphere 5.1 Desktop license keys
  Citrix XenDesktop 5.6 license files
  Microsoft Windows Server 2008 R2 Standard (or higher) license keys
  Microsoft Windows Server 2012 Standard (or higher) license keys. Note: This requirement may be covered by an existing Microsoft Key Management Server (KMS).
  Microsoft Windows 7 license keys. Note: This requirement may be covered by an existing Microsoft Key Management Server (KMS).
  Microsoft SQL Server license key. Note: This requirement may be covered by the existing infrastructure.
Licenses, FC variant only:
  EMC PowerPath Virtual Edition license files

Customer configuration data
To reduce the onsite time, information such as IP addresses and hostnames should be assembled as part of the planning process. Appendix B provides a table to maintain a record of relevant information. This form can be expanded or contracted as required, and information may be added, modified, and recorded as deployment progresses.

Additionally, complete the VNX Block Configuration Worksheet (for the Fibre Channel variant) or the VNX File and Unified Worksheet (for the NFS variant), available on the EMC Online Support website, to provide the most comprehensive array-specific information.

Prepare switches, connect network, and configure switches
Overview
This section provides the requirements for the network infrastructure that supports this architecture. Table 20 summarizes the tasks to complete, along with references for further information.

Table 20. Tasks for switch and network configuration
Configure infrastructure network: Configure storage array and ESXi host infrastructure networking as specified in Solution architecture on page 42.
Configure storage network (FC variant): Configure Fibre Channel switch ports, and zoning for the ESXi hosts and the storage array. (Reference: your vendor's switch configuration guide)
Configure VLANs: Configure private and public VLANs as required. (Reference: your vendor's switch configuration guide)
Complete network cabling: Connect the switch interconnect ports, the VNX ports, and the ESXi server ports.

Prepare network switches
For validated levels of performance and high availability, this solution requires the switching capacity listed in this document's Solution hardware table. If the existing infrastructure meets the requirements, new hardware installation is not necessary.

Configure infrastructure network
The infrastructure network requires redundant network links for each ESXi host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and it is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 26 and Figure 27 show a sample redundant Ethernet infrastructure for this solution. The diagrams illustrate the use of redundant switches and links to ensure that no single points of failure exist in network connectivity.

Figure 26. Sample Ethernet network architecture for 500 and 1,000 virtual desktops

Figure 27. Sample Ethernet network architecture for 2,000 virtual desktops (the diagram shows redundant switches with interswitch links, customer network uplinks, the virtual desktop network, the VM management/vMotion network, the storage network for either variant (NFS or FC), the customer data (CIFS) network, additional ESXi blades, and management connections including the secondary Control Station in standby)

Configure VLANs
Ensure that you have an adequate number of switch ports for the storage array and ESXi hosts, configured with a minimum of three VLANs for:
Virtual machine networking, ESXi management, and CIFS traffic (customer-facing networks, which may be separated if desired)
NFS networking (private network)
vMotion (private network)

Complete network cabling
Ensure that all solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is complete connectivity to the existing customer network.

Note: At this point, the new equipment is being connected to the existing customer network. Take care to ensure that unforeseen interactions do not cause service issues on the customer network.

Prepare and configure storage array
VNX configuration
Overview
This section describes how to configure the VNX storage array. In this solution, the VNX series provides Network File System (NFS) or Fibre Channel SAN-connected block storage for the VMware hosts. Table 21 shows the tasks for the storage configuration.

Table 21. Tasks for storage configuration
Set up the initial VNX configuration: Configure the IP address information and other key parameters on the VNX. (References: VNX5300 Unified Installation Guide, VNX5500 Unified Installation Guide, VNX File and Unified Worksheet, Unisphere System Getting Started Guide, your vendor's switch configuration guide)
Provision storage for VMFS datastores (FC variant only): Create FC LUNs that will be presented to the ESXi servers as VMFS datastores hosting the virtual desktops.
Provision storage for NFS datastores (NFS variant only): Create NFS file systems that will be presented to the ESXi servers as NFS datastores hosting the virtual desktops.
Provision optional storage for user data: Create CIFS file systems that will be used to store roaming user profiles and home directories.
Provision optional storage for infrastructure virtual machines: Create optional VMFS/NFS datastores to host the SQL Server, domain controller, vCenter Server, and/or XenDesktop controller virtual machines.

Prepare VNX
The VNX5300 Unified Installation Guide provides instructions on assembly, racking, cabling, and powering the VNX. For 2,000 virtual desktops, refer to the VNX5500 Unified Installation Guide instead. There are no specific setup steps for this solution.

Set up the initial VNX configuration
After completing the initial VNX setup, you must configure key information about the existing environment so that the storage array can communicate with it. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information:
DNS
NTP
Storage network interfaces
Storage network IP address
CIFS services and Active Directory domain membership

The reference documents listed in Table 21 provide more information on how to configure the VNX platform. Storage configuration guidelines on page 60 provides more information on the disk layout.

Provision core data storage
Overview
Core data storage is the repository for the virtual desktops' operating system data. It consists of VMFS datastores for the FC variant or NFS datastores for the NFS variant. Figure 16, Figure 18, and Figure 20 depict the target storage layout for both the Fibre Channel (FC) and NFS variants of the three solution stacks in this VSPEX solution. The following sections describe the provisioning steps for both variants.

Provision storage for VMFS datastores (FC variant only)
Complete the following steps in the EMC Unisphere interface to configure the FC LUNs on the VNX that will be used to store the virtual desktops:
1. Create a block-based RAID 5 storage pool that consists of ten, twenty, or forty 300 GB SAS drives (ten drives for 500 virtual desktops, twenty for 1,000 virtual desktops, or forty for 2,000 virtual desktops). Enable FAST Cache for the storage pool.
   a. Log in to EMC Unisphere.
   b. Choose the array that will be used in this solution.
   c. Go to Storage -> Storage Configuration -> Storage Pools.
   d. Go to the Pools tab.
   e. Click Create.
   Note: Create your hot spare disks at this time. Consult the EMC VNX Unified Installation Guide for additional information.
2. In the block storage pool, create four, eight, or sixteen LUNs of 500 GB each (four LUNs for 500 virtual desktops, eight LUNs for 1,000 virtual desktops, or sixteen LUNs for 2,000 virtual desktops), and present them to the ESXi servers as VMFS datastores.
   a. Go to Storage -> LUNs.
   b. Click Create.
   c. In the dialog box, choose the pool created in step 1, MAX for User Capacity, and 4, 8, or 16 for Number of LUNs to create. The LUNs are provisioned after this operation.
3. Configure a storage group to allow the ESXi servers to access the newly created LUNs.
   a. Go to Hosts -> Storage Groups.
   b. Create a new storage group.
   c. Select the LUNs and ESXi hosts to be added to this storage group.

Provision storage for NFS datastores (NFS variant only)
Complete the following steps in EMC Unisphere to configure the NFS file systems on the VNX that will be used to store the virtual desktops:
1. Create a block-based RAID 5 storage pool that consists of ten, twenty, or forty 300 GB SAS drives (ten drives for 500 virtual desktops, twenty drives for 1,000 virtual desktops, or forty drives for 2,000 virtual desktops). Enable FAST Cache for the storage pool.
   a. Log in to EMC Unisphere.
   b. Choose the array that will be used in this solution.
   c. Go to Storage -> Storage Configuration -> Storage Pools.
   d. Go to the Pools tab.
   e. Click Create.
   Note: Create your hot spare disks at this time. Consult the EMC VNX Unified Installation Guide for additional information.
2. Create ten LUNs in the block storage pool, and present them to the Data Mover as dvols in the system-defined NAS pool. Each LUN should be 200 GB for 500 virtual desktops, 400 GB for 1,000 virtual desktops, or 800 GB for 2,000 virtual desktops.
   a. Go to Storage -> LUNs.
   b. Click Create.
   c. In the dialog box, choose the pool created in step 1, MAX for User Capacity, and 10 for Number of LUNs to create.
   Note: Ten LUNs are created because EMC Performance Engineering recommends creating approximately one LUN for every four drives in the storage pool, and creating LUNs in even multiples of ten. Refer to the EMC VNX Unified Best Practices for Performance Applied Best Practices Guide.
   d. Go to Hosts -> Storage Groups.
   e. Choose filestorage.
   f. Click Connect LUNs.
   g. In the Available LUNs panel, choose the ten LUNs you just created. The LUNs immediately appear in the Selected LUNs panel.
   h. The Volume Manager automatically detects the new storage pool for file, or you can click Rescan Storage System under Storage Pool for File to scan for it immediately. Do not proceed until the new storage pool for file is present in the GUI.
3. Create four, eight, or sixteen file systems of 500 GB each (four file systems for 500 virtual desktops, eight for 1,000, or sixteen for 2,000), and present them to the ESXi servers as NFS datastores.
   a. Go to Storage -> Storage Configuration -> File Systems.
   b. Click Create.
   c. In the dialog box, choose Create from Storage Pool.
   d. Enter the Storage Capacity, for example, 500 GB.
   e. Leave everything else set to the default values.

Note: To enable an NFS performance fix for VNX File that significantly reduces NFS write latency, the file systems must be mounted on the Data Mover using the Direct Writes mode, as shown in Figure 28. The Set Advanced Options checkbox must be selected to enable the Direct Writes Enabled checkbox.

Figure 28. Set Direct Writes Enabled checkbox

4. Export the file systems using NFS, and give root access to the ESXi servers.
   a. Go to Storage -> Shared Folders -> NFS.
   b. Click Create.
   c. In the dialog box, add the IP addresses of all ESXi servers, separated by colons, in Root Hosts.
5. In Unisphere, increase the number of NFS threads on each Data Mover:
   a. Click Settings > Data Mover Parameters to make changes to the Data Mover configuration.
   b. Click the list menu to the right of Set Parameters and choose All Parameters, as shown in Figure 29.
   c. Scroll down to the nthreads parameter, as shown in Figure 30.
   d. Click Properties to update the setting.

The default number of threads dedicated to serving NFS requests is 384 per Data Mover on the VNX. Because this solution requires up to 2,000 desktop connections, increase the number of active NFS threads to a maximum of 1,024 (for 500 virtual desktops) or 2,048 (for 1,000 and 2,000 virtual desktops) on each Data Mover.

Figure 29. View all Data Mover parameters

Figure 30. Set nthreads parameter

FAST Cache configuration
To configure FAST Cache on the storage pools for this solution, complete the following steps:
1. Configure the flash drives as FAST Cache.
   a. Click Properties (in the dashboard of the Unisphere window) or Manage Cache (in the left-hand pane of the Unisphere window) to open the Storage System Properties dialog box, shown in Figure 31.
   b. Click the FAST Cache tab to view FAST Cache information.

Figure 31. Storage System Properties dialog box

   c. Click Create to open the Create FAST Cache dialog box, shown in Figure 32.
   d. The RAID Type field is displayed as RAID 1 when the FAST Cache has been created.
   e. Choose the number of flash drives. The bottom portion of the window shows the flash drives that will be used to create FAST Cache. You can choose the drives manually by selecting the Manual option. Refer to Storage configuration guidelines to determine the number of flash drives used in this solution.
   Note: If a sufficient number of flash drives is not available, an error message is displayed and FAST Cache cannot be created.

Figure 32. Create FAST Cache dialog box

2. Enable FAST Cache on the storage pool.

If a LUN is created in a storage pool, FAST Cache for that LUN can be configured only at the storage pool level. In other words, all the LUNs created in the storage pool have FAST Cache either enabled or disabled. You can configure this on the Advanced tab of the Create Storage Pool dialog box, shown in Figure 33. After FAST Cache is installed in the VNX series, it is enabled by default when a storage pool is created.

Figure 33. Advanced tab in the Create Storage Pool dialog box

If the storage pool has already been created, use the Advanced tab of the Storage Pool Properties dialog box, shown in Figure 34, to configure FAST Cache.

Figure 34. Advanced tab in the Storage Pool Properties dialog box

Note: The FAST Cache feature on the VNX series array does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours, during which the performance of the array steadily improves.
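The core-storage parameters in the preceding provisioning steps scale with the number of desktops. The minimal Python sketch below simply encodes the values quoted above (SAS drive counts, LUN and file system counts and sizes, and NFS thread counts) as a lookup to reduce transcription errors during planning. It is an illustrative helper, not an EMC tool; verify the figures against the steps above and the storage layout figures before use.

```python
# Core-storage parameters per scale point, as quoted in the provisioning steps
# above (illustrative planning helper only; verify against this guide).
CORE_STORAGE = {
    500:  {"sas_300gb_drives": 10, "fc_luns": (4, 500),    # (count, size in GB)
           "nfs_dvol_luns": (10, 200), "nfs_file_systems": (4, 500),
           "nfs_threads": 1024},
    1000: {"sas_300gb_drives": 20, "fc_luns": (8, 500),
           "nfs_dvol_luns": (10, 400), "nfs_file_systems": (8, 500),
           "nfs_threads": 2048},
    2000: {"sas_300gb_drives": 40, "fc_luns": (16, 500),
           "nfs_dvol_luns": (10, 800), "nfs_file_systems": (16, 500),
           "nfs_threads": 2048},
}

def core_storage_plan(desktops, variant="NFS"):
    """Return a short summary of the core-storage layout for a supported
    scale point (500, 1,000, or 2,000 desktops) and variant (FC or NFS)."""
    p = CORE_STORAGE[desktops]
    lines = [f"RAID 5 pool: {p['sas_300gb_drives']} x 300 GB SAS drives (FAST Cache enabled)"]
    if variant.upper() == "FC":
        count, size = p["fc_luns"]
        lines.append(f"{count} x {size} GB LUNs presented as VMFS datastores")
    else:
        count, size = p["nfs_dvol_luns"]
        fs_count, fs_size = p["nfs_file_systems"]
        lines.append(f"{count} x {size} GB LUNs presented to the Data Mover as dvols")
        lines.append(f"{fs_count} x {fs_size} GB file systems exported as NFS datastores")
        lines.append(f"nthreads raised to {p['nfs_threads']} per Data Mover")
    return "\n".join(lines)

print(core_storage_plan(1000, variant="NFS"))
```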

Provision optional storage for user data
If the storage required for user data (that is, roaming user profiles and home directories) does not already exist in the production environment, and the optional user data disk pack has been purchased, complete the following steps in Unisphere to configure two CIFS file systems on the VNX:
1. Create a block-based RAID 6 storage pool that consists of eight, sixteen, or twenty-two 2 TB NL-SAS drives (eight drives for 500 virtual desktops, sixteen drives for 1,000 virtual desktops, or twenty-two drives for 2,000 virtual desktops). Figure 17, Figure 19, and Figure 21 depict the target user data storage layout for the solution.
2. Create ten LUNs in the block storage pool, and present them to the Data Mover as dvols in the system-defined NAS pool. Each LUN should be 1 TB for 500 virtual desktops, 1.5 TB for 1,000 virtual desktops, or 3 TB for 2,000 virtual desktops.
3. Create two file systems from the system-defined NAS pool containing the new LUNs. Export the file systems as CIFS shares.

FAST VP configuration (optional)
Optionally, you can configure FAST VP to automate data movement between storage tiers. There are two ways to configure FAST VP: at the storage pool level and at the LUN level.

Configure FAST VP at the pool level
Click Properties for a specific storage pool to open the Storage Pool Properties dialog box. Figure 35 shows the tiering information for a FAST VP enabled pool.

Figure 35. Storage Pool Properties window

The Tier Status section of the window shows FAST VP relocation information specific to the selected pool. Scheduled relocation can be selected at the pool level from the list menu labelled Auto-Tiering, which can be set to either Automatic or Manual. In the Tier Details section, users can see the exact distribution of their data.

Users can also connect to the array-wide Relocation Schedule using the button located in the top-right corner, which presents the Manage Auto-Tiering window shown in Figure 36.

Figure 36. Manage Auto-Tiering window

From this status window, users can control the Data Relocation Rate. The default rate is set to Medium so as not to significantly affect host I/O.

Note: As its name implies, FAST VP is a completely automated tool. To this end, relocations can be scheduled to occur automatically. Scheduling relocations during off-hours, to minimize any potential performance impact they may cause, is recommended.

Configure FAST VP at the LUN level
Some FAST VP properties are managed at the LUN level. Click Properties for a specific LUN. In this dialog box, click the Tiering tab to view the tiering information for this single LUN, as shown in Figure 37.

Figure 37. LUN Properties window

The Tier Details section displays the current distribution of slices within the LUN. The tiering policy can be selected at the LUN level from the list menu labelled Tiering Policy.

Provision optional storage for infrastructure virtual machines
If the storage required for the infrastructure virtual machines (that is, SQL Server, domain controller, vCenter Server, and/or XenDesktop controllers) does not already exist in the production environment, and the optional disk pack has been purchased, configure an NFS file system on the VNX to be used as an NFS datastore in which the infrastructure virtual machines reside. Repeat the configuration steps shown in Provision storage for NFS datastores (NFS variant only) to provision the optional storage, taking into account the smaller number of drives.

Install and configure VMware vSphere hosts
Overview
This section provides information about the installation and configuration of the ESXi hosts and infrastructure servers required to support the architecture. Table 22 lists the tasks that must be completed.

Table 22. Tasks for server installation
Install ESXi: Install the ESXi 5.1 hypervisor on the physical servers deployed for the solution. (Reference: vSphere Installation and Setup Guide)

Configure ESXi networking: Configure ESXi networking, including NIC trunking, VMkernel ports, virtual machine port groups, and jumbo frames. (Reference: vSphere Networking)
Add ESXi hosts to VNX storage groups (FC variant): Use the Unisphere console to add the ESXi hosts to the storage groups.
Connect VMware datastores: Connect the VMware datastores to the ESXi hosts deployed for the solution. (Reference: vSphere Storage Guide)

Install ESXi
Upon initial power-up of the servers being used for ESXi, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted MMU virtualization settings in the BIOS of each server. If the servers are equipped with a RAID controller, configuring mirroring on the local disks is recommended.

Boot the ESXi 5.1 installation media and install the hypervisor on each of the servers. ESXi hostnames, IP addresses, and a root password are required for installation. Appendix B provides appropriate values.

Configure ESXi networking
During the installation of VMware ESXi, a standard virtual switch (vSwitch) is created. By default, ESXi chooses only one physical NIC as a virtual switch uplink. To maintain redundancy and meet bandwidth requirements, an additional NIC must be added, either by using the ESXi console or by connecting to the ESXi host from the vSphere Client.

Each VMware ESXi server should have multiple interface cards for each virtual network to ensure redundancy and to provide for the use of network load balancing, link aggregation, and network adapter failover. The VMware ESXi networking configuration, including load balancing, link aggregation, and failover options, is described in vSphere Networking. Refer to the list of documents in Appendix C of this guide for more information. Choose the appropriate load-balancing option based on what the network infrastructure supports.

Create VMkernel ports as required, based on the infrastructure configuration:
VMkernel port for NFS traffic (NFS variant only)
VMkernel port for VMware vMotion
Virtual desktop port groups (used by the virtual desktops to communicate on the network)

vSphere Networking describes the procedure for configuring these settings. Refer to the list of documents in Appendix C of this guide for more information.

Jumbo frames
A jumbo frame is an Ethernet frame with a payload greater than 1,500 bytes and up to approximately 9,000 bytes; this payload size is also known as the Maximum Transmission Unit (MTU). The generally accepted maximum size for a jumbo frame is 9,000 bytes. Processing overhead is proportional to the number of frames; therefore, enabling jumbo frames reduces processing overhead by reducing the number of frames to be sent, which increases network throughput.

Jumbo frames should be enabled end-to-end, including on the network switches, the ESXi servers, and the VNX storage processors.

Jumbo frames can be enabled on the ESXi server at two different levels. If all the ports on a virtual switch need to be enabled for jumbo frames, edit the MTU setting in the virtual switch properties from vCenter. If only specific VMkernel ports are to have jumbo frames enabled, edit the VMkernel port under the network properties from vCenter.

To enable jumbo frames on the VNX:
1. In Unisphere, go to Settings -> Network -> Settings for File.
2. Select the appropriate network interface under the Interfaces tab.
3. Select Properties.
4. Set the MTU size to 9000.
5. Select OK to apply the changes.

Jumbo frames may also need to be enabled on each network switch. Consult your switch configuration guide for instructions.

Connect VMware datastores
Connect the datastores configured in Prepare and configure storage array to the appropriate ESXi servers. These include the datastores configured for:
Virtual desktop storage
Infrastructure virtual machine storage (if required)
SQL Server storage (if required)

vSphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi hosts. Refer to the list of documents in Appendix C of this guide for more information.

The EMC PowerPath/VE (FC variant) and NFS VAAI (NFS variant) ESXi plug-ins must be installed after VMware vCenter has been deployed, as described in Install and configure VMware vCenter Server.

Plan virtual machine memory allocations
Server capacity is required for two purposes in the solution:
To support the new virtualized server infrastructure
To support the required infrastructure services such as authentication/authorization, DNS, and database services

For information on the minimum requirements for hosting the infrastructure services, refer to Table 3 on page 51. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.

Memory configuration
Proper sizing and configuration of the solution requires care when configuring server memory. The following section provides general guidance on memory allocation for the virtual machines, factoring in vSphere overhead and the virtual machine configuration. We begin with an overview of how memory is managed in a VMware environment.

ESX/ESXi memory management
Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources, such as memory, in order to provide resource isolation across multiple virtual machines while avoiding resource exhaustion. In cases where advanced processors (such as Intel processors with EPT support) are deployed, this abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself through a feature known as shadow page tables.

vSphere employs the following memory management techniques:
Memory overcommitment: allocation of memory resources greater than those physically available to the virtual machines.
Transparent page sharing: identical memory pages that are shared across virtual machines are merged, and duplicate pages are returned to the host free memory pool for reuse.
Memory compression: ESXi stores pages that would otherwise be swapped out to disk through host swapping in a compression cache located in main memory.
Memory ballooning: host resource exhaustion can be relieved by requesting that free pages be allocated from the virtual machine back to the host for reuse.
Hypervisor swapping: the host forces arbitrary virtual machine pages out to disk.

Additional information is available in the VMware vSphere documentation.

Virtual machine memory concepts
Figure 38 shows the parameters for memory settings in the virtual machine.

Figure 38. Virtual machine memory settings

Configured memory: physical memory allocated to the virtual machine at the time of creation.
Reserved memory: memory that is guaranteed to the virtual machine.
Touched memory: memory that is active or in use by the virtual machine.
Swappable: memory that can be de-allocated from the virtual machine if the host is under memory pressure from other virtual machines, through ballooning, compression, or swapping.

The following are the recommended best practices:
Do not disable the default memory reclamation techniques. These are lightweight processes that enable flexibility with minimal impact to workloads.
Intelligently size memory allocation for virtual machines. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources. Over-committing can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases, when hypervisor swapping occurs, virtual machine performance is likely to be adversely affected. Having performance baselines of your virtual machine workloads assists in this process.

Install and configure SQL Server database
Overview
This section describes how to set up and configure a SQL Server database for the solution. At the end of this section, you will have Microsoft SQL Server running on a virtual machine, with the databases required by VMware vCenter and XenDesktop configured for use. Table 23 identifies the tasks for the SQL Server database setup.


More information

Redefining Microsoft SQL Server Data Management. PAS Specification

Redefining Microsoft SQL Server Data Management. PAS Specification Redefining Microsoft SQL Server Data Management APRIL Actifio 11, 2013 PAS Specification Table of Contents Introduction.... 3 Background.... 3 Virtualizing Microsoft SQL Server Data Management.... 4 Virtualizing

More information

VMware vsphere Data Protection 5.8 TECHNICAL OVERVIEW REVISED AUGUST 2014

VMware vsphere Data Protection 5.8 TECHNICAL OVERVIEW REVISED AUGUST 2014 VMware vsphere Data Protection 5.8 TECHNICAL OVERVIEW REVISED AUGUST 2014 Table of Contents Introduction.... 3 Features and Benefits of vsphere Data Protection... 3 Additional Features and Benefits of

More information

EMC XTREMIO AND MICROSOFT EXCHANGE DATABASES

EMC XTREMIO AND MICROSOFT EXCHANGE DATABASES EMC XTREMIO AND MICROSOFT EXCHANGE DATABASES Preliminary findings: Efficiency of various production samples Market overview and adoption of all-flash arrays Techniques for estimating efficiency EMC Solutions

More information

VMware vsphere 5.0 Boot Camp

VMware vsphere 5.0 Boot Camp VMware vsphere 5.0 Boot Camp This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter. Assuming no prior virtualization experience, this

More information

Reference Architecture

Reference Architecture Reference Architecture EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX, VMWARE vsphere 4.1, VMWARE VIEW 4.5, VMWARE VIEW COMPOSER 2.5, AND CISCO UNIFIED COMPUTING SYSTEM Reference Architecture

More information

IMPROVING VMWARE DISASTER RECOVERY WITH EMC RECOVERPOINT Applied Technology

IMPROVING VMWARE DISASTER RECOVERY WITH EMC RECOVERPOINT Applied Technology White Paper IMPROVING VMWARE DISASTER RECOVERY WITH EMC RECOVERPOINT Applied Technology Abstract EMC RecoverPoint provides full support for data replication and disaster recovery for VMware ESX Server

More information

EMC Data Domain Management Center

EMC Data Domain Management Center EMC Data Domain Management Center Version 1.1 Initial Configuration Guide 302-000-071 REV 04 Copyright 2012-2015 EMC Corporation. All rights reserved. Published in USA. Published June, 2015 EMC believes

More information

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 11g OLTP

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 11g OLTP IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 11g OLTP EMC VSPEX Abstract This describes the high-level steps and best practices required to implement the EMC VSPEX Proven Infrastructure

More information

EMC VPLEX FAMILY. Continuous Availability and data Mobility Within and Across Data Centers

EMC VPLEX FAMILY. Continuous Availability and data Mobility Within and Across Data Centers EMC VPLEX FAMILY Continuous Availability and data Mobility Within and Across Data Centers DELIVERING CONTINUOUS AVAILABILITY AND DATA MOBILITY FOR MISSION CRITICAL APPLICATIONS Storage infrastructure is

More information

www.vce.com SAP Landscape Virtualization Management Version 2.0 on VCE Vblock System 700 series

www.vce.com SAP Landscape Virtualization Management Version 2.0 on VCE Vblock System 700 series www.vce.com SAP Landscape Virtualization Management Version 2.0 on VCE Vblock System 700 series Version 1.1 December 2014 THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." VCE MAKES NO REPRESENTATIONS

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

DVS Enterprise. Reference Architecture. VMware Horizon View Reference

DVS Enterprise. Reference Architecture. VMware Horizon View Reference DVS Enterprise Reference Architecture VMware Horizon View Reference THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED

More information

Evolving Datacenter Architectures

Evolving Datacenter Architectures Technology Insight Paper Evolving Datacenter Architectures HP technologies for Cloud ready IT By Russ Fellows January, 2013 Enabling you to make the best technology decisions Evolving Datacenter Architectures

More information

Vblock Solution for Citrix XenDesktop and XenApp

Vblock Solution for Citrix XenDesktop and XenApp www.vce.com Vblock Solution for Citrix XenDesktop and XenApp Version 1.3 April 2014 THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." VCE MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH

More information

LEVERAGE VBLOCK SYSTEMS FOR Esri s ArcGIS SYSTEM

LEVERAGE VBLOCK SYSTEMS FOR Esri s ArcGIS SYSTEM Leverage Vblock Systems for Esri's ArcGIS System Table of Contents www.vce.com LEVERAGE VBLOCK SYSTEMS FOR Esri s ArcGIS SYSTEM August 2012 1 Contents Executive summary...3 The challenge...3 The solution...3

More information

EMC Business Continuity for VMware View Enabled by EMC SRDF/S and VMware vcenter Site Recovery Manager

EMC Business Continuity for VMware View Enabled by EMC SRDF/S and VMware vcenter Site Recovery Manager EMC Business Continuity for VMware View Enabled by EMC SRDF/S and VMware vcenter Site Recovery Manager A Detailed Review Abstract This white paper demonstrates that business continuity can be enhanced

More information

CVE-401/CVA-500 FastTrack

CVE-401/CVA-500 FastTrack CVE-401/CVA-500 FastTrack Description The CVE-400-1I Engineering a Citrix Virtualization Solution course teaches Citrix engineers how to plan for and perform the tasks necessary to successfully integrate

More information

VBLOCK SOLUTION FOR SAP: HIGH AVAILABILITY FOR THE PRIVATE CLOUD

VBLOCK SOLUTION FOR SAP: HIGH AVAILABILITY FOR THE PRIVATE CLOUD Vblock Solution for SAP: High Availability for the Private Cloud Table of Contents www.vce.com VBLOCK SOLUTION FOR SAP: HIGH AVAILABILITY FOR THE PRIVATE CLOUD Version 2.0 February 2013 1 Copyright 2013

More information

Consolidate and Virtualize Your Windows Environment with NetApp and VMware

Consolidate and Virtualize Your Windows Environment with NetApp and VMware White Paper Consolidate and Virtualize Your Windows Environment with NetApp and VMware Sachin Chheda, NetApp and Gaetan Castelein, VMware October 2009 WP-7086-1009 TABLE OF CONTENTS 1 EXECUTIVE SUMMARY...

More information

Atlantis ILIO Persistent VDI for Flash Arrays Administration Guide Atlantis Computing Inc. Atlantis ILIO 4.1. Document Version 1.0

Atlantis ILIO Persistent VDI for Flash Arrays Administration Guide Atlantis Computing Inc. Atlantis ILIO 4.1. Document Version 1.0 Atlantis ILIO Persistent VDI for Flash Arrays Atlantis Computing Inc. Atlantis ILIO 4.1 Document Version 1.0 Copyrights Atlantis ILIO Persistent VDI for Flash Arrays Document Version 1.0 2007-2015 Atlantis

More information