EMC VSPEX END-USER COMPUTING

DESIGN GUIDE

EMC VSPEX END-USER COMPUTING
Enabled by EMC VNX and EMC Data Protection

EMC VSPEX

Abstract

This guide describes how to design an EMC VSPEX End-User-Computing solution for Citrix XenDesktop 7.5. EMC XtremIO, EMC VNX, and Microsoft Windows Server 2012 R2 with Hyper-V provide the storage and virtualization platforms.

November 2014

Copyright © 2014 EMC Corporation. All rights reserved. Published in the USA. Published November 2014.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Citrix XenDesktop 7.5 and Microsoft Hyper-V with EMC XtremIO, Enabled by EMC VNX and EMC Data Protection

Part Number H

Contents

Chapter 1 Introduction 9
    Purpose of this guide
    Business value
    Scope
    Audience
    Terminology

Chapter 2 Before You Start 13
    Deployment workflow
    Essential reading
        VSPEX Solution Overview
        VSPEX Implementation Guide
        VSPEX Proven Infrastructure Guide

Chapter 3 Solution Overview 15
    Overview
    VSPEX Proven Infrastructures
    Solution architecture
        High-level architecture
        Logical architecture
    Key components
    Desktop virtualization broker
        Overview
        Citrix XenDesktop
        Machine Creation Services
        Citrix Provisioning Services
        Citrix Personal vDisk
        Citrix Profile Management
    Virtualization layer
        Microsoft Hyper-V
        Microsoft System Center Virtual Machine Manager
        Microsoft Hyper-V high availability
    Compute layer
    Network layer
    Storage layer
        EMC XtremIO
        EMC VNX
        Virtualization management
    Data protection layer
    Citrix ShareFile StorageZones solution

Chapter 4 Sizing the Solution 33
    Overview
    Reference workload
    VSPEX Private Cloud requirements
        Private cloud storage layout
    VSPEX/XtremIO array configurations
        Validated XtremIO configurations
        XtremIO storage layout
        Expanding existing VSPEX end-user computing environments
    VNX array configurations
        User data storage
        VNX building block
        EMC FAST VP
        VNX shared file systems
    Choosing the appropriate reference architecture
        Using the Customer Sizing Worksheet
        Selecting a reference architecture
        Fine tuning hardware resources
        Summary

Chapter 5 Solution Design Considerations and Best Practices 43
    Overview
    Server design considerations
        Server best practices
        Validated server hardware
        Hyper-V memory virtualization
        Memory configuration guidelines
    Network design considerations
        Validated network hardware
        Network configuration guidelines
    Storage design considerations
        Overview
        Validated storage hardware and configuration
        Hyper-V storage virtualization
    High availability and failover
        Virtualization layer
        Compute layer
        Network layer
        Storage layer
    Validation test profile
        Profile characteristics
    EMC Data Protection configuration guidelines
        Data protection profile characteristics
        Data protection layout
    VSPEX for Citrix XenDesktop with ShareFile StorageZones solution
        ShareFile StorageZones architecture
        StorageZones
        Design considerations
        VSPEX for ShareFile StorageZones architecture

Chapter 6 Reference Documentation 67
    EMC documentation
    Other documentation

Appendix A Customer Sizing Worksheet 71
    Customer Sizing Worksheet for end-user computing

Figures

Figure 1. VSPEX Proven Infrastructures
Figure 2. Architecture of the validated solution
Figure 3. Logical architecture for both block and file storage
Figure 4. XenDesktop 7.5 architecture components
Figure 5. New Unisphere Management Suite
Figure 6. Optional user data storage building block for 1,750 virtual desktops
Figure 7. Compute layer flexibility
Figure 8. Hypervisor memory consumption
Figure 9. Highly-available XtremIO FC network design example
Figure 10. Highly-available VNX Ethernet network design example
Figure 11. Required networks
Figure 12. Hyper-V virtual disk types
Figure 13. High availability at the virtualization layer
Figure 14. Redundant power supplies
Figure 15. VNX Ethernet network layer high availability
Figure 16. XtremIO series high availability
Figure 17. VNX series high availability
Figure 18. ShareFile high-level architecture
Figure 19. VSPEX for Citrix XenDesktop with ShareFile StorageZones: Logical architecture
Figure 20. Printable customer sizing worksheet

Tables

Table 1. Terminology
Table 2. Deployment workflow
Table 3. Solution components
Table 4. VSPEX end-user computing: Design process
Table 5. Reference virtual desktop characteristics
Table 6. Infrastructure server minimum requirements
Table 7. XtremIO storage layout
Table 8. Example Customer Sizing Worksheet
Table 9. Reference virtual desktop resources
Table 10. Server resource component totals
Table 11. Server hardware
Table 12. Minimum switching capacity
Table 13. Storage hardware
Table 14. Validated environment profile
Table 15. Data protection profile characteristics
Table 16. Recommended VNX storage for ShareFile StorageZones CIFS share
Table 17. Customer Sizing Worksheet


Chapter 1 Introduction

This chapter presents the following topics:

Purpose of this guide
Business value
Scope
Audience
Terminology

Purpose of this guide

The EMC VSPEX End-User-Computing Proven Infrastructure provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This VSPEX End-User-Computing solution for Citrix XenDesktop 7.5 runs on a Microsoft Windows Server 2012 R2 with Hyper-V virtualization layer backed by the highly available EMC XtremIO family, which provides the storage. In this solution, the desktop virtualization infrastructure components are layered on a VSPEX Private Cloud that uses a Microsoft Hyper-V Proven Infrastructure, while the desktops are hosted on dedicated resources. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual desktop environment. XtremIO solutions provide storage for virtual desktops, EMC VNX solutions provide storage for user data, and EMC Avamar data protection solutions provide data protection for Citrix XenDesktop data.

This VSPEX End-User-Computing solution is validated for up to 1,750 Citrix XenDesktop Machine Creation Services (MCS) linked clone or Citrix Provisioning Services (PVS) streamed virtual desktops on an XtremIO Starter X-Brick, and up to 3,500 MCS linked clone or PVS streamed virtual desktops on an X-Brick. These validated configurations are based on a reference desktop workload and form the basis for creating cost-effective, custom solutions for individual customers.

An end-user-computing or virtual desktop infrastructure is a complex system offering. This guide describes how to design an end-user-computing solution according to best practices for Citrix XenDesktop for Microsoft Hyper-V enabled by XtremIO, VNX, and EMC Data Protection.

Business value

Employees are more mobile than ever, and they expect access to business-critical data and applications from any location and any device. They want the flexibility to bring their own devices to work, which means IT departments are increasingly investigating and supporting Bring Your Own Device (BYOD) initiatives. This adds layers of complexity to safeguarding sensitive information, and deploying virtual desktops is one way to address that challenge.

Implementing large-scale virtual desktop environments presents many challenges, however. Administrators must rapidly roll out persistent or non-persistent desktops for all users (task workers, knowledge workers, and power users) while offering an outstanding user experience that outperforms physical desktops. In addition to performance, a virtual desktop solution must be simple to deploy, manage, and scale, with substantial cost savings over physical desktops. Storage is also a critical component of an effective virtual desktop solution.

EMC VSPEX Proven Infrastructures are designed to help you address the most serious of IT challenges by creating solutions that are simple, efficient, and flexible, and designed to take advantage of the many possibilities that XtremIO's flash technology offers.

The business benefits of the VSPEX End-User-Computing solution for Citrix XenDesktop include:

An end-to-end virtualization solution that uses the capabilities of the unified infrastructure components
Efficient virtualization for varied customer use cases, such as:
    A Starter X-Brick supporting up to 1,750 MCS linked clone or PVS streamed virtual desktops
    An X-Brick supporting up to 3,500 MCS linked clone or PVS streamed virtual desktops
Reliable, flexible, and scalable reference architectures

Scope

This guide describes how to plan a simple, effective, and flexible VSPEX End-User-Computing solution for Citrix XenDesktop 7.5. It provides a deployment example of virtual desktop storage on XtremIO and user data storage on a VNX storage array. The same principles and guidelines apply to the XtremIO and VNX arrays that have been validated as part of the EMC VSPEX program. The desktop virtualization infrastructure components of the solution are layered on a VSPEX Private Cloud that uses a Microsoft Hyper-V Proven Infrastructure. This guide illustrates how to size XenDesktop on the VSPEX infrastructure, allocate resources following best practices, and use all the benefits that VSPEX offers.

Audience

This guide is intended for internal EMC personnel and qualified EMC VSPEX Partners. The guide assumes that VSPEX partners who intend to deploy this VSPEX Proven Infrastructure for Citrix XenDesktop have the necessary training and background to install and configure an end-user-computing solution based on Citrix XenDesktop with Microsoft Hyper-V as the hypervisor, XtremIO and VNX series storage systems, and associated infrastructure. Readers should also be familiar with the infrastructure and database security policies of the customer installation.

This guide provides external references where applicable. EMC recommends that partners implementing this solution are familiar with these documents. For details, see Essential reading and Chapter 6: Reference Documentation.

Terminology

Table 1 lists the terminology used in this guide.

Table 1. Terminology

Data deduplication: Reduces physical storage utilization by eliminating redundant blocks of data.

PVS streamed desktops: Desktops provisioned to stream their base image data from Citrix Provisioning Services (PVS) servers, writing any changes to a local write cache or Citrix Personal vDisk, depending on the desktop configuration.

MCS linked clones: Desktops provisioned to share a common base image within a desktop pool, thereby having a minimal storage footprint.

Reference architecture: The validated architecture that supports this VSPEX end-user-computing solution at different points of scale.

Reference workload: For VSPEX end-user-computing solutions, the reference workload is defined as a single virtual desktop (the reference virtual desktop) with the workload characteristics indicated in Table 5. By comparing the customer's actual usage to this reference workload, you can determine which reference architecture to choose as the basis for the customer's VSPEX deployment. Refer to Reference workload for details.

Storage processor (SP): The compute component of the VNX storage array. SPs are used for all aspects of moving data into, out of, and between VNX arrays.

Storage controller (SC): The compute component of the XtremIO storage array. SCs are used for all aspects of moving data into, out of, and between XtremIO arrays.

Virtual Desktop Infrastructure (VDI): Decouples the desktop from the physical machine. In a VDI environment, the desktop OS and applications reside inside a virtual machine running on a host computer, with data residing on shared storage. Users access their virtual desktop from any computer or mobile device over a private network or Internet connection.

Chapter 2 Before You Start

This chapter presents the following topics:

Deployment workflow
Essential reading

Deployment workflow

To design and implement your end-user computing solution, refer to the process flow in Table 2.

Table 2. Deployment workflow

Step 1: Use the Customer Sizing Worksheet to collect customer requirements. Refer to Appendix A of this guide.

Step 2: Use the EMC VSPEX Sizing Tool to determine the recommended VSPEX reference architecture for your end-user-computing solution, based on the customer requirements collected in Step 1. For more information about the Sizing Tool, refer to the EMC VSPEX Sizing Tool portal. Note: If the Sizing Tool is not available, you can manually size the application using the guidelines in Chapter 4.

Step 3: Use this guide to determine the final design for your VSPEX solution. Note: Ensure that all resource requirements are considered, not just the requirements for end-user computing.

Step 4: Select and order the right VSPEX reference architecture and Proven Infrastructure. Refer to the VSPEX Proven Infrastructure Guide in Essential reading for guidance on selecting a Private Cloud Proven Infrastructure.

Step 5: Deploy and test your VSPEX solution. Refer to the VSPEX Implementation Guide in Essential reading for guidance.

Essential reading

EMC recommends that you read the following documents, available from the VSPEX space in the EMC Community Network, from EMC.com, or from the VSPEX Proven Infrastructure partner portal:

VSPEX Solution Overview: Refer to the EMC VSPEX End User Computing Solution Overview.

VSPEX Implementation Guide: Refer to the EMC VSPEX End-User Computing: Citrix XenDesktop 7.5 and Microsoft Hyper-V with EMC XtremIO Implementation Guide.

VSPEX Proven Infrastructure Guide: Refer to the EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 1,000 Virtual Machines Proven Infrastructure Guide.
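When the Sizing Tool is unavailable, the manual sizing in Step 2 amounts to mapping the required desktop count onto the smallest validated configuration that covers it. The sketch below illustrates that decision using only the two validated scale points named in this guide; the function and its structure are illustrative and are not part of the EMC Sizing Tool.

```python
# Illustrative sketch only: maps a desktop count onto the two validated
# XtremIO scale points named in this guide. Not part of the EMC Sizing Tool.

VALIDATED_CONFIGS = [
    ("Starter X-Brick", 1750),  # up to 1,750 MCS linked clone or PVS streamed desktops
    ("X-Brick", 3500),          # up to 3,500 MCS linked clone or PVS streamed desktops
]

def select_reference_architecture(desktop_count: int) -> str:
    """Return the smallest validated configuration that covers the requirement."""
    if desktop_count <= 0:
        raise ValueError("desktop count must be positive")
    for name, max_desktops in VALIDATED_CONFIGS:
        if desktop_count <= max_desktops:
            return name
    raise ValueError(
        f"{desktop_count} desktops exceeds the validated maximum of 3,500; "
        "consult VSPEX partner sizing guidance"
    )

print(select_reference_architecture(1200))  # Starter X-Brick
print(select_reference_architecture(2400))  # X-Brick
```

In practice, the desktop count fed into this decision comes from the Customer Sizing Worksheet in Appendix A, after the customer workload has been expressed in reference virtual desktops per Chapter 4.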

Chapter 3 Solution Overview

This chapter presents the following topics:

Overview
VSPEX Proven Infrastructures
Solution architecture
Key components
Desktop virtualization broker
Virtualization layer
Compute layer
Network layer
Storage layer
Data protection layer
Citrix ShareFile StorageZones solution

Overview

This chapter provides an overview of the VSPEX End-User-Computing solution and the key technologies used in the solution. The solution has been designed and proven by EMC to provide the desktop virtualization, server, network, storage, and data protection resources to support reference architectures of up to 1,750 virtual desktops for a Starter X-Brick, and up to 3,500 virtual desktops for an X-Brick.

Although the desktop virtualization infrastructure components of the solution shown in Figure 3 are designed to be layered on a VSPEX Private Cloud solution, the reference architectures do not include configuration details for the underlying Proven Infrastructure. Refer to the VSPEX Proven Infrastructure Guide in Essential reading for information on configuring the required infrastructure components.

VSPEX Proven Infrastructures

EMC has joined forces with IT infrastructure providers to create a complete virtualization solution that accelerates the deployment of the private cloud and Citrix XenDesktop virtual desktops. VSPEX enables customers to accelerate their IT transformation with faster deployment, greater simplicity and choice, higher efficiency, and lower risk, compared to the challenges and complexity of building an IT infrastructure themselves. VSPEX validation by EMC ensures predictable performance and enables customers to select technology that uses their existing or newly acquired IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a virtual infrastructure for customers who want the simplicity characteristic of truly converged infrastructures, with more choice in individual stack components.

VSPEX Proven Infrastructures, as shown in Figure 1, are modular, virtualized infrastructures validated by EMC and delivered by EMC VSPEX partners. They include virtualization, server, network, storage, and data protection layers. Partners can choose the virtualization, server, and network technologies that best fit a customer's environment, while the highly available XtremIO and VNX storage systems and EMC Data Protection technologies provide the storage and data protection layers.

Figure 1. VSPEX Proven Infrastructures

Solution architecture

High-level architecture

The VSPEX End-User-Computing solution for Citrix XenDesktop provides a complete system architecture capable of supporting up to 1,750 MCS linked clone or PVS streamed virtual desktops for a Starter X-Brick, and up to 3,500 MCS linked clone or PVS streamed virtual desktops for an X-Brick. The solution supports block storage for virtual desktops, and optional file storage for user data.

Figure 2 shows the high-level architecture of the validated solution.

Figure 2. Architecture of the validated solution

The solution uses EMC XtremIO, EMC VNX, and Microsoft Hyper-V to provide the storage and virtualization platforms for a Citrix XenDesktop environment of Microsoft Windows 8.1 virtual desktops provisioned by Citrix XenDesktop MCS or Citrix PVS.

For the solution, we[1] deployed an XtremIO array in multiple configurations (Starter X-Brick and X-Brick) to support up to 3,500 virtual desktops. We also deployed a VNX array for hosting user data. The highly available XtremIO array provides the storage for the desktop virtualization components. The infrastructure services for the solution, as shown in Figure 2, can be provided by existing infrastructure at the customer site, by the VSPEX Private Cloud, or by deploying them as dedicated resources as part of the solution. The virtual desktops require dedicated end-user-computing resources and are not intended to be layered on a VSPEX Private Cloud.

[1] In this guide, "we" refers to the EMC Solutions engineering team that validated the solution.

Planning and designing the storage infrastructure for a Citrix XenDesktop environment is critical because the shared storage must be able to absorb large

bursts of I/O that occur during the day. These bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users can adapt to slow performance, but unpredictable performance frustrates users and reduces efficiency.

To provide predictable performance for end-user-computing solutions, the storage system must be able to handle the peak I/O load from the clients while keeping response time to a minimum. This solution uses the XtremIO array to provide the sub-millisecond response times the clients require, while the real-time, inline deduplication features of the platform reduce the amount of physical storage needed.

EMC Data Protection solutions enable user data protection and end-user recoverability. This Citrix XenDesktop solution uses Avamar and its desktop client to achieve this.

Logical architecture

The VSPEX End-User-Computing solution for Citrix XenDesktop supports block storage for the virtual desktops and optional file storage for user data. Figure 3 shows the logical architecture of the solution for both variants.

Figure 3. Logical architecture for both block and file storage

This solution uses two networks: a storage network for carrying virtual desktop and virtual server OS data, and a 10 Gb Ethernet (GbE) network for carrying all other traffic. The storage network can use 8 Gb FC, 10 Gb CEE with FCoE, or 10 GbE with the iSCSI protocol.

Note: The solution also supports 1 Gb Ethernet if the bandwidth requirements are met.
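One rough way to evaluate the 1 GbE note above is to compare the aggregate storage throughput of the desktop population with the usable capacity of a 1 GbE link. The sketch below does exactly that; the per-desktop IOPS, I/O size, and link derating factor are illustrative placeholders, not values from this guide, and should be replaced with the measured workload from the Customer Sizing Worksheet.

```python
# Back-of-the-envelope check for the "1 GbE if bandwidth requirements are met"
# note. All per-desktop numbers and the derating factor are illustrative
# placeholders, not figures from this guide.

def required_throughput_mbps(desktops: int, iops_per_desktop: float,
                             io_size_kb: float) -> float:
    """Aggregate storage throughput in megabits per second."""
    bytes_per_sec = desktops * iops_per_desktop * io_size_kb * 1024
    return bytes_per_sec * 8 / 1_000_000

LINK_CAPACITY_MBPS = 1000 * 0.8  # 1 GbE derated 20% for protocol overhead (assumed)

demand = required_throughput_mbps(desktops=1750, iops_per_desktop=8, io_size_kb=16)
print(f"steady-state demand: {demand:.0f} Mb/s, link budget: {LINK_CAPACITY_MBPS:.0f} Mb/s")
# With these placeholder numbers, demand exceeds the 1 GbE budget.
print("1 GbE sufficient" if demand <= LINK_CAPACITY_MBPS else "use 10 GbE")
```

A check like this covers only steady-state throughput; the boot and login storms discussed above can be far higher, which is why the validated design specifies 10 GbE.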

Key components

This section provides an overview of the key technologies used in this solution, as outlined in Table 3.

Table 3. Solution components

Desktop virtualization broker: Manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software is critical to enable on-demand creation of desktop images, allow maintenance to the image without affecting user productivity, and prevent the environment from growing in an unconstrained way. The desktop broker in this solution is Citrix XenDesktop 7.5.

Virtualization layer: Enables the physical implementation of resources to be decoupled from the applications that use them. In other words, the application's view of the resources available is no longer directly tied to the hardware. This enables many key features in the end-user-computing concept. This solution uses Microsoft Hyper-V for the virtualization layer.

Compute layer: Provides memory and processing resources for the virtualization layer software and for the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resources required but enables the customer to implement the requirements using any server hardware that meets these requirements.

Network layer: Connects the users of the environment to the resources they need and connects the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but enables the customer to implement the requirements using any network hardware that meets these requirements.

Storage layer: A critical resource for the implementation of the end-user-computing environment, the storage layer must be able to absorb large bursts of activity as they occur without unduly affecting the user experience. This solution uses XtremIO and VNX arrays to efficiently handle this workload.

Data protection: An optional solution component that provides data protection if data in the primary system is deleted, damaged, or otherwise unusable. This solution uses Avamar for data protection.

Citrix ShareFile StorageZones solution: Optional support for Citrix ShareFile StorageZones deployments.

Desktop virtualization broker

Overview

Desktop virtualization encapsulates and hosts desktop services on centralized computing resources at remote data centers. This enables end users to connect to their virtual desktops from different types of devices across a network connection. Devices can include desktops, laptops, thin clients, zero (ultra-thin) clients, smart phones, and tablets.

In this solution, we used Citrix XenDesktop to provision, manage, broker, and monitor the desktop virtualization environment.

Citrix XenDesktop 7.5

XenDesktop is the desktop virtualization solution from Citrix that enables virtual desktops to run on the Hyper-V virtualization environment. Citrix XenDesktop 7.5 integrates Citrix XenApp application delivery technologies and XenDesktop desktop virtualization technologies into a single architecture and management experience. This architecture unifies both management and delivery components to enable a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as secure mobile services to users anywhere on any device.

Figure 4 shows the XenDesktop 7.5 architecture components.

Figure 4. XenDesktop 7.5 architecture components

The XenDesktop 7.5 architecture includes the following components:

Citrix Director: A web-based tool that enables IT support and help desk teams to monitor an environment, troubleshoot issues before they become system-critical, and perform support tasks for end users.

Citrix Receiver: Installed on user devices, Citrix Receiver provides users with quick, secure, self-service access to documents, applications, and desktops from any of their devices, including smart phones, tablets, and computers. Receiver provides on-demand access to Windows, web, and software as a service (SaaS) applications.

Citrix StoreFront: Provides authentication and resource delivery services for Citrix Receiver. It enables centralized control of resources and provides users with on-demand, self-service access to their desktops and applications.

Citrix Studio: Enables the configuration and management of the deployment, eliminating the need for separate consoles for managing delivery of applications and desktops. Studio provides wizards to guide you through the process of setting up your environment, creating your workloads to host applications and desktops, and assigning applications and desktops to users.

Delivery Controller: Installed on servers in the data center, Delivery Controller consists of services that communicate with the hypervisor to distribute applications and desktops, authenticate and manage user access, and broker connections between users and their virtual desktops and applications. Delivery Controller manages the state of the desktops, starting and stopping them based on demand and administrative configuration. In some editions, the controller enables you to install profile management to manage user personalization settings in virtualized or physical Windows environments.

License Server: Assigns user or device licenses to the XenDesktop environment. License Server can be installed along with other Citrix XenDesktop components or on a separate virtual or physical machine.

Virtual Delivery Agent (VDA): Installed on server or workstation operating systems (OSs), the VDA enables connections for desktops and applications. For remote computer access, you install the VDA on your office computer.

Server OS machines: Virtual machines or physical machines, based on the Windows Server OS, used for delivering applications or hosted-shared desktops (HSDs) to users.

Desktop OS machines: Virtual machines or physical machines, based on a Windows desktop OS, used for delivering personalized desktops to users, or applications from desktop operating systems.

Remote PC Access: Enables users to access resources on their office computers remotely, from any device running Citrix Receiver.

Machine Creation Services

Machine Creation Services (MCS) is a provisioning mechanism that is integrated with the XenDesktop management interface, Citrix Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management. MCS enables several types of machines to be managed within a catalog in Citrix Studio. Desktop customization is persistent for machines that use the Personal vDisk (PvD) feature, while non-PvD machines are appropriate if desktop changes are to be discarded when the user logs off.

Citrix Provisioning Services

Citrix Provisioning Services (PVS) takes a different approach from traditional desktop imaging solutions by fundamentally changing the relationship between hardware and the software that runs on it. By streaming a single shared disk image (vDisk) instead of copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage. As the number of machines continues to grow, PVS provides the efficiency of centralized management with the benefits of distributed processing. Because machines stream disk data dynamically in real time from a single shared image, machine image consistency is ensured. In addition, large pools of machines can completely change their configuration, applications, and even OS during a reboot operation.

Citrix Personal vDisk

The Citrix PvD feature enables users to preserve customization settings and user-installed applications in a pooled desktop by redirecting the changes from the user's pooled virtual machine to a separate PvD. During runtime, the content of the PvD is blended with the content from the base virtual machine to provide a unified experience to the end user. The PvD data is preserved during reboot and refresh operations.

Citrix Profile Management

Citrix Profile Management preserves user profiles and dynamically synchronizes them with a remote profile repository. Profile Management downloads a user's remote profile dynamically when the user logs in to XenDesktop, and applies personal settings to desktops and applications regardless of the user's login location or client device. The combination of Profile Management and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization.

Virtualization layer

Microsoft Hyper-V

Microsoft Hyper-V provides a complete virtualization platform that provides flexibility and cost savings by enabling the consolidation of large, inefficient server farms into nimble and reliable cloud infrastructures. The core Microsoft virtualization components are the Microsoft Hyper-V hypervisor and the Microsoft System Center Virtual Machine Manager for system management.

The Hyper-V hypervisor transforms a computer's physical resources by virtualizing the CPU, memory, storage, and network. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications, just like physical computers do. Hyper-V runs on a dedicated server and enables multiple operating systems to execute simultaneously on the system as virtual machines. Microsoft clustered services enable multiple Hyper-V servers to operate in a clustered configuration. The Hyper-V cluster configuration is managed as a larger resource pool through the Microsoft System Center Virtual Machine Manager. This enables dynamic allocation of CPU, memory, and storage across the cluster.

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a scalable, extensible, centralized management platform for the Hyper-V infrastructure. It provides administrators with a single interface, accessible from multiple devices, for all aspects of monitoring, managing, and maintaining the virtual infrastructure.

Microsoft Hyper-V high availability

Microsoft Hyper-V's high-availability features, such as Failover Clustering, Live Migration, and Storage Migration, enable seamless migration of virtual machines and stored files from one Hyper-V server to another with minimal or no performance impact.

Hyper-V Failover Clustering enables the virtualization layer to automatically restart virtual machines in various failure conditions. If the physical hardware has an error, the impacted virtual machines can be restarted automatically on other servers in the cluster. You can configure policies to determine which machines are restarted automatically and under what conditions these operations are performed.

Note: For Hyper-V Failover Clustering to restart virtual machines on different hardware, those servers must have resources available. The Server design considerations section provides specific recommendations to enable this functionality.

Live Migration provides migration of virtual machines within clustered and non-clustered servers with no virtual machine downtime or service disruption. Storage Migration provides migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption.

25 Chapter 3: Solution Overview Compute layer VSPEX defines the minimum amount of compute layer resources required, but enables the customer to implement the requirements using any server hardware that meets these requirements. For details, refer to Chapter 5. Network layer VSPEX defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but enables the customer to implement the requirements using any network hardware that meets these requirements. For details, refer to Chapter 5. Storage layer The storage layer is a key component of any cloud infrastructure solution that serves data generated by applications and operating systems in a data center storage processing system. This VSPEX solution uses EMC XtremIO storage arrays to provide virtualization at the storage layer. The XtremIO platform provides the required storage performance, increases storage efficiency and management flexibility, and reduces total cost of ownership. This solution also uses the EMC VNX family arrays to provide storage for user data. EMC XtremIO The EMC XtremIO all-flash array is deployed in one of two specialized configurations known as either a Starter X-Brick or an X-Brick, and is designed to maximize the use of flash storage media. 
Key attributes of the XtremIO platform are:

- Extremely high levels of I/O performance, particularly for random I/O workloads that are typical in virtualized environments
- Consistently low (sub-millisecond) latency
- True inline data reduction, which is the ability to remove redundant information in the data path and write only unique data on the storage array, thus reducing the capacity required
- A full suite of enterprise array capabilities, including N-way active controllers, high availability, strong data protection, writeable snapshots, and thin provisioning

The XtremIO array is a scale-out design, in which additional performance and capacity are added in a building-block approach, with all blocks forming a single clustered system. XtremIO storage includes the following components:

- Host adapter ports: Provide host connectivity through fabric into the array.
- Storage controllers (SCs): The compute component of the XtremIO storage array. SCs handle all aspects of data moving into, out of, and between arrays.
- Disk drives: Solid-state drives (SSDs) that contain the host/application data, and their enclosures.

- InfiniBand switches: A switched, high-throughput, low-latency communications link used in multi-X-Brick configurations. InfiniBand also provides quality of service, scalability, and failover capability.

XtremIO Operating System (XIOS)

The XtremIO storage cluster is managed by the powerful XtremIO Operating System (XIOS), which ensures that the system remains balanced and always delivers the highest levels of performance without any administrator intervention. XIOS ensures that all SSDs in the system are evenly loaded, providing the highest possible performance, as well as endurance that stands up to demanding workloads for the entire life of the array.

XIOS eliminates the complex configuration tasks that need to be performed on traditional arrays. There is no need to set RAID levels, determine drive group sizes, set stripe widths and caching policies, or build aggregates. With XIOS, every volume is automatically and optimally configured at all times. I/O performance on existing volumes and data sets automatically increases with large cluster sizes. Every volume is capable of receiving the full performance potential of the entire XtremIO system.

Standards-based enterprise storage system

The XtremIO system interfaces with Hyper-V hosts using standard Fibre Channel (FC) and iSCSI block interfaces. The system supports complete high-availability features, including support for native Hyper-V multipath I/O, protection against failed SSDs, non-disruptive software and firmware upgrades, no single point of failure (SPOF), and hot-swappable components.

Real-time, inline data reduction

The XtremIO storage system deduplicates desktop images in real time, enabling very large numbers of virtual desktops to reside in a small and economical amount of flash capacity.
Also, data reduction on the XtremIO array does not adversely affect input/output operations per second (IOPS) or latency performance, but actually enhances the performance of the end-user computing environment.

Agile writeable snapshots

XtremIO snapshots are purpose-built for flash and optimize the use of array memory and SSD space, enabling businesses to achieve petabyte-scale effective capacity in an optimal footprint with unprecedented performance throughout the entire application life cycle. XtremIO snapshots are unique in that they are:

- Instantly created as full-performance, writeable copies
- Space-efficient, with neither data nor metadata bloat
- Enabled with full data services such as inline deduplication and compression, encryption, and thin provisioning
- Unlimited in topology: you can make snapshots of snapshots of snapshots

With these features and capabilities, XtremIO snapshots enable the consolidation of different application silos into a smaller footprint, faster application development and deployment (by giving each engineer a high-performance development/test sandbox), and the creation of on-demand analytics and reporting.

Scale-out design

The X-Brick is the fundamental building block of a scaled-out XtremIO clustered system. Virtual desktop deployments can start small with a Starter X-Brick, can be upgraded to an X-Brick, and can then be expanded to nearly any scale by adding further X-Bricks. System capacity and performance expand linearly as building blocks are added, making EUC sizing and management of future growth extremely simple.

Massive performance

The XtremIO array is designed to handle very high, sustained levels of small, random, mixed read and write I/O, as is typical of virtual desktops, and to do so with consistent, extremely low latency.

Ease of use

The XtremIO storage system requires only a few basic setup steps that can be completed in minutes, and absolutely no tuning or ongoing administration is needed to achieve and maintain high performance levels. In fact, you can take the XtremIO system from shipping box to deployment readiness in less than an hour.

Data center economics

Up to 3,500 desktops are easily supported on a single X-Brick (1,750 on a Starter X-Brick), and the compact device requires just a few rack units of space and approximately 750 W of power.

EMC VNX

The EMC VNX flash-optimized unified storage platform is ideal for storing user data and Windows profiles in a Citrix XenDesktop infrastructure, and delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution.
Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments.

VNX storage includes the following components:

- Host adapter ports (for block): Provide host connectivity through fabric into the array.
- Data Movers (for file): Front-end appliances that provide file services to hosts (optional if providing CIFS/SMB or NFS services).
- Storage processors (SPs): The compute component of the storage array. SPs handle all aspects of data moving into, out of, and between arrays.
- Disk drives: Disk spindles and solid-state drives that contain the host/application data, and their enclosures.

Note: Data Mover refers to a VNX hardware component, which has a CPU, memory, and input/output (I/O) ports. It enables the CIFS (SMB) and NFS protocols on the VNX array.

EMC VNX series

VNX includes many features and enhancements designed and built on the success of the first generation, including:

- More capacity and better optimization with the VNX MCx technology components: Multicore Cache, Multicore RAID, and Multicore Fully Automated Storage Tiering (FAST) Cache
- Greater efficiency with a flash-optimized hybrid array
- Better protection by increasing application availability with active/active storage processors
- Easier administration and deployment with the new EMC Unisphere Management Suite

VSPEX is built with VNX to deliver even greater efficiency, performance, and scale than ever before.

Flash-optimized hybrid array

VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. Flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high-performance scalability.

The EMC FAST Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives. It also boosts the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance. Data generally is accessed most frequently at the time it is created; therefore, new data is first stored on flash drives to provide the best performance. As the data ages and becomes less active over time, FAST VP tiers the data from high-performance to high-capacity drives automatically, based on customer-defined policies.
This functionality has been enhanced with four times better granularity and with new FAST VP SSDs based on enterprise multilevel cell (eMLC) technology to lower the cost per gigabyte.

FAST Cache uses flash drives as an expanded cache layer for the array to dynamically absorb unpredicted spikes in system workloads. Frequently accessed data is copied to the FAST Cache in 64 KB increments. Subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives, dramatically improving the response times for the active data and reducing data hot spots that can occur within the LUN.

All VSPEX use cases benefit from the increased efficiency provided by the FAST Suite. Furthermore, VNX provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.

Unisphere Management Suite

EMC Unisphere is the central management platform for the VNX series, providing a single, combined view of file and block systems, with all features and functions available through a common interface. Unisphere is optimized for virtual applications and provides Hyper-V integration, automatically discovering virtual machines and ESX servers and providing end-to-end, virtual-to-physical mapping. Unisphere also simplifies the configuration of FAST Cache and FAST VP on VNX platforms.

The Unisphere Management Suite extends the easy-to-use interface of Unisphere to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 5, the suite also includes Unisphere Remote for centrally managing thousands of VNX and VNXe systems, with new support for EMC XtremCache.

Figure 5. EMC Unisphere Management Suite

EMC VNX Virtual Provisioning

EMC VNX Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide predictable high performance for your applications. Both LUN types benefit from the ease-of-use features of pool-based provisioning.

Pools and pool LUNs are the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and user-capacity threshold setting.
VNX file shares

In many environments, it is important to have a common location in which to store files accessed by many users. CIFS or NFS file shares, which are available from a file server, provide this ability. VNX storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency improvement features. For more information about VNX file shares, refer to EMC VNX Series: Configuring and Managing CIFS on VNX.

EMC SnapSure

EMC SnapSure is a VNX software feature that lets you create and manage checkpoints, which are point-in-time logical images of a production file system (PFS).

SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks; when a block within the PFS is modified, a copy containing the block's original contents is saved to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks from the PFS in the SavVol, and the unchanged PFS blocks remaining in the PFS, according to a bitmap and blockmap data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint.

A checkpoint reflects the state of a PFS at the time the checkpoint is created. SnapSure supports the following checkpoint types:

- Read-only checkpoints: Read-only file systems created from a PFS
- Writeable checkpoints: Read/write file systems created from a read-only checkpoint

SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.

Note: Each writeable checkpoint is associated with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint.

For more details, refer to Using VNX SnapSure.

Virtualization management

EMC Storage Integrator for Windows

EMC Storage Integrator (ESI) for Windows is a management interface that lets you view, provision, and manage block and file storage for Windows environments.
ESI simplifies the process for creating and provisioning storage to Hyper-V servers as a local disk or a mapped share. ESI also supports storage discovery and provisioning through PowerShell. For more information, refer to the ESI for Windows documentation, available on EMC Online Support.

Data protection layer

Backup and recovery provides data protection by backing up data files or volumes to defined schedules and restoring data from the backup if recovery is needed after a disaster. Avamar delivers the protection confidence needed to accelerate deployment of VSPEX end-user-computing solutions.

Avamar empowers administrators to centrally back up and manage policies and end-user-computing infrastructure components, while enabling end users to efficiently recover their own files from a simple and intuitive web-based interface. By moving only new, unique sub-file data segments, Avamar delivers fast full backups daily, with up to 90 percent reduction in backup times, while reducing the required daily network bandwidth by up to 99 percent. All Avamar recoveries are single-step for simplicity.

With Avamar, you can choose to back up virtual desktops using either image-level or guest-based operations. Avamar runs the deduplication engine at the virtual machine disk (VMDK) level for image backups and at the file level for guest-based backups.

Image-level protection enables backup clients to make a copy of all the virtual disks and configuration files associated with a particular virtual desktop in the event of hardware failure, corruption, or accidental deletion. Avamar significantly reduces the backup and recovery time of the virtual desktop by using change block tracking (CBT) on both backup and recovery.

Guest-based protection runs like traditional backup solutions. Guest-based backup can be used on any virtual machine running an OS for which an Avamar backup client is available. It enables fine-grained control over the content and inclusion/exclusion patterns. This can be used to prevent data loss due to user errors, such as accidental file deletion. Installing the desktop/laptop agent on the system to be protected enables end-user, self-service recovery of data.

Citrix ShareFile StorageZones solution

Citrix ShareFile is a cloud-based file sharing and storage service that provides enterprise-class storage and security. ShareFile enables users to securely share documents with other users.
ShareFile users include employees and users who are outside the enterprise directory (referred to as clients). ShareFile StorageZones enables businesses to share files across the organization while meeting compliance and regulatory concerns.

StorageZones enables customers to keep their data on their own on-premises storage systems. It facilitates the sharing of large files with full encryption and enables the synchronization of files across multiple devices. By keeping data on-premises and closer to users than data residing in the public cloud, StorageZones can provide improved performance and security.

The main features available to ShareFile StorageZones users are:

- Use of StorageZones with, or instead of, ShareFile-managed cloud storage.
- Ability to configure Citrix CloudGateway Enterprise to integrate ShareFile services with Citrix Receiver for user authentication and user provisioning.
- Automated reconciliation between the ShareFile cloud and an organization's StorageZones deployment.
- Automated antivirus scans of uploaded files.
- File recovery from Storage Center backup (Storage Center is the server component of StorageZones). StorageZones enables you to browse the file records for a particular date and time and tag any files and folders to restore from Storage Center backup.

With additional infrastructure, the VSPEX End-User Computing for Citrix XenDesktop solution supports ShareFile StorageZones with Storage Center.

Chapter 4: Sizing the Solution

This chapter presents the following topics:

- Overview
- Reference workload
- VSPEX Private Cloud requirements
- VSPEX XtremIO array configurations
- VNX array configurations
- Choosing the appropriate reference architecture

Overview

This chapter describes how to design a VSPEX End-User Computing for Citrix XenDesktop solution and how to size it to fit the customer's needs. It introduces the concepts of a reference workload, building blocks, and validated end-user-computing maximums, and describes how to use these to design your solution.

Table 4 outlines the high-level steps you need to complete when sizing the solution.

Table 4. VSPEX end-user computing: Design process

Step 1: Use the Customer Sizing Worksheet in Appendix A to collect the customer requirements for the end-user-computing environment.

Step 2: Use the EMC VSPEX Sizing Tool to determine the recommended VSPEX reference architecture for your end-user-computing solution, based on the customer requirements collected in Step 1.

Note: If the Sizing Tool is not available, you can manually size the end-user-computing solution using the guidelines in this chapter.

Reference workload

VSPEX defines a reference workload to represent a unit of measure for quantifying the resources in the solution reference architectures. By comparing the customer's actual usage to this reference workload, you can determine which reference architecture to choose as the basis for the customer's VSPEX deployment.

For VSPEX end-user-computing solutions, the reference workload is defined as a single virtual desktop (the reference virtual desktop) with the workload characteristics listed in Table 5. To determine the equivalent number of reference virtual desktops for a particular resource requirement, use the VSPEX Customer Sizing Worksheet to convert the total actual resources required for all desktops into the reference virtual desktop format.

Table 5. Reference virtual desktop characteristics

- Virtual desktop OS: Microsoft Windows 8.1 Enterprise Edition (32-bit)
- Virtual processors per virtual desktop: 1
- RAM per virtual desktop: 2 GB
- Average storage available for each MCS linked-clone virtual desktop: 6 GB (VMDK and VSWP)
- Average storage available for each PVS-streamed virtual desktop: 20 GB (PVS write cache and personal vDisk VMDK and VSWP)
- Average IOPS per virtual desktop at steady state: 10

This desktop definition is based on user data that resides on shared storage. The I/O profile is defined by using a test framework that runs all desktops concurrently with a steady load generated by the constant use of office-based applications such as browsers and office productivity software.

VSPEX Private Cloud requirements

This VSPEX End-User Computing Proven Infrastructure requires multiple application servers. Unless otherwise specified, all servers use Microsoft Windows Server 2012 R2 as the base OS. Table 6 lists the minimum requirements of each infrastructure server required.

Table 6. Infrastructure server minimum requirements (CPU, RAM, IOPS, and storage capacity)

- Domain controllers (each): 2 vCPUs
- SQL Server: 2 vCPUs
- SCVMM server: 2 vCPUs
- Citrix XenDesktop Controllers (each): 2 vCPUs
- Citrix PVS servers (each): 4 vCPUs

VSPEX for Citrix XenDesktop with ShareFile StorageZones solution on page 62 provides the requirements for the optional Citrix ShareFile component.

Private cloud storage layout

This solution requires a 1.5 TB volume to host the infrastructure virtual machines, which can include the Microsoft SCVMM server, Citrix XenDesktop Controllers, Citrix PVS servers, optional Citrix ShareFile servers, Microsoft Active Directory Server, and Microsoft SQL Server.

VSPEX/XtremIO array configurations

We validated the VSPEX/XtremIO end-user-computing configurations on the Starter X-Brick and X-Brick platforms, which vary according to the number of SSDs they include and their total available capacity. For each array, EMC recommends a maximum VSPEX end-user-computing configuration as outlined in this section.

Validated XtremIO configurations

The following XtremIO validated disk layouts provide support for a specified number of virtual desktops at a defined performance level. This VSPEX solution supports two XtremIO configurations, which are selected based on the number of desktops being deployed:

- XtremIO Starter X-Brick: Includes 13 SSD drives, and is validated to support up to 1,750 MCS linked-clone or PVS-streamed virtual desktops
- XtremIO X-Brick: Includes 25 SSD drives, and is validated to support up to 3,500 MCS linked-clone or PVS-streamed virtual desktops

The XtremIO storage configuration required for this solution is in addition to the storage required by the VSPEX private cloud that supports the solution's infrastructure services. For more information about the VSPEX private cloud storage pool, refer to the VSPEX Proven Infrastructure Guide in Essential reading.

XtremIO storage layout

Table 7 shows the number and size of the XtremIO volumes that are presented to the Hyper-V servers to host the virtual desktops. Two datastore configurations are listed for each desktop type: one that includes the space required to use the Citrix Personal vDisk (PvD) feature, and one that does not, for solutions that will not use that component of Citrix XenDesktop.

Note that when deploying Citrix desktops using PVS, the following values are configured by default:

- PVS write cache disk = 6 GB
- Citrix Personal vDisk (PvD) = 10 GB

If either of these values is changed from the default, the datastore sizes must also be changed accordingly.

Table 7. XtremIO storage layout (number of volumes and volume size in GB for each desktop type: PVS streamed, PVS with PvD streamed, MCS linked-clone, and MCS with PvD linked-clone, on the Starter X-Brick for 1,750 desktops and on the X-Brick for 3,500 desktops)
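As a rough illustration of how the per-desktop defaults above translate into aggregate datastore capacity, the following sketch multiplies the desktop count by the per-desktop allocation and divides it evenly across a set of volumes. This is a simplified assumption for illustration, not EMC's published sizing formula; the function name and the example volume count of 4 are hypothetical.

```python
import math

# Hedged sketch: estimate per-volume capacity for PVS-streamed desktops
# from the defaults quoted above (6 GB write cache, optional 10 GB PvD).
# Dividing evenly across an assumed number of volumes is illustrative
# only; consult Table 7 for the validated layout.
def pvs_volume_size_gb(desktops, volumes, write_cache_gb=6,
                       pvd_gb=10, with_pvd=False):
    """Size (GB) of each of `volumes` equally sized volumes."""
    per_desktop_gb = write_cache_gb + (pvd_gb if with_pvd else 0)
    return math.ceil(desktops * per_desktop_gb / volumes)

# Starter X-Brick scale (1,750 desktops) over an assumed 4 volumes:
print(pvs_volume_size_gb(1750, 4))                 # 2625 GB per volume
print(pvs_volume_size_gb(1750, 4, with_pvd=True))  # 7000 GB per volume
```

If the write cache or PvD size is changed from its default, passing the new value recomputes the required volume size, which is the adjustment the note above calls for.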

Expanding existing VSPEX end-user-computing environments

The EMC VSPEX End-User Computing solution supports a flexible implementation model in which it is easy to expand your environment as the needs of the business change. To support future expansion, the XtremIO Starter X-Brick can be non-disruptively upgraded to an X-Brick by installing the XtremIO expansion kit, which adds twelve more 400 GB SSD drives. The resulting X-Brick supports up to 3,500 desktops.

VNX array configurations

The following optional VNX validated disk layouts provide support for user data storage. You can modify a validated storage layout by adding drives for greater capacity and performance and by adding features such as FAST Cache and FAST VP for improved user data performance. However, decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per desktop and a less satisfactory user experience due to higher response times.

User data storage VNX building block

Our building block for optional user data storage is verified on an EMC VNX5400 and provides a flexible solution for VNX sizing. The user data storage building block shown in Figure 6 can support up to 1,750 desktops, using a VNX5400 with 32 NL-SAS drives in a FAST Cache-enabled storage pool. FAST Cache should be configured with two flash drives.

Figure 6. Optional user data storage building block for 1,750 virtual desktops

To support 3,500 users, you need to add a second, identical user data storage building block and a second Data Mover. This configuration then includes a total of 64 NL-SAS drives, 4 flash drives for FAST Cache, and 2 Data Movers.

EMC FAST VP

If multiple drive types have been implemented, FAST VP can be enabled to automatically tier data to balance differences in performance and capacity.

Note: FAST VP can provide performance improvements when implemented for user data and roaming profiles. Do not use FAST VP for virtual desktop datastores.

VNX shared file systems

The virtual desktops use four shared file systems: two for the Citrix XenDesktop Profile Management repositories and two to redirect user storage that resides in home directories. In general, redirecting user data out of the base image to VNX for File enables centralized administration and data protection and makes the desktops more stateless. Each file system is exported to the environment through a CIFS share. Each Persona Management repository share and home directory share serves an equal number of users.

Choosing the appropriate reference architecture

To choose the appropriate reference architecture for a customer environment, you must determine the resource requirements of the environment and then translate these requirements to an equivalent number of reference virtual desktops that have the characteristics defined in Table 5. This section describes how to use the Customer Sizing Worksheet to simplify the sizing calculations, as well as additional factors you should take into consideration when deciding which architecture to deploy.

Using the Customer Sizing Worksheet

The Customer Sizing Worksheet helps you to assess the customer environment and calculate the sizing requirements of the environment. Table 8 shows a completed worksheet for a sample customer environment. Appendix A provides a blank Customer Sizing Worksheet that you can print out and use to help size the solution for a customer.

Table 8. Example Customer Sizing Worksheet

For each user type (heavy, moderate, and typical users), the worksheet records the resource requirements (vCPUs, memory, and IOPS), the equivalent reference virtual desktops, the number of users, and the total reference desktops. For example, heavy users require 2 vCPUs, 8 GB of memory, and 12 IOPS per desktop.

The worksheet totals the reference desktops across all user types; in this example, the total is 2,400 reference desktops.

To complete the Customer Sizing Worksheet:

1. Identify the user types planned for migration into the VSPEX end-user-computing environment and the number of users of each type.
2. For each user type, determine the compute resource requirements in terms of vCPUs, memory (GB), storage performance (IOPS), and storage capacity.
3. For each resource type and user type, determine the equivalent reference virtual desktops requirements, that is, the number of reference virtual desktops required to meet the specified resource requirements.
4. Determine the total number of reference desktops needed from the resource pool for the customer environment.

Determining the resource requirements

CPU

The reference virtual desktop outlined in Table 5 assumes that most desktop applications are optimized for a single CPU. If one type of user requires a desktop with multiple virtual CPUs, modify the proposed virtual desktop count to account for the additional resources. For example, if you virtualize 100 desktops, but 20 users require two CPUs instead of one, consider that your pool needs to provide 120 virtual desktops of capability.

Memory

Memory plays a key role in ensuring application functionality and performance. Each group of desktops will have different targets for the available memory that is considered acceptable. Like the CPU calculation, if a group of users requires additional memory resources, simply adjust the number of planned desktops to accommodate the additional resource requirements. For example, if there are 200 desktops to be virtualized, but each one needs 4 GB of memory instead of the 2 GB that the reference virtual desktop provides, plan for 400 reference virtual desktops.
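The CPU and memory adjustments above can be sketched as a simple scaling calculation: multiply each group's desktop count by its requirement relative to the reference desktop (1 vCPU, 2 GB). The function and constant names below are illustrative assumptions, not part of the VSPEX tooling.

```python
# Sketch of the adjustment described above. The reference desktop
# provides 1 vCPU and 2 GB of RAM; a group needing more is counted as
# proportionally more reference desktops.
REF_VCPUS = 1
REF_MEMORY_GB = 2

def adjusted_desktop_count(desktops, vcpus=1, memory_gb=2):
    """Scale a group's desktop count by its largest resource ratio."""
    factor = max(vcpus / REF_VCPUS, memory_gb / REF_MEMORY_GB)
    return int(desktops * factor)

# 100 desktops where 20 users need 2 vCPUs: 80 + (20 * 2) = 120
print(80 + adjusted_desktop_count(20, vcpus=2))   # 120
# 200 desktops that each need 4 GB instead of 2 GB:
print(adjusted_desktop_count(200, memory_gb=4))   # 400
```

Both printed values match the worked examples in the CPU and Memory paragraphs above.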
IOPS

The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications that should be representative of the majority of virtual desktop implementations.

Storage capacity

The storage capacity requirement for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops in this solution rely on additional shared storage for user profile data and user documents. This requirement is an optional component that can be met by the addition of specific storage hardware defined in the solution. It can also be met by using existing file shares in the environment.

Determining the equivalent reference virtual desktops

With all of the resources defined, you determine the number of equivalent reference virtual desktops by using the relationships indicated in Table 9. Round all values up to the closest whole number.

Table 9. Reference virtual desktop resources

- CPU (reference virtual desktop value: 1): Equivalent reference virtual desktops = resource requirements
- Memory (reference virtual desktop value: 2): Equivalent reference virtual desktops = (resource requirements)/2
- IOPS (reference virtual desktop value: 10): Equivalent reference virtual desktops = (resource requirements)/10

For example, the heavy user type in Table 8 requires 2 virtual CPUs, 12 IOPS, and 8 GB of memory for each desktop. This translates to two reference virtual desktops of CPU, four reference virtual desktops of memory, and two reference virtual desktops of IOPS. The number of reference virtual desktops required for each user type then equals the maximum required for an individual resource. For example, the number of equivalent reference virtual desktops for the heavy user type in Table 8 is four, as this number meets all the resource requirements: IOPS, vCPU, and memory.

To calculate the total number of reference desktops for a user type, you multiply the number of equivalent reference virtual desktops for that user type by the number of users.
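The Table 9 relationships and the heavy-user example can be expressed as a small calculation: take the maximum of the per-resource equivalents, each rounded up. This is a minimal sketch in Python; the function name is an assumption, not part of the VSPEX Sizing Tool.

```python
import math

# Sketch of the Table 9 conversion: a user type's equivalent reference
# desktops is the maximum of its per-resource equivalents (vCPUs / 1,
# memory GB / 2, IOPS / 10), each rounded up to a whole number.
def equivalent_reference_desktops(vcpus, memory_gb, iops):
    return max(math.ceil(vcpus / 1),
               math.ceil(memory_gb / 2),
               math.ceil(iops / 10))

# Heavy user type from Table 8: 2 vCPUs, 8 GB of memory, 12 IOPS.
per_user = equivalent_reference_desktops(2, 8, 12)
print(per_user)  # 4, matching the worked example above
```

Multiplying the result by the number of users of that type gives the total reference desktops for the user type, exactly as described above.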
Determining the total reference virtual desktops After the worksheet is completed for each user type that the customer wants to migrate into the virtual infrastructure, you compute the total number of reference virtual desktops required in the resource pool by calculating the sum of the total reference virtual desktops for all user types. In the example in Table 8, the total is 2,400 virtual desktops. Selecting a reference architecture This VSPEX end-user computing reference architecture supports two separate points of scale, a Starter X-Brick capable of supporting up to 1,750 reference desktops, and an X-Brick capable of hosting up to 3,500 reference desktops. The total reference virtual desktops value from the completed Customer Sizing Worksheet can be used to verify that this reference architecture would be adequate for the customer requirements. In the example in Table 8, the customer requires 2,400 virtual 40

desktops of capability from the pool. Therefore, this reference architecture provides sufficient resources for current needs as well as some room for growth. However, there may be other factors to consider when verifying that this reference architecture will perform as intended, including concurrency and desktop workload.

Concurrency

The reference workload used to validate this solution assumes that all desktop users are active at all times. In other words, we tested the 3,500-desktop reference architecture with 3,500 desktops, all generating workload in parallel, all booted at the same time, and so on. If the customer expects to have 3,500 users but only 50 percent of them logged on at any given time, due to time zone differences or alternate shifts, the reference architecture may be able to support additional desktops.

Heavier desktop workloads

The reference workload is considered a typical office worker load. However, some customers' users might have a more active profile. If a company has 3,500 users and, due to custom corporate applications, each user generates 50 predominantly write IOPS, as compared to the 10 IOPS used in the reference workload, the company will need 175,000 IOPS (3,500 users x 50 IOPS per desktop). This configuration would be underpowered because the proposed I/O load is greater than the array maximum of 100,000 write IOPS. The company would need to deploy an additional X-Brick, reduce its current I/O load, or reduce the total number of desktops to ensure that the storage array performs as required.

Fine-tuning hardware resources

In most cases, the Customer Sizing Worksheet suggests a reference architecture adequate for the customer's needs. However, in some cases you may want to further customize the hardware resources available to the system.
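The heavier-workload check above is simple multiplication. As a hedged sketch (the 100,000 write IOPS ceiling is the figure quoted in this guide for the validated configuration, not a general XtremIO limit):

```python
ARRAY_MAX_WRITE_IOPS = 100_000  # single X-Brick figure quoted in this guide

def required_write_iops(users, iops_per_desktop):
    """Total write IOPS the desktop population would generate."""
    return users * iops_per_desktop

demand = required_write_iops(3_500, 50)
print(demand)                          # 175000
print(demand <= ARRAY_MAX_WRITE_IOPS)  # False: the array would be underpowered
```

When the demand exceeds the ceiling, the remedies are those listed above: add an X-Brick, reduce the I/O load, or reduce the desktop count.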
A complete description of the system architecture is beyond the scope of this document, but you can customize your solution further at this point.

Storage resources

The XtremIO array is deployed in one of two configurations: a Starter X-Brick or an X-Brick. While more X-Bricks can be added to increase the capacity or performance of an XtremIO cluster, this solution is based on either a Starter X-Brick or a single X-Brick. The XtremIO array requires no tuning, and the number of SSDs available in the array is fixed. Use the VSPEX Sizing Tool or the Customer Sizing Worksheet to verify that the XtremIO array can provide the necessary levels of capacity and performance.

Server resources

For the server resources in the solution, it is possible to customize the hardware resources more effectively. To do this, first total the resource requirements for the server components, as shown in Table 10. Total CPU resources and Total memory resources columns have been added to the worksheet.

Table 10. Server resource component totals

User type        vCPUs   Memory (GB)   Number of users   Total CPU resources   Total memory resources (GB)
Heavy users      2       8             200               400                   1,600
Moderate users   2       4             200               400                   800
Typical users    1       2             1,200             1,200                 2,400
Total                                                    2,000                 4,800

The example in Table 10 requires 2,000 vCPUs and 4,800 GB of memory. The reference architectures assume five vCPUs per physical processor core and no memory over-provisioning, which converts to 400 processor cores and 4,800 GB of memory for this example. Use these calculations to more accurately determine the total server resources required.

Note: Keep high availability requirements in mind when customizing the resource pool hardware.

Summary

EMC considers the requirements stated in this solution to be the minimum set of resources needed to handle the workloads defined for a reference virtual desktop. In any customer implementation, the load of a system can vary over time as users interact with it. If the customer's virtual desktops differ significantly from the reference definition and vary in the same resource group, you might need to add more of that resource to the system.
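The Table 10 totals can be reproduced with a short script. The per-user-type splits below are the worksheet example values and are illustrative; the 5:1 vCPU-to-core ratio is the one stated in this guide.

```python
import math

# (number of users, vCPUs per desktop, GB of RAM per desktop) per user type
user_types = {
    "heavy":    (200, 2, 8),
    "moderate": (200, 2, 4),
    "typical":  (1_200, 1, 2),
}

total_vcpus = sum(n * v for n, v, _ in user_types.values())
total_mem_gb = sum(n * m for n, _, m in user_types.values())

# Five vCPUs per physical core, no memory over-provisioning
cores_needed = math.ceil(total_vcpus / 5)

print(total_vcpus, total_mem_gb, cores_needed)  # 2000 4800 400
```

As the note above says, size the final resource pool with high-availability headroom on top of these computed minimums.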

Chapter 5: Solution Design Considerations and Best Practices

This chapter presents the following topics:

- Overview
- Server design considerations
- Network design considerations
- Storage design considerations
- High availability and failover
- Validation test profile
- EMC Data Protection configuration guidelines
- VSPEX for Citrix XenDesktop with ShareFile StorageZones solution

Overview

This chapter describes best practices and considerations for designing the VSPEX End-User Computing solution. For more information on deployment best practices for the various components of the solution, refer to the vendor-specific documentation.

Server design considerations

VSPEX solutions are designed to run on a wide variety of server platforms. VSPEX defines the minimum CPU and memory resources required, but not a specific server type or configuration. The customer can use any server platform and configuration that meets or exceeds the minimum requirements. For example, Figure 7 shows how a customer could implement the same server requirements by using either white-box servers or high-end servers. Both implementations achieve the required number of processor cores and amount of RAM, but the number and type of servers differ.

Figure 7. Compute layer flexibility

The choice of a server platform is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For example:

- From a virtualization perspective, if a system's workload is well understood, features like Hyper-V Dynamic Memory can reduce the aggregate memory requirement.
- If the virtual machine pool does not have a high level of peak or concurrent usage, you can reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, you might need to increase the number of CPUs and the amount of memory.

The server infrastructure must meet the following minimum requirements:

- Sufficient CPU cores and memory to support the required number and types of virtual machines
- Sufficient network connections to enable redundant connectivity to the system switches
- Sufficient excess capacity to enable the environment to withstand a server failure and failover

Server best practices

For this solution, EMC recommends that you consider the following best practices for the server layer:

- Identical server units: Use identical or at least compatible servers to ensure that they share similar hardware configurations. VSPEX implements hypervisor-level high-availability technologies that might require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
- Recent processor technologies: For new deployments, use recent revisions of common processor technologies. It is assumed that these will perform as well as, or better than, the systems used to validate the solution.
- High availability: Implement the high-availability features available in the virtualization layer to ensure that the compute layer has sufficient resources to accommodate at least single server failures. This also allows you to implement minimal-downtime upgrades. High availability and failover provides further details.

Note: When implementing hypervisor-layer high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

- Resource utilization: In any running system, monitor the utilization of resources and adapt as needed. For example, the reference virtual desktop and required hardware resources in this solution assume that there are no more than five virtual CPUs for each physical processor core (5:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual desktops, but this ratio may not be appropriate in all cases. EMC recommends

monitoring CPU utilization at the hypervisor layer to determine whether more resources are required, and adding them as needed.

Validated server hardware

Table 11 identifies the server hardware and the configurations validated in this solution.

Table 11. Server hardware (servers for virtual desktops)

CPU:
- 1 vCPU per desktop (5 desktops per core)
- 350 cores across all servers for 1,750 virtual desktops
- 700 cores across all servers for 3,500 virtual desktops

Memory:
- 2 GB RAM per virtual machine
- 3.5 TB RAM across all servers for 1,750 virtual desktops
- 7 TB RAM across all servers for 3,500 virtual desktops
- 2 GB RAM reservation per Hyper-V host

Network:
- 3 x 10 GbE NICs per blade chassis, or 6 x 1 GbE NICs per standalone server

Notes: The 5:1 vCPU-to-physical-core ratio applies to the reference workload defined in this guide. When deploying Avamar, add CPU and RAM as needed for components that are CPU or RAM intensive. Refer to the relevant product documentation for information on Avamar resource requirements.

No matter how many servers you deploy to meet the minimum requirements in Table 11, always add one more server to support Hyper-V HA. This server should have sufficient capacity to provide a failover platform in the event of a hardware outage.

Hyper-V memory virtualization

Microsoft Hyper-V has a number of advanced features that help optimize performance and overall use of resources. This section describes the key features for memory management and considerations for using them with your VSPEX solution. Figure 8 illustrates how a single hypervisor consumes memory from a pool of resources. Hyper-V memory management features such as Dynamic Memory and Smart Paging can reduce total memory usage and increase consolidation ratios in the hypervisor.

Figure 8. Hypervisor memory consumption

Memory virtualization techniques enable the Hyper-V hypervisor to abstract physical host resources, such as dynamic memory, to provide resource isolation across multiple virtual machines while avoiding resource exhaustion. In cases where advanced processors (such as Intel processors with EPT support) are deployed, memory abstraction takes place within the CPU. Otherwise, it occurs within the hypervisor itself. Hyper-V provides several memory management techniques, such as Dynamic Memory, non-uniform memory access, and Smart Paging.

Dynamic Memory

Dynamic Memory increases physical memory efficiency by treating memory as a shared resource and allocating it to virtual machines dynamically. The actual consumed memory of each virtual machine is adjusted on demand. Dynamic Memory enables more virtual machines to run by reclaiming unused memory from idle virtual machines. In Windows Server 2012, Dynamic Memory can increase the maximum memory available to virtual machines.

Non-uniform memory access

Non-uniform memory access (NUMA) is a multinode computer technology that enables a CPU to access remote-node memory. This type of memory access is costly in terms of performance. However, Windows Server 2012 employs a process affinity that strives to keep threads pinned to a particular CPU to avoid remote-node memory access. In previous versions of Windows, this feature was available only to the host. Windows Server 2012 extends this functionality to virtual machines, where it improves performance.

Smart Paging

With Dynamic Memory, Hyper-V allows virtual machines to exceed the physical memory available. When a virtual machine's minimum memory is less than its start-up memory, Hyper-V might not always have additional memory available to meet the machine's start-up requirements. Smart Paging bridges the gap between minimum memory and start-up memory, allowing virtual machines to restart reliably by using disk resources as a temporary memory replacement. It swaps out less-used memory to disk and swaps it back in when needed. However, this can degrade performance. Hyper-V continues to use guest paging when the host memory is oversubscribed, because it is more efficient than Smart Paging.

Memory configuration guidelines

Proper sizing and configuration of the solution requires care when configuring server memory.
This section provides guidelines for allocating memory to virtual machines, taking into account Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V memory overhead

Virtualization of memory resources incurs associated overhead, including the memory consumed by the Hyper-V parent partition and additional overhead for each virtual machine. For this solution, leave at least 2 GB of memory for the Hyper-V parent partition.

Allocating memory to virtual machines

Server capacity is required for two purposes in the solution:

- To support required infrastructure services such as authentication and authorization, DNS, and database. For further details on the hosting requirements for these infrastructure services, refer to the VSPEX Private Cloud Proven Infrastructure Guide listed in Essential reading.
- To support the virtualized desktop infrastructure
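As a hedged sketch of the per-host memory arithmetic: the 2 GB parent-partition reservation and 2 GB per desktop come from this guide, while the desktops-per-host figure is a hypothetical input that depends on your server hardware.

```python
def host_memory_gb(desktops_per_host, gb_per_desktop=2, parent_partition_gb=2):
    """Minimum physical RAM for one Hyper-V host in this solution.

    Statically assigned desktop memory plus the Hyper-V parent partition
    reservation; no over-commitment, matching the validated configuration.
    """
    return desktops_per_host * gb_per_desktop + parent_partition_gb

print(host_memory_gb(100))  # -> 202 GB for a host running 100 desktops
```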

In this solution, each virtual desktop is assigned 2 GB of memory, as defined in Table 11. We validated the solution with statically assigned memory and with no over-commitment of memory resources. If memory over-commitment is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.

Network design considerations

VSPEX solutions define minimum network requirements and provide general guidance on network architecture while enabling customers to choose any network hardware that meets the requirements. If additional bandwidth is needed, it is important to add capability at both the storage array and the hypervisor host. The options for network connectivity on the server depend on the type of server. VNX storage arrays include a number of network ports and have the option to add ports using EMC UltraFlex I/O modules.

For reference purposes in the validated environment, EMC assumes that each virtual desktop generates 10 IOPS with an average size of 4 KB. This means that each virtual desktop generates at least 40 KB/s of traffic on the storage network. For an environment rated for 1,750 virtual desktops, this means a minimum of approximately 70 MB/s, which is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for the following:

- User network traffic
- Virtual desktop migration
- Administrative and management operations

The requirements for each of these operations depend on how the environment is used, so it is not practical to provide concrete numbers in this context. However, the networks described for the reference architectures in this solution should be able to handle average workloads for these operations.
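The storage-network estimate above is straightforward multiplication. A sketch using the guide's stated assumptions (10 IOPS per desktop, 4 KB average I/O size, decimal units):

```python
def storage_mb_per_sec(desktops, iops=10, io_kb=4):
    """Minimum steady-state storage network throughput in MB/s."""
    # 10 IOPS x 4 KB = 40 KB/s per desktop; scale by the desktop count
    return desktops * iops * io_kb / 1000

print(storage_mb_per_sec(1_750))  # -> 70.0 MB/s minimum for 1,750 desktops
```

Remember that this covers only steady-state desktop I/O; user traffic, desktop migration, and management operations need additional headroom, as listed above.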
Regardless of the network traffic requirements, always provide at least two physical network connections for each logical network so that a single link failure does not affect system availability. Design the network so that, if a failure happens, the aggregate bandwidth is sufficient to accommodate the full workload. The network infrastructure must meet the following minimum requirements:

- Redundant network links for the hosts, switches, and storage
- Support for link aggregation
- Traffic isolation based on industry best practices

Validated network hardware

Table 12 lists the hardware resources for the network infrastructure validated in this solution.

Table 12. Minimum switching capacity

XtremIO (block: virtual desktop storage):
- 2 physical switches
- 2 x FC/CEE/10 GbE ports per Hyper-V server for the storage network (iSCSI and live migration)
- 2 x FC/CEE/10 GbE ports per storage controller for desktop data

VNX (file: optional user data storage):
- 2 physical switches
- 2 x 10 GbE ports per Hyper-V server
- 1 x 1 GbE port per Control Station for management
- 2 x 10 GbE ports per Data Mover for data

Notes: The solution may use a 1 GbE network infrastructure as long as the underlying requirements for bandwidth and redundancy are fulfilled. This configuration assumes that the VSPEX implementation uses rack-mounted servers; for blade server implementations, ensure that similar bandwidth and high-availability capabilities are provided.

Network configuration guidelines

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines take into account network redundancy, link aggregation, traffic isolation, and jumbo frames. The configuration examples are for IP-based networks, but similar best practices and design principles apply for the FC storage network option.

Network redundancy

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. It is required regardless of whether the network infrastructure for the solution already exists or is deployed with other solution components.

Figure 9 provides an example of a highly available XtremIO FC network topology.

Figure 9. Highly available XtremIO FC network design example

Figure 10 provides an example of a highly available VNX network topology.

Figure 10. Highly available VNX Ethernet network design example

Link aggregation

VNX arrays provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address and, potentially, multiple IP addresses. In this solution, we configured the Link Aggregation Control Protocol (LACP) on the VNX array to combine multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. We distributed all network traffic across the active links.

Note: A link aggregation resembles an Ethernet channel but uses the LACP IEEE 802.3ad standard. This standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex.

Traffic isolation

This solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security. VLANs segregate network traffic to enable traffic of different types to move over isolated networks. In some cases, physical isolation is required for regulatory or policy compliance reasons, but in most cases logical isolation using VLANs is sufficient. This solution requires a minimum of two VLANs: one for client access and one for management. Figure 11 shows the design of these VLANs.

Figure 11. Required networks

The client access network is for users of the system (clients) to communicate with the infrastructure, including the virtual machines and the CIFS shares hosted by the VNX array. The management network provides administrators with dedicated access to the management connections on the storage array, network switches, and hosts.

Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. These additional networks can be implemented, but they are not required. Figure 11 demonstrates the network connectivity requirements for a VNX array using 10 GbE network connections. Create a similar topology when using 1 GbE network connections.

Storage design considerations

Overview

The solution includes layouts for the disks used in the VNX portion of the solution for validation testing. Each layout balances the available storage capacity with the performance capability of the drives. There are several layers to consider when designing the storage layouts. Specifically, a collection of disks in the array is assigned to a storage pool. From that pool, you can provision LUNs to create a storage pool for file, which is used when creating file systems that act as CIFS shares for user data. Each layer has a specific configuration that is defined for the solution and documented in the VSPEX End-User Computing for Citrix XenDesktop 7.5 and Microsoft Hyper-V Implementation Guide.

It is generally acceptable to replace drive types with a type that has more capacity and the same performance characteristics, or one that has the same capacity and higher performance characteristics. It is also acceptable to change the placement of drives in the drive shelves to comply with updated or new drive shelf arrangements. Where there is a need to deviate from the proposed number and type of drives specified, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system.

Validated storage hardware and configuration

Hyper-V supports more than one method of using storage when hosting virtual machines. We tested the configurations described in Table 13 using FC, and the storage layouts described adhere to all current best practices. If required, a customer or architect with the necessary training and background can make modifications based on their understanding of the system's usage and load.

Table 13. Storage hardware

XtremIO shared storage:
- Common: 2 x FC/CEE/10 GbE interfaces per storage controller; 1 x 1 GbE interface per storage controller for management
- For 1,750 virtual desktops: Starter X-Brick configuration with 13 x 400 GB flash drives
- For 3,500 virtual desktops: X-Brick configuration with 25 x 400 GB flash drives

VNX shared storage, optional for user data (includes hot spares):
- Common: 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 2 x Data Movers (1,750 users; 1 active/1 standby)
- For 3,500 virtual desktops: 1 additional Data Mover (total of 2 active/1 standby)
- For 1,750 virtual desktops: 34 x 2 TB/7,200 rpm/3.5-inch NL-SAS disks; 3 x 100 GB/3.5-inch flash drives
- For 3,500 virtual desktops: 50 x 2 TB/7,200 rpm/3.5-inch NL-SAS disks; 5 x 100 GB/3.5-inch flash drives

Note: For VNX arrays, EMC recommends configuring at least one hot spare for every 30 drives of a given type. The recommendations in Table 13 include hot spares.

Hyper-V storage virtualization

Windows Server 2012 Hyper-V and Failover Clustering use Cluster Shared Volumes (CSV) v2 and the Virtual Hard Disk Format (VHDX) features to virtualize storage that is presented from external shared storage systems to host virtual machines. Figure 12 shows an example of a storage array presenting block-based LUNs (as CSVs) to the Windows hosts that run virtual machines. An additional option, pass-through disks, allows virtual machines to access a physical disk that is mapped to a Hyper-V host but does not have a volume configured. This solution uses CSVs for the infrastructure server and virtual desktop datastores.

Figure 12. Hyper-V virtual disk types

CSV

A CSV is a shared disk that contains an NTFS volume made accessible to all nodes of a Windows Failover Cluster. It can be deployed over any SCSI-based local or network storage.

Pass-through disks

Windows Server 2012 supports pass-through disks, which enable a virtual machine to access a physical disk that is mapped to the host but does not have a volume configured.

New Virtual Hard Disk format

Hyper-V in Windows Server 2012 introduces an update to the VHD format, called VHDX, which has a much larger capacity and built-in resiliency. The main features of the VHDX format are:

- Support for virtual hard disk storage with a capacity of up to 64 TB
- Additional protection against data corruption during power failures, achieved by logging updates to the VHDX metadata structures
- Optimal structure alignment of the virtual hard disk format to suit large-sector disks

The VHDX format also provides:

- Larger block sizes for dynamic and differencing disks, which enables the disks to meet the needs of the workload
- A 4 KB logical sector virtual disk that enables increased performance when used by applications and workloads designed for 4 KB sectors
- The ability to store custom metadata about the file that the user might want to record, such as the operating system version or applied updates
- Space reclamation features that can result in smaller file size and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware)

High availability and failover

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with minimal impact to business operations. This section describes the high availability features of the solution.

Virtualization layer

EMC recommends that you configure high availability in the virtualization layer and allow the hypervisor to automatically restart virtual machines that fail. Figure 13 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 13. High availability at the virtualization layer

By implementing high availability at the virtualization layer, the infrastructure attempts to keep as many services running as possible, even in the event of a hardware failure.

Compute layer

While the choice of servers to implement in the compute layer is flexible, it is best to use enterprise-class servers designed for data centers. This type of server has redundant power supplies, as shown in Figure 14. Connect these to separate power distribution units (PDUs) in accordance with your server vendor's best practices.

Figure 14. Redundant power supplies

Network layer

The advanced networking features of the VNX array provide protection against network connection failures. Each Hyper-V host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 15. Spread these connections across multiple Ethernet switches to guard against component failure in the network.

Figure 15. VNX Ethernet network layer high availability

Having no single points of failure in the network layer ensures that the compute layer can access storage and communicate with users even if a component fails.

Storage layer

The XtremIO and VNX families are designed for five-nines (99.999 percent) availability by using redundant components throughout the array, as shown in Figure 16 and Figure 17. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss due to individual disk failures, and the available hot spare drives can be dynamically allocated to replace a failing disk.

Figure 16. XtremIO series high availability

Figure 17. VNX series high availability

EMC storage arrays are designed to be highly available by default. Use the appropriate installation guides to ensure that single-unit failures do not result in data loss or unavailability.

Validation test profile

Profile characteristics

Table 14 shows the desktop definition and storage configuration parameters that we validated with the environment profile.

Table 14. Validated environment profile

- Virtual desktop OS: Windows 8.1 Enterprise (32-bit)
- CPU per virtual desktop: 1 vCPU
- Number of virtual desktops per CPU core: 5
- RAM per virtual desktop: 2 GB
- Desktop provisioning method: MCS linked clones or PVS streamed desktops
- Average storage available for each MCS linked clone virtual desktop: 6 GB (VMDK and VSWP)
- Average storage available for each PVS streamed virtual desktop: 20 GB (PVS write cache and personal vDisk VMDK and VSWP)
- Average IOPS per virtual desktop at steady state: 10 IOPS
- Number of datastores to store virtual desktops: 14 for 1,750 MCS linked clone desktops; 7 for 1,750 PVS streamed desktops; 28 for 3,500 MCS linked clone desktops; 14 for 3,500 PVS streamed desktops
- Number of virtual desktops per datastore: 125 for MCS linked clones; 250 for PVS streamed desktops
- Disk and RAID type for XtremIO virtual desktop datastores: 400 GB eMLC SSD drives; XtremIO proprietary data protection (XDP) delivers RAID 6-like data protection with better performance than RAID 10
- Disk and RAID type for VNX CIFS shares to host user profiles and home directories (optional): RAID 6, 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks

EMC Data Protection configuration guidelines

Data protection profile characteristics

Table 15 shows the data protection environment profile that we validated for the solution.

Table 15. Data protection profile characteristics

- User data: 17.5 TB for 1,750 virtual desktops; 35 TB for 3,500 virtual desktops (10 GB per desktop)
- Daily change rate for user data: 2%
- Retention policy: 30 daily, 4 weekly, and 1 monthly

The solution outlines the backup storage (initial and growth) and retention needs of the system. Gather additional information to further size Avamar, including tape-out needs, RPO and RTO specifics, and multisite environment replication needs.

Data protection layout

Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this case, the solution is deployed with an Avamar Data Store. This enables unstructured user data to be backed up directly to the Avamar system for simple file-level recovery. This data protection solution unifies the backup process with the deduplication software and system, and achieves the highest levels of performance and efficiency.
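The user-data figures in Table 15 follow directly from the 10 GB-per-desktop assumption. A small sketch that also estimates the daily change volume from the stated 2 percent rate:

```python
def user_data_tb(desktops, gb_per_desktop=10):
    """Total user data to protect, in decimal TB (as used in this guide)."""
    return desktops * gb_per_desktop / 1000

def daily_change_tb(desktops, change_rate=0.02):
    """Daily changed user data at the guide's 2% change rate."""
    return user_data_tb(desktops) * change_rate

print(user_data_tb(1_750), user_data_tb(3_500))  # 17.5 35.0
print(daily_change_tb(1_750))                    # 0.35 TB of changes per day
```

Actual Avamar sizing also depends on deduplication ratios, retention, and replication needs, so treat these raw figures only as a starting point for the information-gathering described above.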

VSPEX for Citrix XenDesktop with ShareFile StorageZones solution

With some added infrastructure, the VSPEX End-User Computing for Citrix XenDesktop solution supports Citrix StorageZones with Storage Center.

ShareFile StorageZones architecture

Figure 18 shows the high-level architecture of a ShareFile StorageZones deployment.

Figure 18. ShareFile high-level architecture

The architecture consists of the following components:

- Client: Accesses the ShareFile service through one of the native tools, such as a browser or Citrix Receiver, or directly through the ShareFile API.
- Control Plane: Performs functions such as storing files, folders, and account information, access control, reporting, and various other brokering functions. The Control Plane resides in multiple Citrix data centers located worldwide.
- StorageZones: Defines the locations where data is stored.

StorageZones

ShareFile Storage Center extends the ShareFile software-as-a-service (SaaS) cloud storage by providing on-premises private storage, that is, StorageZones. ShareFile on-premises storage differs from cloud storage as follows:

ShareFile-managed cloud storage is a public multitenant storage system maintained by Citrix. By default, ShareFile stores data in cloud storage.

A ShareFile Storage Center is a private single-tenant storage system maintained by the customer and accessible only by approved customer accounts.

Storage Center enables you to configure private, on-premises StorageZones, which define the locations where data is stored and enable performance optimization by locating data storage close to users. You can use StorageZones with, or instead of, the ShareFile-managed cloud storage.

Storage Center is a web service that handles all HTTPS operations from end users and the ShareFile control subsystem. The control subsystem handles all operations not related to file content, such as authentication, authorization, file browsing, configuration, metadata, sending and requesting files, and load balancing. The control subsystem also performs Storage Center health checks and prevents offline servers from sending requests. The ShareFile control subsystem is maintained in Citrix online data centers.

Design considerations

Based on an organization's performance and compliance requirements, consider the number of StorageZones needed and where best to locate them. For example, if users are in Europe, storing files in a Storage Center in Europe provides both performance and compliance benefits. In general, assigning users to the StorageZones location closest to them geographically is the best practice for optimizing performance.

For a production deployment of ShareFile, the best practice is to use at least two servers with Storage Center installed for high availability.
When you install Storage Center, you create a StorageZone. You can then install Storage Center on another server and join it to the same StorageZone. Storage Centers that belong to the same StorageZone must use the same file share for storage.

VSPEX for ShareFile StorageZones architecture

Figure 19 shows the logical architecture of the VSPEX for ShareFile StorageZones solution. The customer can select any server and networking hardware that meets or exceeds the minimum requirements, while the recommended storage delivers a highly available architecture for a ShareFile StorageZones deployment.

Figure 19. VSPEX for Citrix XenDesktop with ShareFile StorageZones: Logical architecture

Server requirements

A high-availability production environment requires a minimum of two servers (virtual machines) with Storage Center installed. The minimum requirements to implement Citrix ShareFile StorageZones with Storage Center are:

CPU (cores): 2
Memory (GB): 4

For more information, refer to the Storage Center system requirements on the Citrix eDocs website.

Network requirements

The networking components can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the minimum requirements of the solution. Provide sufficient network ports to support the two additional Storage Center servers.

Storage requirements

ShareFile StorageZones requires a CIFS share to provide private data storage for Storage Center. The VNX, which provides the storage for VSPEX end-user-computing solutions, provides both file and block access and an extensive feature set that makes it an ideal choice for ShareFile StorageZones deployments. Table 16 details the recommended VNX storage for the StorageZones CIFS share.

Table 16. Recommended VNX storage for ShareFile StorageZones CIFS share

CIFS share for (number of users)    Configuration
1,750 users                         24 x 2 TB, 7,200 rpm 3.5-inch NL-SAS disks (6+2 RAID 6)
3,500 users                         48 x 2 TB, 7,200 rpm 3.5-inch NL-SAS disks (6+2 RAID 6)

Note: The configuration assumes that each user has 10 GB of private storage space.
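A quick sanity check shows why the Table 16 layouts comfortably cover the 10 GB-per-user requirement. This sketch uses raw disk capacity and ignores filesystem and hot-spare overheads, so it is an upper bound rather than a precise usable figure.

```python
# Sanity-check the Table 16 layouts: usable capacity of 6+2 RAID 6
# groups versus the 10 GB-per-user private-storage requirement.
# Raw disk size is used; filesystem and hot-spare overheads ignored.

def usable_tb(disks, disk_tb=2, group_data=6, group_parity=2):
    """Usable TB from RAID 6 groups of (group_data + group_parity) disks."""
    groups = disks // (group_data + group_parity)
    return groups * group_data * disk_tb

def fits(users, disks, gb_per_user=10):
    """True if the disk layout covers the per-user private storage."""
    return usable_tb(disks) * 1024 >= users * gb_per_user

print(usable_tb(24), fits(1750, 24))   # 1,750-user configuration
print(usable_tb(48), fits(3500, 48))   # 3,500-user configuration
```

The headroom above the strict 17.5 TB and 35 TB requirements absorbs RAID rebuild space, snapshots, and growth.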


Chapter 6: Reference Documentation

This chapter presents the following topics:

EMC documentation
Other documentation

EMC documentation

The following documents, located on EMC Online Support, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative.

Avamar Client for Windows on Citrix XenDesktop Technical Notes
Deploying Microsoft Windows 8 Virtual Desktops Applied Best Practices White Paper
EMC Avamar 7 Administrator Guide
EMC Avamar 7 Operational Best Practices
EMC PowerPath Viewer Installation and Administration Guide
EMC Storage Integrator for Windows Suite Release Notes
EMC VNX Unified Best Practices for Performance Applied Best Practices White Paper
EMC VNX5400 Unified Installation Guide
EMC XtremIO Storage Array Hardware Installation and Upgrade Guide
EMC XtremIO Storage Array Operations Guide
EMC XtremIO Storage Array Pre-Installation Checklist
EMC XtremIO Storage Array Security Configuration Guide
EMC XtremIO Storage Array Site Preparation Guide
EMC XtremIO Storage Array Software Installation and Upgrade Guide
EMC XtremIO Storage Array User Guide
VNX FAST Cache: A Detailed Review White Paper
VNX Installation Assistant for File/Unified Worksheet

Other documentation

Refer to the following topics on the Microsoft MSDN website:

Installing Windows Server 2012 R2
SQL Server Installation (SQL Server 2012 SP1)

Refer to the following topics on the Microsoft TechNet website:

Note: The links provided were working correctly at the time of publication.

Create VM from Template
Creating a Hyper-V Host Cluster in VMM Overview
Creating and Deploying Virtual Machines in VMM

Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager
Failover Clustering Overview
How to Add a Node to a Hyper-V Host Cluster in VMM
How to Add Windows File Server Shares in VMM
How to Create a Virtual Machine Template
How to Create and Deploy a Virtual Machine from a Template
Hyper-V: How many network cards do I need?
Hyper-V Network Virtualization Overview
Hyper-V Overview
Install the Hyper-V Role and Configure a Virtual Machine
Installation for SQL Server 2012
Installing a VMM Agent Locally
Installing a VMM Management Server
Installing and Opening the VMM Console
Install and Deploy Windows Server 2012 R2 and Windows Server 2012
Windows Server 2012 Hyper-V Network Virtualization Survival Guide

The following documents, available on the Citrix website, provide additional and relevant information:

Definitive Guide to XenApp 7.5 and XenDesktop 7.5
Windows 7 Optimization Guide for Desktop Virtualization
Windows 8 and 8.1 Virtual Desktop Optimization Guide

The following documents, available on the Microsoft website, provide additional and relevant information:

Installing Windows Server 2012 R2
SQL Server Installation (SQL Server 2012)


Appendix A: Customer Sizing Worksheet

This appendix presents the following topic:

Customer Sizing Worksheet for end-user computing

Customer Sizing Worksheet for end-user computing

Before selecting a reference architecture on which to base a customer solution, use the Customer Sizing Worksheet to gather information about the customer's business requirements and to calculate the required resources. Table 17 shows a blank worksheet. To enable you to easily print a copy, a standalone copy of the worksheet is attached to this document in Microsoft Office Word format.

Table 17. Customer Sizing Worksheet

User type    vCPUs    Memory (GB)    IOPS    Equivalent reference virtual desktops    No. of users    Total reference desktops

For each user type, the worksheet provides two rows: one for the resource requirements and one for the equivalent reference virtual desktops.

Total
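The worksheet arithmetic can be sketched as follows. The reference-desktop resource values below are placeholders, not the validated VSPEX figures; take the actual values for the chosen reference architecture from the VSPEX Proven Infrastructure documentation.

```python
from math import ceil

# Worksheet sketch: convert one user type's per-desktop requirements
# into equivalent reference virtual desktops. REFERENCE values are
# placeholders -- substitute the figures from the VSPEX documentation.
REFERENCE = {"vcpus": 1, "memory_gb": 2, "iops": 8}

def equivalent_reference_desktops(vcpus, memory_gb, iops):
    """A desktop counts as the ceiling of its worst-case resource ratio."""
    return max(ceil(vcpus / REFERENCE["vcpus"]),
               ceil(memory_gb / REFERENCE["memory_gb"]),
               ceil(iops / REFERENCE["iops"]))

def total_reference_desktops(user_types):
    """Sum (equivalents x number of users) across all worksheet rows."""
    return sum(equivalent_reference_desktops(v, m, i) * users
               for (v, m, i, users) in user_types)

# Two hypothetical user types: (vCPUs, memory GB, IOPS, no. of users).
print(total_reference_desktops([(2, 4, 12, 100), (1, 2, 8, 400)]))
```

Taking the worst-case ratio per desktop ensures the sizing covers whichever resource (CPU, memory, or IOPS) that user type stresses most.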

To view and print the worksheet:

1. In Adobe Reader, open the Attachments panel as follows: select View > Show/Hide > Navigation Panes > Attachments, or click the Attachments icon, as shown in Figure 20.

Figure 20. Printable customer sizing worksheet

2. Under Attachments, double-click the attached file to open and print the worksheet.


More information

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX White Paper MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX Abstract This white paper highlights EMC s Hyper-V scalability test in which one of the largest Hyper-V environments in the world was created.

More information

Flash Storage Optimizing Virtual Desktop Deployments

Flash Storage Optimizing Virtual Desktop Deployments Flash Storage Optimizing Virtual Desktop Deployments Ashok Rajagopalan UCS Product Management May 2014 In Collaboration with Intel Old Fashioned VDI (circa 2012) was Financially Unattractive to Most Average

More information

EMC VNX FAMILY. Copyright 2011 EMC Corporation. All rights reserved.

EMC VNX FAMILY. Copyright 2011 EMC Corporation. All rights reserved. EMC VNX FAMILY 1 IT Challenges: Tougher than Ever Four central themes facing every decision maker today Overcome flat budgets Manage escalating complexity Cope with relentless data growth Meet increased

More information

SharePoint Microsoft SharePoint has become

SharePoint Microsoft SharePoint has become The Essential Guide to SharePoint S p o n s o r e d b y Microsoft SharePoint has become a mission-critical platform for sharing information and delivering improved collaboration to organizations of all

More information

Kaminario K2 All-Flash Array

Kaminario K2 All-Flash Array Kaminario K2 All-Flash Array The Kaminario K2 all-flash storage array delivers predictable performance, cost, scale, resiliency and simplicity so organizations can handle ever-changing and unforeseen business

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by EMC NetWorker Module for Microsoft SQL Server Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the

More information

Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture. Dell Compellent Product Specialist Team

Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture. Dell Compellent Product Specialist Team Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture Dell Compellent Product Specialist Team THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL

More information

EMC SOLUTION FOR SPLUNK

EMC SOLUTION FOR SPLUNK EMC SOLUTION FOR SPLUNK Splunk validation using all-flash EMC XtremIO and EMC Isilon scale-out NAS ABSTRACT This white paper provides details on the validation of functionality and performance of Splunk

More information

Turbo Charge Your Data Protection Strategy

Turbo Charge Your Data Protection Strategy Turbo Charge Your Data Protection Strategy Data protection for the hybrid cloud 1 WAVES OF CHANGE! Data GROWTH User EXPECTATIONS Do It YOURSELF Can t Keep Up Reliability and Visibility New Choices and

More information

MS Exchange Server Acceleration

MS Exchange Server Acceleration White Paper MS Exchange Server Acceleration Using virtualization to dramatically maximize user experience for Microsoft Exchange Server Allon Cohen, PhD Scott Harlin OCZ Storage Solutions, Inc. A Toshiba

More information

How To Get A Storage And Data Protection Solution For Virtualization

How To Get A Storage And Data Protection Solution For Virtualization Smart Storage and Modern Data Protection Built for Virtualization Dot Hill Storage Arrays and Veeam Backup & Replication Software offer the winning combination. Veeam and Dot Hill Solutions Introduction

More information

EMC Virtual Infrastructure for Microsoft SQL Server

EMC Virtual Infrastructure for Microsoft SQL Server Microsoft SQL Server Enabled by EMC Celerra and Microsoft Hyper-V Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication is accurate

More information

EMC INTEGRATION FOR MICROSOFT PRIVATE CLOUD USING EMC VNX UNIFIED STORAGE

EMC INTEGRATION FOR MICROSOFT PRIVATE CLOUD USING EMC VNX UNIFIED STORAGE White Paper EMC INTEGRATION FOR MICROSOFT PRIVATE CLOUD USING EMC VNX UNIFIED STORAGE EMC Next-Generation VNX, EMC Storage Integrator for Windows Suite, Microsoft System Center 2012 SP1 Reduce storage

More information

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION A DIABLO WHITE PAPER AUGUST 2014 Ricky Trigalo Director of Business Development Virtualization, Diablo Technologies

More information

High Availability with Windows Server 2012 Release Candidate

High Availability with Windows Server 2012 Release Candidate High Availability with Windows Server 2012 Release Candidate Windows Server 2012 Release Candidate (RC) delivers innovative new capabilities that enable you to build dynamic storage and availability solutions

More information

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 11g OLTP

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 11g OLTP IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 11g OLTP EMC VSPEX Abstract This describes the high-level steps and best practices required to implement the EMC VSPEX Proven Infrastructure

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

Big data management with IBM General Parallel File System

Big data management with IBM General Parallel File System Big data management with IBM General Parallel File System Optimize storage management and boost your return on investment Highlights Handles the explosive growth of structured and unstructured data Offers

More information

Leveraging EMC Fully Automated Storage Tiering (FAST) and FAST Cache for SQL Server Enterprise Deployments

Leveraging EMC Fully Automated Storage Tiering (FAST) and FAST Cache for SQL Server Enterprise Deployments Leveraging EMC Fully Automated Storage Tiering (FAST) and FAST Cache for SQL Server Enterprise Deployments Applied Technology Abstract This white paper introduces EMC s latest groundbreaking technologies,

More information

Building the Virtual Information Infrastructure

Building the Virtual Information Infrastructure Technology Concepts and Business Considerations Abstract A virtual information infrastructure allows organizations to make the most of their data center environment by sharing computing, network, and storage

More information

Luxembourg June 3 2014

Luxembourg June 3 2014 Luxembourg June 3 2014 Said BOUKHIZOU Technical Manager m +33 680 647 866 sboukhizou@datacore.com SOFTWARE-DEFINED STORAGE IN ACTION What s new in SANsymphony-V 10 2 Storage Market in Midst of Disruption

More information

IBM System Storage DS5020 Express

IBM System Storage DS5020 Express IBM DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage Highlights Mixed host interfaces support (Fibre Channel/iSCSI) enables SAN tiering Balanced performance well-suited

More information

Maxta Storage Platform Enterprise Storage Re-defined

Maxta Storage Platform Enterprise Storage Re-defined Maxta Storage Platform Enterprise Storage Re-defined WHITE PAPER Software-Defined Data Center The Software-Defined Data Center (SDDC) is a unified data center platform that delivers converged computing,

More information

How To Create A Server Virtualization Solution For A Large-Scale Data Center

How To Create A Server Virtualization Solution For A Large-Scale Data Center SERVER VIRTUALIZATION WITH EMC XTREMIO ALL-FLASH ARRAY AND VMWARE VSPHERE 5.5 EMC Solutions Abstract This white paper highlights the performance and operational advantages of server virtualization based

More information

Amazon Cloud Storage Options

Amazon Cloud Storage Options Amazon Cloud Storage Options Table of Contents 1. Overview of AWS Storage Options 02 2. Why you should use the AWS Storage 02 3. How to get Data into the AWS.03 4. Types of AWS Storage Options.03 5. Object

More information

High Performance Server SAN using Micron M500DC SSDs and Sanbolic Software

High Performance Server SAN using Micron M500DC SSDs and Sanbolic Software High Performance Server SAN using Micron M500DC SSDs and Sanbolic Software White Paper Overview The Micron M500DC SSD was designed after months of close work with major data center service providers and

More information

OPTIMIZING EXCHANGE SERVER IN A TIERED STORAGE ENVIRONMENT WHITE PAPER NOVEMBER 2006

OPTIMIZING EXCHANGE SERVER IN A TIERED STORAGE ENVIRONMENT WHITE PAPER NOVEMBER 2006 OPTIMIZING EXCHANGE SERVER IN A TIERED STORAGE ENVIRONMENT WHITE PAPER NOVEMBER 2006 EXECUTIVE SUMMARY Microsoft Exchange Server is a disk-intensive application that requires high speed storage to deliver

More information

What s New with VMware Virtual Infrastructure

What s New with VMware Virtual Infrastructure What s New with VMware Virtual Infrastructure Virtualization: Industry-Standard Way of Computing Early Adoption Mainstreaming Standardization Test & Development Server Consolidation Infrastructure Management

More information