Citrix XenDesktop 7.5 on Microsoft Hyper-V Server 2012 R2 with FlexPod Express. Solution Design


Citrix XenDesktop 7.5 on Microsoft Hyper-V Server 2012 R2 with FlexPod Express
Solution Design
Citrix Validated Solutions
December 11th, 2014
Prepared by: Citrix APAC Solutions

TABLE OF CONTENTS

Section 1: Executive Summary
  Project Overview
    Reference Architecture
    Purpose
    Audience
  Architecture Overview
    Citrix Virtual Desktop Types
    The Pod Concept
  Justification and Validation
  Citrix Validated Solution Overview
  Solution at a Glance
  Citrix Layered Architecture
  Design Recommendations
  Logical Architecture Overview
  Scale Out Guidance for HSD
  Scale Out Guidance for HVD

Section 2: Design
  User Layer Design
    User Topology
    Endpoints
  Access Layer Design
    StoreFront Configuration
  Desktop Layer Design
    User Personalisation
    Applications
    Master Image
  Control Layer Design
    Infrastructure
    Delivery Controllers (XenDesktop)
    Access Controllers (StoreFront)
    Hypervisor
      Hyper-V Overview
      HSD Hyper-V Host
      HVD Hyper-V Host
      Hyper-V Hardware Details
      Hyper-V General Details
      Hyper-V Network Details
    System Center Virtual Machine Manager
      VMM General Details
      VMM Network Details
      VMM Guest Virtual Machine Details
    Storage
      Technology Overview
      NetApp FAS Technology Overview
      Clustered Data ONTAP
      Multiprotocol Unified Architecture
      Logical Interface (LIF) Overview
      Flash Pool Overview
      Windows Server 2012 Hyper-V Integration
      Storage Design
      NetApp FAS2552 Hybrid Storage Array Architecture
      Aggregate Design
      Storage Virtual Machine Design
      Flexible Volume (FlexVol) Design
      Copy Offload (ODX) Settings
      File Share Design
      Logical Interface (LIF) Design
      IP Fast Path
      Gigabit Ethernet
      Windows Server 2012 Hyper-V Integration
    Network
      Overview
      Network Components
      VLAN Information
      DHCP
  Hardware Layer Design
    Physical Architecture Overview
    Physical Component Overview
    Server Hardware
    Storage Hardware
    Bill of Materials - Hosted Shared Desktops
    Bill of Materials - Hosted Virtual Desktops
    Bill of Materials - Storage

Section 3: Appendices
  Appendix A. Further Decision Points
  Appendix B. Server Inventory
    HSD Servers (Support up to 700 x User Desktop Sessions)
    HVD Servers (Support up to 700 x Win 7 Virtual Desktops)
  Appendix C. Windows 8.1 Hosted Virtual Desktops
    Overview
    Pod of 500 Windows 8.1 HVD Users
    Server Inventory
  Appendix D. Network Switch Requirements
    Switch Requirements
    Network Port Densities
  Appendix E. IP Addressing
    Hyper-V Hosts
    NetApp FAS2552
    Control Layer Guest VMs
    Sample HSD DHCP Scope
    Sample HVD DHCP Scopes
  Appendix F. Service Accounts & Groups
    Role Groups
    Service Accounts
  Appendix G. XenDesktop Policies
    Test Environment Policy Settings
  Appendix H. Cisco C240 M3 SFF Server BIOS Settings
    Processor
    Memory
  Appendix I. Storage Calculations
    NetApp Sizing Guidance
    Infrastructure Share
    HSD Pooled - Windows Server 2008 R2 or 2012 R2
    HVD Pooled - Windows 7
    HVD Pooled - Windows 8.1
    HVD - Persistent Desktops
    File Sharing and User Data
  Appendix J. Test Results
    Validation
    End User Experience Monitoring
  Appendix K. References
    Citrix
    Cisco
    NetApp
  Revision History

SECTION 1: EXECUTIVE SUMMARY

Project Overview

Reference Architecture

To facilitate rapid and successful deployment of the Citrix XenDesktop FlexCast models, Citrix Consulting APAC has built and tested a solution using the components described in this document. The Citrix Validated Solution ("CVS") provides prescriptive guidance for these components, including design, configuration and deployment settings, thereby allowing customers to quickly deploy a desktop virtualisation solution using Citrix XenDesktop. Validation was performed by extensive testing using Login VSI to simulate real-world workloads and determine the optimal configuration for the integration of the components that make up the overall solution.

Purpose

The purpose of this document is to describe the architecture of this Citrix Validated Solution, which is based on the Citrix Hosted Shared Desktop (HSD) and Citrix Hosted Virtual Desktop (HVD) FlexCast models. The solution is built on FlexPod Express, a converged infrastructure featuring Cisco C240 M3 High-Density Rack servers and a NetApp FAS2552 Hybrid Storage Array. Microsoft Hyper-V Server 2012 R2 is the hypervisor utilised to support the virtualised environment.

Audience

This reference architecture document was created as part of a Citrix Validated Solution and describes the detailed architecture and configuration of the components contained within. Readers should be familiar with Citrix XenDesktop and its related technologies, as well as the foundational components: Cisco C240 M3 High-Density Rack servers, the NetApp FAS2552 Hybrid Storage Array, networking components and Microsoft Hyper-V Server 2012 R2.

Architecture Overview

This Citrix Validated Solution and its components were designed, built and validated to support two distinct Citrix virtual desktop types. The architecture for each desktop type is described to support up to 700 user desktop sessions, or a single pod:

- Hosted Shared Desktops. Shared user sessions running XenDesktop Hosted Shared Desktops on Windows Server 2008 R2 or Server 2012 R2 Remote Desktop Session Hosts.
- Hosted Virtual Desktops. Individual user sessions running XenDesktop Hosted Virtual Desktops on Windows 7 Enterprise x64 or Windows 8.1 Enterprise x64.

Each of these desktop types is described in the Citrix FlexCast model, operating as virtual machine instances on Microsoft Hyper-V Server 2012 R2. This architecture is a single, self-supporting modular component identified as a pod, described to support up to 700 user sessions, allowing customers to consistently build and deploy scalable environments. Additional pods may be deployed, thus scaling the proposed architecture out beyond 700 seats.

Citrix Virtual Desktop Types

This Citrix Validated Solution document references Citrix Hosted Shared Desktops and Citrix Hosted Virtual Desktops. Both types of virtual desktop are discussed below for reference. For more information, refer to Citrix FlexCast delivery methods.

- Hosted Shared Desktop (HSD). A Windows Remote Desktop Services (RDS) session host using Citrix XenDesktop to deliver Hosted Shared Desktops in a locked-down, streamlined and standardised manner with a core set of applications. Using a desktop published on the Remote Desktop Session Host, users are presented a desktop interface similar to a standard Windows desktop operating system look and feel. Each user runs in a separate session on the RDS server.
- Hosted Virtual Desktop (HVD), also known as Hosted VDI. A Windows desktop operating system instance running as a virtual machine, where a single user connects to the machine remotely: a 1:1 relationship of one user to one desktop. There are differing types of the hosted virtual desktop model (existing, installed, pooled, dedicated and streamed). This document refers to both the pooled and dedicated (persistent) types of HVD, and discusses the delivery of persistent and non-persistent (stateless) desktop types.

Throughout this document, nomenclature may reference the FlexCast model as <FlexCast model>, which should be substituted with either HSD or HVD as appropriate to the design under consideration.

The Pod Concept

The term pod is referenced throughout this solution design. In the context of the architecture described in this document, a pod is a known entity: an architecture that has been pre-tested and validated. A pod consists of the hardware and software components required to deliver 700 virtual desktops using either the HSD (Server 2008 R2, Server 2012 R2) or HVD (Windows 7, Windows 8.1) FlexCast model 1. For clarity, this document does not attempt to describe combining both FlexCast models; it specifically discusses each type as a single entity of up to 700 desktops.

1 Windows 7, Server 2008 R2 and Server 2012 R2 have been used as the primary operating systems to deliver the pod of 700 desktops within this architecture. For Windows 8.1 HVD details, refer to the Appendix.
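The pod concept above scales by whole-pod increments: capacity grows by deploying additional, identical 700-seat pods. A minimal sketch of that arithmetic (the function name is illustrative, not from the document):

```python
import math

POD_CAPACITY = 700  # validated user sessions per pod (HSD or HVD)

def pods_required(total_users: int) -> int:
    """Whole pods needed to host the given user population."""
    return math.ceil(total_users / POD_CAPACITY)

print(pods_required(700))   # a single pod exactly
print(pods_required(1500))  # scales out to three pods
```

Because each pod is self-contained (its own compute, storage and desktop capacity), the ceiling division models the deployment unit directly: a 701st user requires a second pod.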

Justification and Validation

The construct of this Citrix Validated Solution is based on many decisions that were made during validation testing. Testing was carried out using the Login Virtual Session Indexer (Login VSI), an industry-standard tool for user/session benchmarking. Login VSI allows comparisons of platforms and technologies under the same repeatable load. The Medium VSI workload approximates the average office worker during normal activities and was the workload used throughout testing. The logon storm was used to measure the maximum session densities described in this document. The duration of the logon storm was set to one hour for all tests; after the logon storm event (i.e. at steady state), the system under test was observed to be at moderate load only.

Note: all workloads were tested using the XenDesktop template policy High Server Scalability running in Legacy Graphics mode; the Bill of Materials described for each FlexCast model within this document is therefore based on the user densities achieved with these policy settings in place. Using these Citrix policies allows the greatest host density for each FlexCast model.

In conjunction with Login VSI, the end user experience during test load scenarios was further validated using Liquidware Labs Stratusphere UX. Stratusphere UX is a comprehensive set of monitoring, performance validation and diagnostics tools.

Citrix Validated Solution Overview

The illustration below depicts the layers of the Citrix XenDesktop Hosted Shared Desktop technology stack utilised in the solution.

Figure 1. Citrix Validated Solution Stack depicting HSD Workloads

The illustration below depicts the layers of the Citrix XenDesktop Hosted Virtual Desktop technology stack utilised in the solution.

Figure 2. Citrix Validated Solution Stack depicting HVD Workloads

The illustration below depicts the combined physical and logical view of the scale-out architecture for the HSD platform using the Cisco C240 M3 SFF servers.

Figure 3. Logical View of the HSD Solution with Server 2008 R2 or Server 2012 R2 (up to 700 desktops)

The illustration below depicts the combined physical and logical view of the scale-out architecture for the Windows 7 HVD platform using the Cisco C240 M3 SFF servers.

Figure 4. Logical View of the HVD Solution with Windows 7 (up to 700 desktops)

Architecture Components

- Citrix XenDesktop. Two virtualised Delivery Controller servers will be deployed to support the XenDesktop Site. A single XenDesktop Site will be utilised to manage the initial desktop pod.
- Virtual Desktops. This solution focuses on the delivery of two discrete virtual desktop types:
  - Hosted Virtual Desktops (HVD). The delivery of a single pod of pooled or persistent Windows 7 or Windows 8.1 virtual desktops powered by Citrix XenDesktop 7.5.
  - Hosted Shared Desktops (HSD). The delivery of a single pod of shared virtual desktops based on Microsoft Windows Server 2008 R2 or Server 2012 R2 Remote Desktop Session Host workloads powered by Citrix XenDesktop 7.5.
- Microsoft Hyper-V Server 2012 R2 (Hyper-V). The hypervisor selected to host the virtualised desktop and server instances for this solution. Hyper-V will be deployed onto the Cisco C240 M3 SFF servers.
- Virtual Desktop Provisioning. This document describes the use of Citrix Machine Creation Services ("MCS") for the provisioning of HSD and HVD guest workloads using a predefined master image containing the optimised operating system and Tier-1 application set.
- Applications. Tier-2 2 applications, which may include line-of-business or customer-specific applications that are not embedded in the master disk image, may be delivered using Citrix XenDesktop or Microsoft App-V 3.
- Citrix StoreFront. Virtualised StoreFront servers will be deployed to provide application and desktop resource enumeration.
- Citrix Performance Management. Citrix Director and Citrix EdgeSight will provide monitoring capabilities into the virtual desktops and user sessions.
- FlexPod Express Converged Infrastructure Platform. FlexPod Express provides a pre-tested, low-cost converged infrastructure solution that is integrated and delivered by an ecosystem of joint channel partners. The converged solution includes the following components:
  - Cisco UCS C-Series Rack Servers. A high-density compute platform based on a rack server form factor. To provide VM and infrastructure redundancy, a minimum of two Cisco servers running Hyper-V 2012 R2 is required to instantiate the environment.
  - Cisco Nexus Switches. As per the test environment, a pair of Cisco Nexus 3048TP 1GbE Top of Rack (ToR) switches has been used; however, existing 1GbE network switch infrastructure can be leveraged to minimise hardware acquisition cost.
  - NetApp Storage. A NetApp FAS2552 Hybrid Storage Array will be utilised with dual controllers to present SMB 3.0 storage to the Hyper-V hosts. The NetApp FAS2552 Hybrid Storage Array will also provide file shares for the user profile solution, the SCVMM Library, and a file share witness for the Hyper-V failover cluster.

2 The solution design for Tier-2 applications delivered by Citrix XenDesktop or Citrix XenApp is out of scope for this document.
3 The solution design of Microsoft App-V components is out of scope for this document.

The NetApp FAS2552 Hybrid Storage Array will contain a combination of SSD and SAS disks for dynamic workload acceleration.

- Supporting Infrastructure. The following components are assumed to exist within the customer environment and are required infrastructure components:
  - Microsoft Active Directory Domain Services.
  - A suitable Microsoft SQL database platform to support the solution database requirements 4.
  - Licensing servers to provide Microsoft licenses.
  - DHCP services with sufficient IP addresses to support the proposed virtual desktop workloads. This can be provisioned as part of the solution using the Windows Server 2012 R2 DHCP role.

This design document focuses on the desktop virtualisation components, which include the desktop workload, desktop delivery mechanism, hypervisor, hardware, network and storage platforms.

4 This document provides sample sizing guidelines and the licensing requirements for the databases used in this Citrix Validated Solution; however, it does not attempt to provide design guidance for Microsoft SQL Server. The design and implementation of a highly available Microsoft SQL Server platform is required, although considered out of scope for this document.

Solution at a Glance

This section defines the key decision points and options offered by this Citrix Validated Solution. The subsequent sections within this document provide the detailed configuration of each element.

Key Solution Requirements:
- Resilient infrastructure to deliver HSD and HVD desktops (pooled and persistent)
- Low-cost entry point platform utilising Cisco and NetApp hardware components, suitable for up to 700 seats
- Scalable solution up to 700 seats per pod, with scale-out to 1,000s of seats via additional pods

Minimum Infrastructure Requirement:
- 1 x NetApp FAS2552 Hybrid Storage Array
- 2 x Cisco C240 M3 servers running Hyper-V
- 1GbE network

Scalability:
- Minimum 1 x NetApp FAS2552 Hybrid Storage Array for a maximum of 700 desktops

XenDesktop:
- XenDesktop 7.5 Enterprise
- Minimum number of nodes is two (2) Cisco C240 M3 servers, supporting up to:
  - 300 HSDs and the supporting XenDesktop infrastructure components, or
  - 180 Windows 8.1 or 200 Windows 7 HVDs and the supporting XenDesktop infrastructure components
- Machine Creation Services workload delivery
- Highly scalable and redundant Delivery Controller servers: vertical scalability by increasing CPU/RAM resources, or horizontal scalability by adding Delivery Controllers

Desktop Types:
- Hosted Shared Desktops (HSD) on Windows Server 2008 R2 or Windows Server 2012 R2 Standard editions (for simplicity, assume similar scalability)
  - 8 vCPUs, 18GB RAM, 100GB disk, 1 vNIC
  - Horizontal scalability by deploying more VMs onto available hosts
  - Redundancy by overprovisioning desktop capacity
- Hosted Virtual Desktops (HVD) on Windows 7 Enterprise SP1 x64 or Windows 8.1 Enterprise x64 (pooled or persistent)
  - 2 vCPUs, 2.5GB RAM, 100GB disk, 1 vNIC
  - Horizontal scalability by deploying more VMs onto available hosts
  - Redundancy by overprovisioning desktop capacity
  - VM high availability for persistent desktops

Hypervisor:
- Microsoft Hyper-V Server 2012 R2
- Clustered server deployment managed via System Center Virtual Machine Manager (SCVMM)
- Vertical scalability by increasing CPU/RAM resources, or horizontal scalability by deploying additional server nodes

Compute and Hardware:
- Cisco C240 M3 SFF server nodes
- Dual-socket Intel 10-core CPUs
- 128GB RAM for HSD workloads; 256GB RAM for HVD workloads

Compute and Hardware (continued):
- 300GB volume for Hyper-V / OS
- Redundant network interfaces for host management, VM guest and storage networks

Storage:
- NetApp FAS2552 Hybrid Storage Array
- Clustered Data ONTAP storage operating system
- Dual-controller, two-node switchless cluster in a single 2U chassis
- 4 x 200GB SSD, 20 x 600GB 10K SAS
- Configured with 1GbE network interfaces

Networking and Related Hardware:
- DNS round robin will be utilised to load balance the StoreFront servers
- Customers can leverage existing load balancer hardware, or alternatively deploy a pair of Citrix NetScaler appliances in High Availability to provide both load balancing and remote-access capability (recommended)
- Customers can leverage existing 1GbE network switches to integrate the physical host servers and the NetApp FAS2552 Hybrid Storage Array into their environment. Alternatively, a pair of Cisco Nexus 3048 or Catalyst switches can be procured. Refer to the Appendix for network requirements.

File Services:
- CIFS/SMB file services are presented by a Storage Virtual Machine (SVM) hosted on the NetApp FAS2552 Hybrid Storage Array
- Support for SMB 3.0 functions: continuously available shares 5, persistent handles, remote VSS, witness, ODX copy offload and Microsoft BranchCache
- User profile data only, VMM Library share and cluster file share witness
- Up to 1.0GB of profile data per user, for a maximum of 700 users
- Additional storage requirements such as user home directories and group share drives will require additional shared storage, which is not covered as part of this solution. The NetApp FAS2552 Hybrid Storage Array used in this solution could be expanded to cater for the additional requirements. See the Storage section for more information.

Applications:
- Baseline applications installed as per the SOE (Tier-1)
- Integration and deployment of Line of Business (LoB) or customer-specific applications (Tier-2) would need to be catered for; additional services and infrastructure may be required

Access:
- Redundant StoreFront servers with DNS round robin for simplicity and low cost; recommendation to leverage Citrix NetScaler HA appliances as the environment is scaled out
- Additional load balancing capability can be added via Citrix NetScaler appliances
- Vertical scalability of StoreFront servers by increasing CPU/RAM resources
- A remote access solution, i.e. in the form of Citrix NetScaler or other, is out of scope and would need to be factored in if required

5 Supported for and enabled on the Hyper-V file shares only, not on UPM file shares.

Availability/Redundancy:
- Assumes a single data centre (single physical location) only
- Delivery Controllers: redundant servers (N+1, VMs placed on different hosts)
- Hyper-V hosts: Microsoft Failover Cluster; overprovision by having N+1 hosts. File share witness located on the highly available FAS2552. Hyper-V NICs in active/active NIC teaming. VM high availability.
- VM data: continuously available SMB 3.0 shares presented by the redundant NetApp FAS2552 Hybrid Storage Array controllers
- StoreFront servers: redundant servers (N+1, VMs placed on different hosts). DNS round robin configured, which can be further improved by integrating Citrix NetScaler.
- SQL 2012 DB servers: redundant servers (N+1, VMs placed on different hosts); XenDesktop databases use database mirroring in an active/passive setup
- SCVMM server and database: none; stand-alone server setup. Minimal impact to the XenDesktop environment if the SCVMM server is unavailable; only the power functions of the VMs are affected:
  - All VMs that are running will continue to run; any connected user will notice no service disruption
  - Any user who tries to connect to a session will succeed
  - Power functions can still be managed manually from the local console if needed
- CIFS/SMB file services: provided by the NetApp FAS2552 Hybrid Storage Array in a dual-controller HA configuration
- Witness and continuously available SMB 3.0 file shares presented by the NetApp FAS2552 Hybrid Storage Array to the Microsoft Hyper-V hypervisor enable transparent failover of virtual machines
- Windows DHCP services: redundant servers (N+1, VMs placed on different hosts)
- Citrix License Server: stand-alone; built-in 30-day grace period
- Local storage: RAID-1 (mirror) for the Hyper-V boot volume

Table 1. Solution at a glance
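As a rough cross-check of the per-VM and per-host RAM figures in Table 1 (18GB HSD VMs on 128GB nodes; 2.5GB HVD VMs on 256GB nodes), the RAM-bound guest density of a node can be estimated. This is a sketch only: the 8GB parent-partition reserve is an assumption, not a figure from this document.

```python
def max_vms_by_ram(host_ram_gb: float, vm_ram_gb: float, reserve_gb: float = 8.0) -> int:
    """RAM-bound guest count for one Hyper-V node, holding back a parent-partition reserve."""
    return int((host_ram_gb - reserve_gb) // vm_ram_gb)

# HSD: 18GB VMs on a 128GB node; HVD: 2.5GB VMs on a 256GB node.
print(max_vms_by_ram(128, 18))
print(max_vms_by_ram(256, 2.5))
```

In practice CPU, storage IOPS and the validated Login VSI densities govern final sizing; the scenario tables later in this section reflect the tested figures rather than a pure RAM calculation.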

Citrix Layered Architecture

The Citrix Validated Solution architecture breaks the design into a number of distinct layers, discussed below:

- User Layer 6. This layer details the user segments defined during the project's assess phase. Users are grouped based on their network connectivity to the data centre, recommended endpoint devices, security requirements, data storage needs and virtual workforce needs.
- Access Layer. This layer describes how the user layer will connect to their desktop, which is hosted in the desktop layer of the architecture. Local users will connect directly to StoreFront, while remote users connect via a set of firewalls that protect the internal environment. To bridge the firewalls, remote users may connect with an SSL VPN device (Citrix Access Gateway).
- Desktop Layer. This layer contains the user's virtual desktop, broken down into FlexCast models. It is subdivided into three components; within each sub-layer, specifics are documented detailing the operating system, assigned policies, profile design and application requirements:
  - User Personalisation
  - Applications
  - Master Image
- Control Layer. This layer is responsible for managing and maintaining all other layers. It provides details on the controller requirements to support the entire solution. The Control Layer is broken down into the following sub-sections:
  - Infrastructure. Responsible for providing the underlying resources to support each component. These resources include Active Directory, database requirements and license servers.
  - Desktop Controllers. Provides details on the components required to support the desktop layer, which include XenDesktop.
  - Access Controllers. Focuses on the required versions and virtualisation resources.
  - Hypervisor. Describes the configuration for Microsoft Hyper-V Server 2012 R2. Hyper-V is a Type 1 hypervisor that runs directly on the hardware resources described in the Hardware Layer.
  - Storage. Describes the logical and physical entities as they relate to the proposed NetApp storage architecture.
  - Network. Defines the physical network switching and logical connectivity requirements to support the solution.
- Hardware Layer. This layer is responsible for the physical devices required to support the entire solution. It includes servers, processors, memory and storage devices. This layer is broken down into physical and logical components and provides the Bill of Materials (BoM) to deploy the entire solution.

6 User assessment in the context of this document is for reference only. User definition and segmentation for VDI desktop types is out of scope for this document.

The illustration below describes the conceptual architecture:

Figure 5. Architecture Conceptual View

The illustration below describes the distinct layers of the architecture:

Figure 6. Architecture Layered View

Design Recommendations

The following assumptions have been made:
- Required Citrix and Microsoft licenses and agreements are available.
- Required power, cooling, rack and data centre space is available.
- There are no network constraints that would prevent the successful deployment of this design.
- Microsoft Windows Active Directory Domain Services are available.
- A Microsoft SQL database platform is available.
- Certificate and/or PKI services are assumed to exist, or external services may be used.
- A current and supported version of Citrix Receiver must be deployed to ensure all features and components of the solution are at a supported level; refer to the Citrix Receiver downloads page for the latest version.
- The User Layer in the context of this document is for reference only. User analysis, definition and segmentation for the use of VDI desktop types is out of scope for this document.

Logical Architecture Overview

This section discusses the logical architecture and concepts for the remainder of this document. From an architectural perspective, Hyper-V will be deployed onto the aforementioned hardware (Hardware Layer), with the infrastructure servers (Control Layer) and virtual desktops (Desktop Layer) deployed as Hyper-V virtual machine instances that reside on SMB 3.0 storage presented from the NetApp FAS2552 Hybrid Storage Array. Storage will be presented to the Hyper-V failover cluster hosts via an isolated, non-routable VLAN, ensuring SMB 3.0 traffic utilises the specified network.

From a physical hardware perspective, each server node will be configured identically as per the recommended Cisco C240 Bill of Materials. The NetApp FAS2552 Hybrid Storage Array dictates the actual pod size as defined by its physical capabilities (additional pods can be deployed).

From a logical perspective, the hosts for each desktop type can be defined as follows:
- A minimum of two server nodes and the NetApp FAS2552 Hybrid Storage Array is required to establish the foundation of a Citrix XenDesktop environment, complete with the necessary infrastructure server guest VMs deployed in a redundant fashion. The first two servers, Node 1 and Node 2, will be referred to as Shared Infrastructure & Desktop Nodes, as they host both infrastructure server VMs and desktop VM workloads.
- The platform can be scaled out to support additional capacity by simply adding subsequent physical servers, up to the maximum defined by the NetApp FAS2552 Hybrid Storage Array for each workload; these are referred to as Desktop Nodes. Each Desktop Node will only support guest HSD or HVD workloads.

The following illustrations describe the architecture as it relates to the pod definitions.

Pod of 700 HSD Users

The logical and physical components that make up the platform to deliver a 700-user Hosted Shared Desktop solution are described below:

Figure 7. VM Allocation for HSD

Component / Qty:
- Citrix XenDesktop Enterprise Users: up to 700
- XenDesktop Sites: 1
- XenDesktop Delivery Controllers: 2
- StoreFront Servers: 2
- Citrix/Microsoft License Server 7: 1
- MS SCVMM Servers: 1
- Storage Management Server: 1
- SQL 2012 Standard Servers (DB mirror in active/passive) 8: 2
- Cisco C240 Server Nodes running MS Hyper-V 2012 R2: 5
- NetApp FAS2552 Storage: 1
- XenApp RDS (HSD) Windows Server VMs: 24

Table 2. 700-User HSD Pod Detail

7 Optional. License services can be deployed onto existing servers to conserve resources.
8 Optional. An existing SQL environment can also be leveraged to provide database capability to conserve resources.
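A quick tally of the guest VM footprint implied by the HSD pod table above (the optional license, SQL and storage management servers are counted as deployed; DHCP VMs are excluded, as they may already exist in the environment):

```python
# Component counts taken from the 700-user HSD pod table above.
HSD_POD_VMS = {
    "delivery_controllers": 2,
    "storefront_servers": 2,
    "license_server": 1,
    "scvmm_server": 1,
    "storage_management_server": 1,
    "sql_servers": 2,
    "hsd_session_hosts": 24,
}

infra_vms = sum(n for name, n in HSD_POD_VMS.items() if name != "hsd_session_hosts")
total_vms = infra_vms + HSD_POD_VMS["hsd_session_hosts"]
print(infra_vms, total_vms)  # 9 infrastructure VMs, 33 guest VMs in total
```

This is the VM population the five Hyper-V nodes and the SMB 3.0 shares must carry at full pod scale.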

Pod of 700 Windows 7 HVD Users

The logical and physical components that make up the platform to deliver a 700-user Hosted Virtual Desktop solution (Windows 7) are described below:

Figure 8. VM Allocation for HVD on Windows 7

Component / Qty:
- Citrix XenDesktop Enterprise Users: up to 700
- XenDesktop Sites: 1
- XenDesktop Delivery Controllers: 2
- StoreFront Servers: 2
- Citrix/Microsoft License Server 9: 1
- MS SCVMM Servers: 1
- Storage Management Server: 1
- SQL 2012 Standard Servers (DB mirror in active/passive) 10: 2
- Cisco C240 Server Nodes running MS Hyper-V 2012 R2: 6
- NetApp FAS2552 Storage: 1
- Windows 7 Enterprise HVDs (virtual desktops): 700

Table 3. 700-User HVD on Windows 7 Pod Detail

9 Optional. License services can be deployed onto existing servers to conserve resources.
10 Optional. An existing SQL environment can also be leveraged to provide database capability to conserve resources.
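Given the six Hyper-V nodes in the HVD pod table above and the N+1 host overprovisioning called out in the availability section, the steady-state desktop load per active node can be sketched. Applying N+1 to this particular table is an interpretation, not something the document states explicitly:

```python
import math

def desktops_per_active_node(desktops: int, nodes: int, spare: int = 1) -> int:
    """Average desktops each active node carries when spare host capacity is held back."""
    return math.ceil(desktops / (nodes - spare))

print(desktops_per_active_node(700, 6))  # Windows 7 HVD pod: 140 per active node
```

The spare node absorbs the displaced desktops when any single host fails or is drained for maintenance.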

Scale Out Guidance for HSD

This section outlines the sizing metrics applicable to the NetApp FAS2552 Hybrid Storage Array, Cisco C240 server nodes, network switch ports, Hyper-V hosts, infrastructure server VMs and the required Citrix and Microsoft licenses 11 to stand up the HSD solution, based on the suggested scale-out increment. The solution can be scaled out incrementally by adding single server nodes; however, the scenarios in this section depict the addition of two server nodes at a time for demonstration purposes.

Notes on the Microsoft licensing used in the samples below 12:
- MS Core Infrastructure Suite (CIS) Standard. MS CIS includes System Center 2012 R2 Standard and licenses for 2 x Windows Server 2012 Standard VMs (Operating System Environments). Refer to Microsoft's licensing documentation.
- MS SQL Server 2012 Standard Server 13. Assumes SQL Server is licensed as a 2-vCPU (v-cores) virtual machine with MS Software Assurance. A SQL Server license requires a minimum of 4 core licenses. An active/passive SQL Server deployment means no additional licenses are required for the secondary passive SQL Server. Refer to Microsoft's licensing documentation.

In the context of this document, full or maximum scale-out load is described while running the HSD desktop types under test load to their maximum densities, that is:
- ~700 Server 2008 R2 RDS hosted shared desktops, or
- ~700 Server 2012 R2 RDS hosted shared desktops

At this point the NetApp FAS2552 Hybrid Storage Array is stated to be at full load during the logon storm test of the desktop type. Note: the logon storm is one of the most aggressive components of the testing in terms of resource consumption.

11 Each customer will have different Citrix and Microsoft license agreements, which should be factored into the final configuration.
12 Actual customer licensing requirements may differ based on their situation, agreements or other factors.
13 Optional. An existing SQL environment can also be leveraged to provide database capability to conserve resources.
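The 1GbE port counts in the scenario tables that follow grow linearly with node count: four host-facing ports per Hyper-V node plus six for the FAS2552 controllers. A sketch of that rule, derived from the tables rather than stated as a formula in the document:

```python
PORTS_PER_HYPERV_NODE = 4  # 1GbE ports cabled per Cisco C240 Hyper-V node
NETAPP_PORTS = 6           # 1GbE ports for the FAS2552 dual controllers

def total_1gbe_ports(nodes: int) -> int:
    """Total switch ports a pod consumes at the given Hyper-V node count."""
    return nodes * PORTS_PER_HYPERV_NODE + NETAPP_PORTS

for n in (2, 4, 5):
    print(n, total_1gbe_ports(n))
```

This makes it straightforward to check whether an existing pair of 48-port ToR switches has headroom before adding nodes.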

Scenario: 2 x Nodes

Hardware Components / Qty:
- Cisco C240 nodes: 2
- RU (Cisco server nodes): 4
- 1GbE ports (Hyper-V): 8
- 1GbE ports (NetApp): 6
- Total 1GbE ports: 14
- FAS2552 Appliances (Storage): 1
- ToR 1GbE 48-port switches: 2

Infrastructure Components / Qty:
- SCVMM servers: 1
- Hyper-V hosts: 2
- XenDesktop Sites: 1
- HSD users: 300
- HSD Windows Server VMs: 10
- Storage Management VMs 14: 1

Table 4. Hardware Component Breakdown - 2 x Nodes

Citrix/Microsoft License Components / Qty:
- Citrix XenDesktop Enterprise User/Device: 300
- MS Remote Desktop Services CALs: 300
- MS Core Infrastructure Suite Standard: 11
- MS SQL Server 2012 Standard Server: 1

Table 5. Component Breakdown - 2 x Nodes

Figure 9. Rack Layout 2 x Nodes

14 The Storage Management Server can also be deployed on existing Windows Server VMs to minimise VM resources used and Windows Server license consumption.

Scenario: 4 x Nodes

Hardware Components / Qty:
- Cisco C240 nodes: 4
- RU (Cisco server nodes): 8
- 1GbE ports (Hyper-V): 16
- 1GbE ports (NetApp): 6
- Total 1GbE ports: 22
- FAS2552 Hybrid Storage Arrays: 1
- ToR 1GbE 48-port switches: 2

Infrastructure Components / Qty:
- SCVMM servers: 1
- Hyper-V hosts: 4
- XenDesktop Sites: 1
- HSD users: 660
- HSD Windows Server VMs: 22
- Storage Management VMs 15: 1

Table 6. Hardware Component Breakdown - 4 x Nodes

Citrix/Microsoft License Components / Qty:
- Citrix XenDesktop Enterprise User/Device: 660
- MS Remote Desktop Services CALs: 660
- MS Core Infrastructure Suite Standard: 17
- MS SQL Server 2012 Standard Server: 1

Table 7. Component Breakdown - 4 x Nodes

Figure 10. Rack Layout 4 x Nodes

15 The Storage Management Server can also be deployed on existing Windows Server VMs to minimise VM resources used and Windows Server license consumption.

Scenario: 5 x Nodes

Hardware Components (Qty):
- # of Cisco C240 nodes: 5
- # of RU (Cisco server nodes): 10
- # of 1GbE Ports (Hyper-V): 20
- # of 1GbE Ports (NetApp): 6
- Total # of 1GbE Ports: 26
- # of FAS2552 Hybrid Storage Arrays: 1
- # of ToR 1GbE 48-port Switches: 2

Infrastructure Components (Qty):
- # of SCVMM servers: 1
- # of Hyper-V hosts: 5
- # of XenDesktop Sites: 1
- # of HSD users: 700
- # of HSD Windows Server VMs: 26
- Storage Management VM: 1

Table 8. Hardware Component Breakdown - 5 x Nodes

Citrix/Microsoft License Components (Qty):
- # of Citrix XenDesktop Enterprise User/Device: 700
- # of MS Remote Desktop Services CALs: 700
- # of MS Core Infrastructure Suite Standard: 18
- # of MS SQL Server 2012 Standard Server: 1

Table 9. Component Breakdown - 5 x Nodes

Figure 11. Rack Layout 5 x Nodes

Note: The Storage Management Server can also be deployed on existing Windows Server VMs to minimise VM resources used and Windows Server license consumption.

Scale Out Guidance for HVD

This section outlines the sizing metrics applicable to the NetApp FAS2552 Hybrid Storage Array, Cisco C240 server nodes, network switch ports, Hyper-V hosts, infrastructure server VMs and the required Citrix and Microsoft licenses needed to stand up the HVD solution, based on Windows 7 (Windows 8.1 not shown) and the suggested scale-out increment. The solution can be scaled out incrementally by adding single server nodes; however, the scenarios in this section depict the addition of two server nodes at a time for demonstration purposes.

Notes on the Microsoft licensing used in the samples below:

- # of MS Core Infrastructure Suite (CIS) Standard: MS CIS includes System Center 2012 R2 Standard and licenses for 2 x Windows Server 2012 Standard VMs (or Operating System Environments).
- # of MS SQL Server 2012 Standard Server: Optional; an existing SQL environment can also be leveraged to provide database capability and conserve resources. Assumes SQL Server is licensed as a 2 vCPU (v-cores) virtual machine with MS Software Assurance. A SQL Server license requires a minimum of 4 core licenses. An active-passive SQL Server deployment means no additional licenses are required for the secondary passive SQL Server.

In the context of this document, the full or maximum load is described while running the HVD desktop types under test load at their maximum densities, that is:

- ~700 Windows 7 virtual desktops, or
- ~500 Windows 8.1 virtual desktops

At this point the NetApp FAS2552 Hybrid Storage Array is at full load during the logon storm test of each desktop type.

Note: The logon storm is one of the most aggressive components of the testing in terms of resource consumption.

Note: Each customer will have different Citrix and Microsoft license agreements, and these should be factored into the final configuration. Actual customer licensing requirements may differ based on their situation, agreements or other factors.

Scenario: 2 x Nodes

Hardware Components (Qty):
- # of Cisco C240 nodes: 2
- # of RU (Cisco server nodes): 4
- # of 1GbE Ports (Hyper-V): 8
- # of 1GbE Ports (NetApp): 6
- Total # of 1GbE Ports: 14
- # of FAS2552 Hybrid Storage Arrays: 1
- # of ToR 1GbE 48-port Switches: 2

Infrastructure Components (Qty):
- # of SCVMM servers: 1
- # of Hyper-V hosts: 2
- # of XenDesktop Sites: 1
- # of Windows 7 HVD users: 200
- # of VDIs: 200
- Storage Management VM: 1

Table 10. Hardware Component Breakdown - 2 x Nodes

Citrix/Microsoft License Components (Qty):
- # of Citrix XenDesktop Enterprise User/Device: 200
- # of MS Virtual Desktop Access: 200
- # of MS System Center 2012 R2 CMS Client ML: 200
- # of MS Core Infrastructure Suite Standard: 6
- # of MS SQL Server 2012 Standard Server: 1

Table 11. Component Breakdown - 2 x Nodes

Figure 12. Rack Layout 2 x Nodes

Note: The Storage Management Server can also be deployed on existing Windows Server VMs to minimise VM resources used and Windows Server license consumption.

Scenario: 4 x Nodes

Hardware Components (Qty):
- # of Cisco C240 nodes: 4
- # of RU (Cisco server nodes): 8
- # of 1GbE Ports (Hyper-V): 16
- # of 1GbE Ports (NetApp): 6
- Total # of 1GbE Ports: 22
- # of FAS2552 Hybrid Storage Arrays: 1
- # of ToR 1GbE 48-port Switches: 2

Infrastructure Components (Qty):
- # of SCVMM servers: 1
- # of Hyper-V hosts: 4
- # of XenDesktop Sites: 1
- # of Windows 7 HVD users: 470
- # of VDIs: 470
- Storage Management VM: 1

Table 12. Hardware Component Breakdown - 4 x Nodes

Citrix/Microsoft License Components (Qty):
- # of Citrix XenDesktop Enterprise User/Device: 470
- # of MS Virtual Desktop Access: 470
- # of MS System Center 2012 R2 CMS Client ML: 470
- # of MS Core Infrastructure Suite Standard: 6
- # of MS SQL Server 2012 Standard Server: 1

Table 13. Component Breakdown - 4 x Nodes

Figure 13. Rack Layout 4 x Nodes

Note: The Storage Management Server can also be deployed on existing Windows Server VMs to minimise VM resources used and Windows Server license consumption.

Scenario: 6 x Nodes

Hardware Components (Qty):
- # of Cisco C240 nodes: 6
- # of RU (Cisco server nodes): 12
- # of 1GbE Ports (Hyper-V): 24
- # of 1GbE Ports (NetApp): 6
- Total # of 1GbE Ports: 30
- # of FAS2552 Hybrid Storage Arrays: 1
- # of ToR 1GbE 48-port Switches: 2

Infrastructure Components (Qty):
- # of SCVMM servers: 1
- # of Hyper-V hosts: 6
- # of XenDesktop Sites: 1
- # of Windows 7 HVD users: 700
- # of VDIs: 700
- Storage Management VM: 1

Table 14. Hardware Component Breakdown - 6 x Nodes

Citrix/Microsoft License Components (Qty):
- # of Citrix XenDesktop Enterprise User/Device: 700
- # of MS Virtual Desktop Access: 700
- # of MS System Center 2012 R2 CMS Client ML: 700
- # of MS Core Infrastructure Suite Standard: 6
- # of MS SQL Server 2012 Standard Server: 1

Table 15. Component Breakdown - 6 x Nodes

Figure 14. Rack Layout 6 x Nodes

Note: The Storage Management Server can also be deployed on existing Windows Server VMs to minimise VM resources used and Windows Server license consumption.

SECTION 2: DESIGN

User Layer Design

User Topology

This design is focused on the delivery of virtual desktops using Citrix XenDesktop as discussed in the section Citrix Virtual Desktop Types. There are a number of classifications that can be used to define a user's role within an organisation and determine the virtual desktop type best suited to the customer's environment and circumstances. Note: a desktop transformation assessment to determine the best fit of a user role to a desktop type is out of scope for this document.

Figure 15. User Layer Example

The table below provides example user type classifications and the alignment of FlexCast models on which this Citrix Validated Solution is focused:

- Kiosk Worker: public non-trusted user; LAN / WAN; Hosted Shared
- Task Workers: e.g. call centre; LAN; Hosted Shared
- Knowledge Workers: e.g. finance department; Remote / LAN / WAN; Hosted Shared or Hosted Virtual (Pooled)
- Developer/Power User: e.g. engineering; all locations; Hosted Virtual (Persistent)

Table 16: Example User Role Classifications

Endpoints

A current and supported version of Citrix Receiver must be deployed to ensure all Citrix XenDesktop features and components of this Citrix Validated Solution are at a supported level; refer to the Citrix Receiver Downloads page for the latest version.

Access Layer Design

The Access Layer defines how a user group will connect to their assigned virtual desktop. User location, connectivity and security requirements play a critical role in defining how users authenticate. Citrix StoreFront provides a unified application and desktop aggregation point; users access their desktop through a standard web browser using Citrix Receiver.

Figure 16. Access Layer Design Components

StoreFront Configuration

The key design decisions for the Access Layer are as follows:

- Version, Edition: StoreFront 2.5
- Authentication Point: Active Directory
- Security: A server certificate will be installed to secure authentication traffic. HTTPS will be required for all web sites, ensuring that users' credentials are encrypted as they traverse the network.

Table 17: Citrix StoreFront Configuration

StoreFront Configuration. A single store will be created to provide the required access and enumeration of the HSD or HVD desktops. The StoreFront servers will be added into a single server group, providing additional capacity and increasing availability. A server group provides unified configuration and synchronisation of user settings.

Desktop Layer Design

The desktop layer focuses on the design considerations for the user's desktop, which must provide the right set of applications, capabilities and resources based on the user's needs. Each of the virtual desktops within the Citrix Validated Solution represents a true-to-production configuration consisting of a core set of applications pre-installed as part of the virtual desktop master image. Each of the virtual desktop workloads (Windows 7, Windows 8.1, Windows Server 2008 R2 RDS or Windows Server 2012 R2 RDS) will be deployed using Citrix Machine Creation Services.

Figure 17. Desktop Layer Design Components

User Personalisation

Providing the right level of personalisation requires an understanding of the needs of the user group. Personalisation decisions must be weighed against user location, data centre connectivity and security requirements. Utilising technologies such as profiles and policies, a user group can receive a desktop where user-level personalisation changes are persisted between logins of the pooled desktop types described within this document.

Citrix Profile Management will be leveraged and enabled through a Windows service that provides a mechanism for capturing and managing user personalisation settings within the virtual desktop environment. Citrix Profile Management is installed by default during the installation of the Virtual Delivery Agent.

The key design decisions for Citrix Profile Management are as follows:

- Version, Edition: Citrix Profile Management 5.1
- Storage Allocation: 1 GB per user, for profile-related data only. Refer to the storage section for further details.
- Profile Storage Location:
  Hosted Shared Desktop user profile data: \\svm2\profiledata\hsd-upm
  Hosted Virtual Desktop user profile data: \\svm2\profiledata\hvd-upm
  Refer to the Appendix for further information: DECISION POINT
- Folder Redirection: Applied using Group Policy (minimum requirement): Application Data
- Redirected Folder Location:
  Hosted Shared Desktop user data: \\svm2\profiledata\hsd-userdata
  Hosted Virtual Desktop user data: \\svm2\profiledata\hvd-userdata
  Refer to the storage section for further details, and to the Appendix for further information: DECISION POINT

Table 18: Citrix Profile Management Key Decisions

Citrix Profile Management, together with standard Microsoft Windows Folder Redirection leveraging Active Directory GPOs, will be deployed to support the user personalisation requirements. Storage is presented by the NetApp FAS2552 Hybrid Storage Array as an SMB file share that provides the repository for user profile/personalisation data. Please refer to the Storage section for full details.
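Given the 1 GB-per-user profile allocation above, the profile share can be sized with simple arithmetic. The sketch below is illustrative only: the 20% free-space headroom is an assumption, not a Citrix or NetApp recommendation, and folder-redirection data is sized separately.

```python
# Rough sizing of the profile share (\\svm2\profiledata\...-upm), assuming
# the 1 GB-per-user allocation stated in Table 18. The 20% headroom for
# free space is an illustrative assumption.
GB_PER_USER_PROFILE = 1

def profile_share_size_gb(users: int, headroom: float = 0.20) -> int:
    """Return the suggested share size in whole GB for a given user count."""
    return round(users * GB_PER_USER_PROFILE * (1 + headroom))

print(profile_share_size_gb(700))  # full 700-user scale-out -> 840 GB
print(profile_share_size_gb(300))  # 2-node HSD scenario -> 360 GB
```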

Applications

The Citrix Validated Solution was tested utilising application sets representative of enterprise-level Standard Operating Environment (SOE) applications. These applications are pre-installed or embedded as part of the master image. Note that a number of prerequisite applications were required to drive the Login VSI scalability testing. The following table represents the application set that formed the desktop workload profile:

Hosted Shared Desktop Application Set

HSD Operating System: Microsoft Windows Server 2008 R2 Standard Edition with SP1, or Microsoft Windows Server 2012 R2 Standard Edition

Citrix Applications:
- Hyper-V Integration Services
- Citrix Virtual Delivery Agent
- Citrix Profile Management v5.1
- Citrix ShareFile Desktop Widget
- Citrix Receiver

Productivity Applications:
- Microsoft Excel Professional 2010 x86
- Microsoft Outlook Professional 2010 x86
- Microsoft PowerPoint Professional 2010 x86
- Microsoft Word Professional 2010 x86

Baseline Applications:
- Adobe Acrobat Reader
- Adobe Flash Player
- Adobe Shockwave Player
- Adobe AIR
- Apple QuickTime
- Doro PDF Printer
- Cisco WebEx Connect
- Google Chrome
- Java 7 Update 13
- Mozilla Firefox
- Microsoft .NET Framework 4 Client Profile
- Microsoft Internet Explorer 9
- Microsoft Silverlight
- Microsoft Windows Media Player v12.x
- Skype
- WinZip

Table 19: HSD Application Set

Note: A number of the applications listed were required and deployed by Login VSI for scalability testing.

Hosted Virtual Desktop Application Set

HVD Operating System: Microsoft Windows 7 Enterprise Service Pack 1 x64, or Microsoft Windows 8.1 Enterprise

Citrix Applications:
- Hyper-V Integration Services
- Citrix Virtual Delivery Agent
- Citrix Profile Management v5.1
- Citrix ShareFile Desktop Widget
- Citrix Receiver

Productivity Applications:
- Microsoft Excel Professional 2010 x86
- Microsoft Outlook Professional 2010 x86
- Microsoft PowerPoint Professional 2010 x86
- Microsoft Word Professional 2010 x86

Baseline Applications:
- Adobe Acrobat Reader
- Adobe Flash Player
- Adobe Shockwave Player
- Adobe AIR
- Apple QuickTime
- Doro PDF Printer
- Cisco WebEx Connect
- Google Chrome
- Java 7 Update 13
- Mozilla Firefox
- Microsoft .NET Framework 4 Client Profile
- Microsoft Internet Explorer 9
- Microsoft Silverlight
- Microsoft Windows Firewall
- Microsoft Windows Media Player v12.x
- Skype
- WinZip

Table 20: HVD Application Set

Note: A number of the applications listed were required and deployed by Login VSI for scalability testing.

Master Image

The master image is defined by an operating system, an image size and a set of applications installed into the image. Configuration settings will be applied directly to the master image, and via Active Directory Group Policies where appropriate, ensuring consistent deployment and optimisation. Antivirus should be included, with the specific configurations documented in the referenced Citrix article.

Note: Expect at least a ~7% reduction in maximum host density numbers when including antivirus in the workload image; the minimum overhead incurred by Microsoft System Center Endpoint Protection antivirus during the testing phases consistently reduced density numbers by ~7%. Expect per-server maximum density numbers to reduce by 10% when running under full load and utilising 1GbE end to end.

Hosted Shared Desktop Workload

Figure 18. HSD Workload Configuration

Based on the system testing carried out, the following table describes the most optimal configuration for user/session density for HSD on Windows Server 2008 R2 or Windows Server 2012 R2 RDS workloads:

Server Node | # of VMs per Node | RAM | vCPU | User Sessions per VM | Total # of Users per Node
Shared Infrastructure & Desktop Node | 5 | 18 GB | 8 | ~ |
Desktop Node | 6 | 18 GB | 8 | ~ |

Table 21: HSD Virtual Machine Specification and Sizing Estimates

Virtual Machine Specifications:
- Storage: System drive (difference disk) C:\ = 100 GB
- Pagefile: Fixed 18 GB (1 x assigned memory)
- Network Interface: Single synthetic NIC for production traffic
- Memory: 18 GB; Dynamic Memory not used
- vCPU: 8
- Operating System: Microsoft Windows Server 2008 R2 Standard Edition with Service Pack 1, or Microsoft Windows Server 2012 R2 Standard Edition

Table 22: HSD Windows Server 2008 R2 & 2012 R2 RDS Virtual Machine Specification

The table below describes the per-user/desktop IO profile of the workloads as measured from the actual virtual machine:

- Microsoft Windows Server 2008 R2: logon 30 reads / 10 writes; logoff 10 reads / 30 writes; steady state 1 read / 2 writes
- Microsoft Windows Server 2012 R2: logon 50 reads / 25 writes; logoff 10 reads / 30 writes; steady state 1 read / 2 writes

Table 23: HSD Windows Server 2008 R2 & 2012 R2 RDS VM I/O Profile

Note: The IO profile is described as seen from each virtual machine. These are instantaneous values, highly variable, and do not necessarily reflect the true IO of the system.

The table below describes the IO profile of the workloads as measured from the actual host machine (assuming 200 HSDs running on the host):

- Microsoft Windows Server 2008 R2: logon 50 reads / 700 writes; steady state 50 reads / 500 writes
- Microsoft Windows Server 2012 R2: logon 100 reads / 700 writes; steady state 50 reads / 600 writes

Table 24: HSD Windows Server 2008 R2 & 2012 R2 RDS Host I/O Profile
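The per-host figures in Table 24 (measured at 200 HSD sessions per host) can be extrapolated to estimate the logon-storm load the storage array must absorb at full scale. The sketch below assumes linear scaling with user count, which is a simplification: array-side caching and write coalescing will change the observed mix.

```python
# Back-of-envelope aggregation of the Table 24 host IO figures, assuming
# the per-host load (measured at 200 HSD sessions) scales linearly.
HOST_SESSIONS = 200
LOGON_IO_2008R2 = (50, 700)   # reads/s, writes/s per host at logon
LOGON_IO_2012R2 = (100, 700)  # reads/s, writes/s per host at logon

def logon_storm_iops(total_users: int, per_host=LOGON_IO_2008R2):
    """Estimate aggregate (reads/s, writes/s) during a logon storm."""
    host_equivalents = total_users / HOST_SESSIONS
    reads, writes = per_host
    return round(host_equivalents * reads), round(host_equivalents * writes)

# 700 users ~= 3.5 host-equivalents -> roughly 175 reads/s, 2450 writes/s
print(logon_storm_iops(700))
```

This illustrates why the logon storm, rather than steady state, defines full load for the FAS2552 in this solution.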

Hosted Virtual Desktop Workload (Pooled)

Figure 19. Hosted Virtual Desktop Workload Configuration (Pooled)

Based on the system testing carried out, the following table describes the most optimal configuration for the Windows 7 workload for user/VM density:

Server Node | # of VMs per Node | RAM | vCPU | Total # of Users per Node
Shared Infrastructure & Desktop Node | | GB | |
Desktop Node | | GB | |

Table 25: Windows 7 HVD Virtual Machine Specification and Sizing Estimates

Based on the system testing carried out, the following table describes the most optimal configuration for the Windows 8.1 workload for user/VM density:

Server Node | # of VMs per Node | RAM | vCPU | Total # of Users per Node
Shared Infrastructure & Desktop Node | | GB | 2 | 90
Desktop Node | | GB | |

Table 26: Windows 8.1 HVD Virtual Machine Specification and Sizing Estimates

Virtual Machine Specifications:
- Storage: System drive (difference disk) C:\ = 100 GB
- Pagefile: 4 GB (~1.5 x assigned memory)
- Network Interface: Single synthetic NIC for production traffic
- Memory: 2.5 GB; Dynamic Memory enabled. Please refer to the VMM section for further details.
- vCPU: 2
- Operating System: Microsoft Windows 7 Enterprise Service Pack 1 x64

Table 27: HVD Windows 7 and Windows 8.1 Virtual Machine Specification

The table below describes the per-user/desktop IO profile of the workload based on the actual virtual machine:

- Microsoft Windows 7: logon 12 reads / 16 writes; logoff 65 reads / 40 writes; steady state 2 reads / 3 writes
- Microsoft Windows 8.1: logon 9 reads / 20 writes; logoff 80 reads / 25 writes; steady state 2 reads / 3 writes

Table 28: HVD Windows 7 & 8.1 VM I/O Profile

Note: The IO profile is described as seen from each virtual machine. These are instantaneous values, highly variable, and do not necessarily reflect the true IO of the system.

The table below describes the IO profile of the workloads as measured from the actual host machine:

- Microsoft Windows 7 (assume 150 HVDs running on the host): boot 8,000 reads / 700 writes; logon 3,500 reads / 1,200 writes; steady state 250 reads / 550 writes; logoff 8,000 reads / 900 writes (includes a reboot cycle)
- Microsoft Windows 8.1 (assume 130 HVDs running on the host): boot 8,500 reads / 1,200 writes; logon 2,800 reads / 900 writes; steady state 250 reads / 550 writes; logoff 10,000 reads / 500 writes (includes a reboot cycle)

Table 29: HVD Windows 7 & 8.1 Host I/O Profile
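The per-VM steady-state profile in Table 28 can be aggregated to sanity-check the measured host figures in Table 29. Because the per-VM numbers are instantaneous, the aggregate below is an upper-bound sketch, not a replacement for the measured values:

```python
# Aggregate steady-state IO estimate from the per-VM profile in Table 28
# (2 reads / 3 writes per desktop for both Windows 7 and Windows 8.1).
# Instantaneous per-VM values, so this is an upper-bound sketch only.
PER_VM_STEADY = (2, 3)  # reads/s, writes/s per desktop

def steady_state_iops(desktops: int):
    """Return estimated aggregate (reads/s, writes/s) for N desktops."""
    reads, writes = PER_VM_STEADY
    return desktops * reads, desktops * writes

print(steady_state_iops(700))  # full Windows 7 scale-out -> (1400, 2100)
```

Comparing this with the measured 250 reads / 550 writes per 150-desktop host in Table 29 shows how much of the raw per-VM IO is absorbed by OS and array caching before it reaches the spindles.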

The virtual workloads are deployed using Citrix Machine Creation Services (MCS). MCS utilises the hypervisor APIs to deploy, stop, start and delete virtual machines. A master image is first built containing the virtual machine resource requirements, such as vCPU and memory, together with the applications and agents required for the virtual machine deployment. Finally, a snapshot is created within the hypervisor that MCS will use as the Catalog's base image.

A XenDesktop Catalog is deployed based on this master image snapshot. For each virtual machine created within the Catalog, MCS creates the following virtual disks:

- Identity disk. Used to provide each VM with a unique identity.
- Difference disk. Used by each VM to store writes that would typically be made to the system disk.

Pooled stateless (non-persistent) desktops using MCS are unique in that the difference disk is deleted and recreated at each boot cycle, ensuring the VM is set back to a clean state after each reboot and effectively deleting any newly written or modified data. In this scenario certain processes are no longer efficient, and optimisation of the image is required; please refer to the section Workload Optimisations for further details.

Hosted Virtual Desktop Workload (Persistent)

Figure 20. Hosted Virtual Desktop Workload Configuration (Persistent)

Persistent stateful desktops using MCS by default retain their original difference disk and the link to the original master image (and snapshot) after each reboot. Once a persistent desktop is deployed it must be managed by the customer's existing electronic software distribution toolsets, such as SCCM, Altiris etc.; from that point onwards the desktop is managed as a standalone entity, unlike a pooled desktop.
In the context of the scalability numbers and I/O profiles related to this CVS, a persistent desktop is considered the same as a pooled desktop type, with the exception that its storage footprint will grow over time. For further details on persistent desktop storage allocation please refer to the Storage Design section. The virtual machine specifications below are initial guidelines only; customers requiring persistent desktop types are likely to need significant tailoring of the virtual desktop machine specifications.
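The pooled versus persistent difference-disk behaviour described above can be modelled in a few lines. This is an illustrative model of the semantics, not Citrix code; the class and attribute names are invented for the sketch.

```python
# Illustrative model (not Citrix code) of MCS disk behaviour: every VM gets
# an identity disk plus a difference disk; a pooled VM discards difference-
# disk writes at each reboot, a persistent VM retains them (and so its
# storage footprint grows over time).
class McsVirtualMachine:
    def __init__(self, name: str, persistent: bool):
        self.name = name
        self.persistent = persistent
        self.identity_disk = f"{name}-identity.vhdx"  # unique identity per VM
        self.difference_disk_mb = 0  # data written since the last clean state

    def write(self, megabytes: int) -> None:
        self.difference_disk_mb += megabytes

    def reboot(self) -> None:
        # Pooled (stateless): the difference disk is deleted and recreated,
        # returning the VM to a clean state. Persistent: writes are retained.
        if not self.persistent:
            self.difference_disk_mb = 0

pooled = McsVirtualMachine("HVD-001", persistent=False)
stateful = McsVirtualMachine("HVD-002", persistent=True)
for vm in (pooled, stateful):
    vm.write(512)
    vm.reboot()
print(pooled.difference_disk_mb, stateful.difference_disk_mb)  # 0 512
```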

Virtual Machine Specifications:
- Storage: System drive (difference disk) C:\ = 100 GB
- Pagefile: 4 GB (~1.5 x assigned memory)
- Network Interface: Single synthetic NIC for production traffic
- Memory: 2.5 GB; Dynamic Memory enabled. Please refer to the VMM section for further details.
- vCPU: 2
- Operating System: Microsoft Windows 7 Enterprise Service Pack 1 x64, or Microsoft Windows 8.1 Enterprise x64

Table 30: HVD Persistent Virtual Machine Specification

Workload Optimisations

Optimisations and configurations can be applied at several levels:

Workload Configuration: master image. Changes are made directly to the master image. These changes are considered inappropriate to apply via GPO, or are settings required before MCS generalises the image. The master image is then shut down and a snapshot taken by the hypervisor. MCS is then used to deploy the master image (from the snapshot), either to create a new XenDesktop Catalog or to update an existing one.

Workload Configuration: GPO. These changes are applied via Active Directory GPO and are considered baseline configurations required in almost all instances. Typical use cases for this GPO are event log redirection, Citrix Profile Management configuration and target device optimisations. This GPO may also have Loopback processing enabled, allowing user-based settings to be applied at the virtual desktop Organisational Unit level.

User Optimisations: GPO. This Active Directory GPO contains optimisations for the user within the virtual desktop environment. User optimisations typically cannot be deployed as part of the master image and are considered independent. Typical use cases for this GPO are folder redirection and user-specific optimisations.

For details pertaining to the above optimisations, please refer to the Citrix Windows 7 Optimization Guide for further guidance.

Control Layer Design

The control layer provides the design decisions for the underlying infrastructure supporting the virtual desktop layer. The Control Layer design is unique per data centre and subdivided into the following components:

- Infrastructure
- Desktop Delivery Controllers (XenDesktop)
- Image Controllers (Machine Creation Services)
- Access Controllers (StoreFront)
- Hypervisor
- Storage
- Network

Figure 21. Control Layer Logical View

Infrastructure

The infrastructure for this Citrix Validated Solution provides a set of common components, namely a database, a license server, Active Directory and network components. File Services are covered in a separate section.

Database

Citrix XenDesktop and Virtual Machine Manager require databases to store configuration metadata and statistical information. A highly available database platform utilising Microsoft SQL Server is required, designed to provide adequate resources and availability to support the environment.

- SQL Version: Microsoft SQL Server 2012 Standard Edition SP1 (used at the time of testing). Please refer to the Citrix database support chart for the list of supported database platforms.
- Redundancy: XenDesktop: mirrored; refer to the Citrix database fault tolerance documentation. Microsoft VMM: refer to the Microsoft documentation for further details.
- Number of Servers: 2
- Server O/S: Microsoft Windows Server 2012 R2 Standard Edition
- CPU Allocation: 2 vCPU (example)
- RAM Allocation: 8 GB (example)
- Storage Allocation: C:\ 100 GB; D:\ 150 GB (databases) (example)

Table 31: Database Summary

This document provides sample sizing guidelines and the licensing requirements for the databases used in this Citrix Validated Solution; it does not attempt to provide design guidance for Microsoft SQL Server itself. The design and implementation of a highly available Microsoft SQL Server platform is required, but is considered out of scope for this design document.

Note: Assumes SQL Server is licensed as a 2 vCPU (v-cores) virtual machine with MS Software Assurance. A SQL Server license requires a minimum of 4 core licenses. An active-passive SQL Server deployment means no additional licenses are required for the secondary passive SQL Server.

Licensing

The licensing components (Microsoft and Citrix) grant each user access to the environment, as long as enough licenses are available. In addition, the type of license can grant or deny different levels of functionality. The key design decisions for the license server are as follows:

Category | Citrix | Microsoft
License Server Version | DECISION POINT |
Redundancy | Built-in grace period and hypervisor | DECISION POINT
Number of Servers | 1 | DECISION POINT
Server Name(s) | DECISION POINT | DECISION POINT
Server O/S | Microsoft Windows Server 2012 R2 Standard Edition | DECISION POINT
CPU Allocation | 2 | DECISION POINT
RAM Allocation | 4 GB | DECISION POINT
Storage Allocation | C:\ 100 GB | DECISION POINT
License Type | DECISION POINT | DECISION POINT

Table 32: Licensing Summary

Redundancy. Redundancy is built into the Citrix License Service via the built-in 30-day grace period. Service redundancy can be further facilitated by the underlying hypervisor; therefore a single server is recommended.

Active Directory Integration. The license server computer object will be logically located in a dedicated Organisational Unit (OU) with specific Group Policy Objects applied as appropriate to the role; please refer to the Active Directory section for more details.

Active Directory

This Citrix Validated Solution requires Microsoft Active Directory Domain Services, and it is assumed that such an environment already exists within the customer's environment. The decisions discussed below describe the requirements of the existing Active Directory in the form of Organisational Units and Group Policy Objects. Supplementary requirements must also be met to ensure the authenticating Domain Controllers have sufficient capacity to handle the additional load placed on the system by adding further users, groups, machine objects and policy processing.

DECISION POINT: A CIFS server is necessary to provide SMB clients with access to the Storage Virtual Machine (SVM) hosted on the NetApp FAS2552 Hybrid Storage Array. The computer objects created during the setup procedure will reside in the File Servers OU.

Recommended Group Policy application:
- Each infrastructure server role will have a minimum security baseline (MSB) applied via GPO
- All RDS workloads will have an MSB applied via GPO
- Windows desktop workloads will have an MSB applied via GPO
- RDS workloads will have a machine GPO applied specific to their application delivery requirements; this GPO may have Loopback mode enabled to apply user-based settings at the RDS workload OU level
- Windows desktop workloads will have a machine GPO applied specific to their application delivery requirements; this GPO may have Loopback mode enabled to apply user-based settings at the machine workload OU level
- User-based policies may be applied at the user or machine level using Loopback mode
- Infrastructure servers such as Hyper-V hosts will be deployed in relevant OUs with MSBs applied appropriate to their role

Note: Provision of Minimum Security Baselines in the form of GPOs is the customer's responsibility. GPOs described in this document will in all cases be integrated into the customer's Active Directory environment.

Table 33: Active Directory Requirements

The recommended Group Policy and Organisational Unit strategy for this Citrix Validated Solution is based on deploying Group Policy Objects in a functional approach: settings are applied based on service, security or other functional role criteria. This ensures that security settings targeted at specific server roles, such as IIS, SQL etc., receive only their relevant configurations. It is anticipated that the final design will be customer dependent, based on factors such as role-based administration and other elements outside the scope of this document. Refer to the Appendix: DECISION POINT

Figure 22. Sample Active Directory OU Structure and GPO Linking

Delivery Controllers (XenDesktop)

Delivery Controllers, also known as XenDesktop Controllers (Image Controllers), are responsible for enumerating, allocating, assigning and maintaining virtualised desktops and applications. Delivery Controllers within a single data centre are grouped together into a XenDesktop Site, which functions as a single administrative entity. This Citrix Validated Solution specifically defines the Hosted Virtual Desktop and Hosted Shared Desktop FlexCast delivery models. From a XenDesktop perspective, each desktop type will belong to a Catalog configured specifically for that FlexCast delivery type and associated with storage: managed SMB 3.0 file shares presented to the Hyper-V Failover Cluster from the NetApp storage array.

The illustration below identifies the components of the XenDesktop Site, describing three XenDesktop Catalogs: Hosted Shared, Hosted Virtual (Pooled) and Hosted Virtual (Persistent).

Figure 23. XenDesktop Site Component and Layer View

XenDesktop Site

Based on the validation testing and resiliency requirements of this Citrix Validated Solution, the following table describes the XenDesktop Site design parameters.

- Version, Edition: Citrix XenDesktop 7.5
- Sites per Data Centre: The Citrix Validated Solution is designed as a single Site for a single data centre
- Site Name(s): DECISION POINT
- Server O/S: Microsoft Windows Server 2012 R2 Standard Edition
- Controllers per Site: 2 for redundancy (single-Site deployment); each Delivery Controller also functions as an MCS Image Controller
- XenDesktop Administrators: DECISION POINT
- Site Database, Configuration Database, Monitoring Database: refer to the section Databases
- Catalogs: A Catalog will be created for each desktop type and aligned with the NetApp FAS2552 Hybrid Storage Array FlexVols and network VLANs presented by the Failover Cluster.
  For 700 HSD shared desktops (example, minimum requirement), 2 Catalogs are required:
  - Catalog 1: 350 desktops and a single FAS2552 FlexVol
  - Catalog 2: 350 desktops and a single FAS2552 FlexVol
  - Associated networks
  For 700 HVD Windows 7 desktops (example, minimum requirement), 2 Catalogs are required:
  - Catalog 1: 350 desktops and a single FAS2552 FlexVol
  - Catalog 2: 350 desktops and a single FAS2552 FlexVol
  - Associated networks
- Delivery Groups: Example (minimum requirement): a single Delivery Group will be created for each virtual desktop type. The Delivery Group can host desktops from multiple Catalogs of the same type.
- Citrix Policies: refer to the Appendix for further details

Table 34: XenDesktop Site Summary

Category / Design Decision:
- Hypervisor integration: System Center Virtual Machine Manager 2012 R2. VMM console installed on each Delivery Controller server
- Host Connections: example (minimum requirement): a single Host Connection will be created to the Hyper-V failover cluster. Type: Microsoft System Center Virtual Machine Manager. Name: <Based on Cluster name, storage and associated networks>. Address: <FQDN of the VMM server>

Table 35: XenDesktop Site Summary

For XenDesktop Catalogs hosting server operating systems (also known as XenApp), users are load balanced based on resource availability at user logon. Load management includes Load Throttling, which ensures that a new server brought into service does not initially receive a disproportionate number of connections. This Citrix Validated Solution recommends implementing a Custom load evaluator, applied to all servers, with the following minimum parameters:
- CPU Utilization: 85% Full, 10% No load
- Memory Usage: 80% Full, 10% No load
- Server User Load: 30 Full

Table 36: XenApp Load Evaluator Details

XenDesktop Site. The XenDesktop Site will consist of two virtualised Desktop Delivery Controllers. Each Delivery Controller virtual machine will always be separated onto one of the two shared infrastructure/virtual desktop hypervisor hosts using Failover Clustering Availability Sets. This will ensure resiliency of the environment. A host connection will be defined that establishes a connection to the VMM server and the failover cluster. A specified service account will be used for this purpose; refer to the Appendix for further details.

Catalogs. For each virtual desktop type that is being deployed, at least one Catalog will be created, defining the Catalog base image for the associated SMB 3.0 share. The Catalog(s) will then be aggregated into a single unified Delivery Group for presentation to users via StoreFront.
For maximum density of desktops and higher performance of the NetApp FAS2552 Hybrid Storage Array, at least 2 Catalogs should be created with VMs distributed between them; each Catalog will be associated with a separate FlexVol and associated SMB 3.0 share. The following example describes this requirement:

Example:
- # of Desktops required: 700
- # of Master Images: 1
- # of Catalogs required: 2
- # of desktops per Catalog: 350
- # of NetApp FAS2552 FlexVols required: 1 per Catalog
- # of SMB 3.0 shares required: 1 per FlexVol
- # of Delivery Groups: at least 1

Please refer to the Storage Design section for further details.
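The catalog-to-FlexVol arithmetic above can be sketched as a small helper (illustrative only; the function name and output shape are ours, not part of the CVS):

```powershell
# Illustrative sizing helper: split desktops evenly across catalogs,
# with 1 FlexVol and 1 SMB 3.0 share per catalog as per the design.
function Get-CatalogLayout {
    param([int]$TotalDesktops = 700, [int]$Catalogs = 2)
    [pscustomobject]@{
        DesktopsPerCatalog = [math]::Ceiling($TotalDesktops / $Catalogs)
        FlexVols           = $Catalogs   # 1 FlexVol per catalog
        SmbShares          = $Catalogs   # 1 SMB 3.0 share per FlexVol
    }
}

Get-CatalogLayout -TotalDesktops 700 -Catalogs 2   # 350 desktops per catalog
```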

Within the host connection, a storage connection and network-related resource will be specified for each Catalog, e.g.:
- Cluster Name
- Storage (SMB 3.0 share associated with the FlexVol)
- HVD-VLAN_1 and HVD-VLAN_2

Figure 24. XenDesktop Conceptual Catalog Host Connection

Desktop Presentation. From the corporate LAN/WAN, StoreFront will be utilised for the presentation of desktops to end users.

Desktop Director and EdgeSight. Citrix EdgeSight is now integrated into a single console within Desktop Director, with its feature set enabled based on Citrix Licensing. The monitoring database used by EdgeSight will be separated from the site and logging databases to allow appropriate management and scalability of the databases. Historical data retention is available for 90 days by default with Platinum licensing. Administrators can select specific views, delegating permissions concisely for helpdesk staff, allowing easy troubleshooting and faster resolution of problems. Citrix EdgeSight will provide the following key components:
- Performance Management. EdgeSight provides historical retention with reporting capabilities.
- Real Time. Director provides the real-time views for support staff to further investigate any reported problems.

Active Directory Integration. Each computer object will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to the role; please refer to the Active Directory section for more details.

Image Controllers (Machine Creation Services)

Image Controllers are responsible for providing the actual desktop image for pooled desktops. Pooled desktop images are created with the built-in Citrix Machine Creation Services (MCS) functionality on each Delivery Controller. MCS is a collection of services that work together to create virtual servers and desktops from a master image on demand, optimising storage utilisation and providing a pristine virtual machine to users of pooled or shared desktop types every time they log on. Machine Creation Services is fully integrated with and administered from Citrix Studio, not requiring additional servers. There are virtually no moving parts within MCS, as all operations are executed directly from the Citrix Delivery Controllers. Each pooled desktop has one difference disk and one identity disk. The difference disk is used to capture any changes made to the master image, while the identity disk stores machine identification information. The key design decisions for the Image (Desktop Delivery) Controllers are as follows:

Category / Description:
- Preferred Imaging Solution: Machine Creation Services
- MCS Storage Type: SMB 3.0 file share storage, provided by the NetApp FAS2552 Hybrid Storage Array
- Server Names: Desktop Delivery Controllers - DECISION POINT
- Server O/S: Microsoft Windows Server 2012 R2 Standard Edition
- CPU Allocation: 2 vCPU
- RAM Allocation: 8GB
- Storage Allocation: C:\ 100GB

Table 37: Image Controllers Key Decisions
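As a sketch, a catalog of this kind could be created with the XenDesktop 7.5 PowerShell SDK; the catalog name and setting values below are examples, not values mandated by this design:

```powershell
# Load the XenDesktop PowerShell SDK snap-ins on a Delivery Controller
Add-PSSnapin Citrix.*

# Example: a pooled (Random) MCS catalog for multi-session HSD workloads;
# user changes are discarded at logoff, as is typical for pooled images.
New-BrokerCatalog -Name "HSD-Catalog-1" `
                  -AllocationType Random `
                  -ProvisioningType MCS `
                  -SessionSupport MultiSession `
                  -PersistUserChanges Discard
```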

Access Controllers (StoreFront)

Access Controllers are responsible for user authentication and connectivity to the environment. They provide the framework allowing users to access the environment from any device and any location. All users, regardless of whether they are internal or external, will need to gain access to the list of their virtualised resources via StoreFront. The key design decisions for the StoreFront controllers are as follows:

Category / Design Decision:
- Server O/S: Microsoft Windows Server 2012 R2 Standard Edition
- Servers per Site: 2
- Server Name(s): DECISION POINT
- CPU Allocation: 2 vCPU
- RAM Allocation: 4 GB
- Storage Allocation: C:\ 100GB
- Access Method: Internal
- Load Balancing: DNS Round Robin (Recommendation: Citrix NetScaler)

Table 38: StoreFront Site Summary

Two virtualised StoreFront servers will be deployed. Each StoreFront virtual machine will always be separated onto one of the two shared infrastructure/virtual desktop hypervisor hosts using Failover Clustering Availability Sets. This will ensure resiliency of the environment. The Citrix StoreFront servers may be load balanced using DNS round robin. Optionally, Citrix StoreFront servers may be load balanced using Citrix NetScaler appliances configured in high availability (HA) mode. Citrix-specific service monitors can then be utilised to monitor the health of the StoreFront services, ensuring intelligent load balancing decisions are performed and increasing service availability.

Active Directory Integration. Each machine object will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to the role; please refer to the Active Directory section for more details.
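DNS round-robin load balancing for StoreFront amounts to registering the same host name against both server addresses; the zone name, record name and IP addresses below are placeholders:

```powershell
# Two A records with the same name: the DNS server rotates the answer
# order, distributing StoreFront connections across both servers.
Add-DnsServerResourceRecordA -ZoneName "customer.domain.com" -Name "storefront" -IPv4Address "10.0.1.21"
Add-DnsServerResourceRecordA -ZoneName "customer.domain.com" -Name "storefront" -IPv4Address "10.0.1.22"
```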

Hypervisor

Microsoft Hyper-V Server 2012 R2 will be deployed to each Cisco C240 M3 SFF server. All servers will be configured into a single Hyper-V failover cluster. Hyper-V will provide the hypervisor hosting platform for the virtualised desktop and infrastructure server instances. Microsoft System Center Virtual Machine Manager (VMM) will be leveraged to provide the virtual machine operations and management interface to Hyper-V. VMM will also provide the integration interface between Citrix XenDesktop and the underlying hypervisor within the XenDesktop host connection.

Hyper-V Overview

The illustration below depicts the physical components logically connected between a single Cisco C240 M3 SFF server, the hypervisor, storage and the associated switching infrastructure:
- Network. 4 x 1 Gbps on-board Ethernet adapters.
- Management. 1 x 1 Gbps on-board management adapter for CIMC (Cisco Integrated Management Controller).
- Network Teaming. 2 x network teams, each consisting of 2 x physical network adapters (pNICs):
  o Team A. Management and VM data traffic (pNIC1 + pNIC2).
  o Team B. Storage and Live Migration traffic (pNIC3 + pNIC4).

HSD Hyper-V Host

Figure 25. HSD Hyper-V Host Logical View

HVD Hyper-V Host

Figure 26. HVD Hyper-V Host Logical View

Hyper-V Hardware Details

Hardware Category / Decision: refer to the section: Cisco C240 M3 SFF Rack Mounted Server(s)

Table 39: Hyper-V Hardware Details

Hyper-V General Details

Failover Clustering. A single failover cluster will be deployed and shared for both the infrastructure VMs and the desktop VMs. Availability Sets will be configured to ensure redundant infrastructure VMs do not reside on the same hosts. Optionally, VMs may be configured with a preferred node. The failover cluster will utilise a Node and File Share Witness quorum configuration to maintain quorum in the event there is an even number of nodes in the cluster at any one time. The file share location used for the witness will be presented from the NetApp FAS2552 Hybrid Storage Array.
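A Node and File Share Witness quorum of this kind could be set from any cluster node as follows; the share path is a placeholder for the witness share presented by the FAS2552:

```powershell
Import-Module FailoverClusters

# Node majority plus a file share witness hosted on the NetApp array;
# keeps quorum when the cluster has an even number of nodes.
Set-ClusterQuorum -NodeAndFileShareMajority "\\svm1\quorum"
```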

Live Migration. The ability to live migrate virtual machines across clustered hosts will be enabled, allowing scheduled downtime for maintenance tasks to occur on individual hosts. The following are required parameters for Live Migration:
- Live Migrations will be limited to the default of 2 simultaneous migrations.
- The Live Migration IP subnet will be defined for migration traffic.
- The Kerberos authentication protocol will be enabled for migrations.
- Compression will be enabled.

Active Directory Integration. Each Hyper-V computer object will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to the role; please refer to the Active Directory section for more details.

Category / Decision:
- Hyper-V: 2 x 300 GB drives configured as a RAID 1 for the Hyper-V partition
- Version: Microsoft Hyper-V Server 2012 R2
- Failover Clustering: Enabled. Quorum configuration: Node and File Share Witness. Availability: high availability is required. Infrastructure VM redundancy: Availability Sets are required
- Host Configuration: for 700 HSD shared desktop users - 5 x clustered hosts; for 700 HVD Windows 7 desktops - 6 x clustered hosts; for 500 HVD Windows 8.1 desktops - 5 x clustered hosts
- Active Directory Integration: hosts joined to the Active Directory domain customer.domain.com
- Operating System Performance: power scheme - High Performance; CPU core parking - turned off for maximum performance
- Network Settings: Network Team A (VM Switch): VM networks (trunked VLANs), management network (native VLAN). Network Team B: storage network (native VLAN, Layer 2), Live Migration network (trunk, Layer 2)
- VM Switch: a single VM Switch will be created with the NIC team defined for VM traffic (Team A), as an External network with management enabled. Management VLAN ID: not required (native VLAN)
- NTP: all Hyper-V hosts will be members of an Active Directory domain and as such will inherit the proposed Active Directory time hierarchy
- RDS Printer Mapping: Disabled
- Live Migration: Enabled. Cluster authentication protocol: Kerberos

- Live Migration Network: dedicated Layer 2 VLAN
- System Center 2012 R2 Virtual Machine Manager (VMM): standalone server deployment with HA enabled at the hypervisor level
- Storage Settings: managed SMB 3.0 file shares presented from the NetApp FAS2552 Hybrid Storage Array via the integration between SCVMM and the NetApp SMI-S provider
- SMB 3.0 Client Specific Configurations: default client configuration
- Credential Security Support Provider (CredSSP): to allow remote PowerShell commands to be run on the Hyper-V hosts as part of Machine Creation Services, CredSSP must be configured.

For each XenDesktop Delivery Controller, enable CredSSP for each Hyper-V host explicitly. PowerShell example:

Enable-WSManCredSSP -Role Client -DelegateComputer host1.mydomain.net
Enable-WSManCredSSP -Role Client -DelegateComputer host2.mydomain.net
Enable-WSManCredSSP -Role Client -DelegateComputer hostN.mydomain.net

For each Hyper-V host, enable CredSSP for each Delivery Controller explicitly. PowerShell example:

Enable-WSManCredSSP -Role Client -DelegateComputer DDC1.mydomain.net
Enable-WSManCredSSP -Role Client -DelegateComputer DDC2.mydomain.net

Table 40: Hyper-V General Details
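The Live Migration parameters in the table above map onto the built-in Hyper-V cmdlets roughly as follows; the migration subnet is a placeholder for the dedicated Live Migration VLAN subnet:

```powershell
# Enable migrations and apply the design's required settings on each host
Enable-VMMigration
Set-VMHost -MaximumVirtualMachineMigrations 2 `
           -VirtualMachineMigrationAuthenticationType Kerberos `
           -VirtualMachineMigrationPerformanceOption Compression

# Restrict migration traffic to the dedicated Live Migration subnet
Add-VMMigrationNetwork "192.168.60.0/24"
```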

Hyper-V Network Details

Network. Hyper-V will be configured with two network teams, also known as Load Balancing and Fail Over (LBFO) teams, to aggregate the 4 physical network adapters presented to each host server. Each network team will have 2 physical network adapters, each connected to a separate physical network switch. Team B will be configured as an LACP channel to support aggregated throughput on the storage network. Each team will be configured as follows:

Team A. This team will be used for VM data traffic and hypervisor management.
o A Hyper-V virtual machine switch will be created to utilise this team (Microsoft Network Adapter Multiplexor Driver), configured as an External network with management enabled.
o The switch port configuration will be trunked, defining the VM networks, with the native VLAN set to the management VLAN and an appropriate IP address defined for the VLAN.
o This network team will be configured as Active/Active using Hyper-V Port as the load balancing algorithm.

Team B. This team will be used for SMB 3.0 storage traffic and Live Migration:
o A new virtual interface (vNIC) will be created on this team and a VLAN ID assigned that will be used for Live Migration traffic.
o The switch port configuration will be trunked to the Live Migration VLAN ID, with the native VLAN being used for storage traffic.
o This network team will be configured as Active/Active using Dynamic as the load balancing algorithm.
o A separate IP address is required for each of the storage network and the Live Migration network; these networks will not require a default gateway and will remain non-routable.
o This interface will have Jumbo Frames enabled, increasing the payload of Ethernet frames for both the Live Migration and storage networks.
Category / Decision:
- Physical Network Adapters (pNIC): Cisco UCS C240 M3 SFF server: 4 x 1 Gb adapters (pNIC1 + pNIC2 + pNIC3 + pNIC4); 1 x 1 Gb CIMC adapter (lights-out management)
- Teamed Adapters, Team A: pNIC1 + pNIC2. Description: team dedicated to management and the VM Switch (VM traffic). Team parameters: Switch Independent mode or LACP; Active/Active; load balancing mode Hyper-V Port. Switch requirements: switch port mode trunk; native VLAN defined for management = management VLAN; allowed VLANs = infrastructure, management, HVD and HSD VLANs
- Teamed Adapters, Team B: pNIC3 + pNIC4. Description: team used to support both storage and Live Migration. Team parameters: LACP mode only; Active/Active; load balancing mode Dynamic; Jumbo Frames enabled. Switch requirements: switch port mode trunk; native VLAN defined = storage VLAN; allowed VLANs = Live Migration and storage VLANs; Jumbo Frames enabled
- vNIC: Live Migration (vNIC logically created on Team B), with the Live Migration VLAN ID assigned

Table 41: Hyper-V Network Details

It is a recommendation that CoS and QoS are configured for the differing traffic types to ensure each network has sufficient bandwidth and the correct priorities under heavy load.
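As a sketch, the two teams, the Live Migration vNIC and the VM switch described above could be built with the in-box LBFO and Hyper-V cmdlets; adapter names, team names and the VLAN ID are examples:

```powershell
# Team A: management + VM traffic, switch independent, Hyper-V Port
New-NetLbfoTeam -Name "TeamA" -TeamMembers "pNIC1","pNIC2" `
                -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Team B: storage + Live Migration, LACP, Dynamic load balancing
New-NetLbfoTeam -Name "TeamB" -TeamMembers "pNIC3","pNIC4" `
                -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

# vNIC on Team B tagged with the Live Migration VLAN (ID is a placeholder)
Add-NetLbfoTeamNic -Team "TeamB" -VlanID 60

# External VM switch bound to Team A, with a management OS vNIC
New-VMSwitch -Name "VMSwitch" -NetAdapterName "TeamA" -AllowManagementOS $true
```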

System Center Virtual Machine Manager

VMM General Details

Category / Decision:
- Prerequisite Software: Microsoft .NET Framework 4.5; Windows Assessment and Deployment Kit (WADK) for Windows Server 2012 R2
- VMM Management Server: VM guest
- Server O/S: Microsoft Windows Server 2012 R2 Standard Edition
- Server Name(s): DECISION POINT
- CPU Allocation: 2 vCPU
- RAM Allocation: 8GB, dynamic memory not enabled
- Network: the server will be multi-homed with 2 network interfaces. NIC 1: infrastructure VLAN; NIC 2: storage VLAN
- Storage Allocation: C:\ 100GB, D:\ 150GB
- VMM Console: installed on the VMM Management server, the XenDesktop Controllers and other management server(s)
- VMM Library: initially hosted on the VMM Management server. Library share: refer to the Storage section for further details. Disk space requirements: ~150GB, dependent on storage requirements (e.g. ISO images, templates etc.). Share path: DECISION POINT
- VMM Database: refer to the Database section for further details. Disk space for the database: ~5 to ~150 GB, dependent on the usage profile
- Service Accounts: Run As account (also used for the XenDesktop Host Connection); SMI-S service account; SCVMM service account. Refer to the Appendix for further details
- SMI-S Provider: configured to utilise the NetApp SMI-S provider located on the management server. Path: managementserver.customer.domain
- VM Placement Path: managed file shares. Naming examples are shown dependent on the desktop types deployed; see the Storage section for further details:
  \\<NetAppStorageVirtualMachine>\PooledVM\
  \\<NetAppStorageVirtualMachine>\PersistentVM\
  \\<NetAppStorageVirtualMachine>\HostedSharedVM\
  \\<NetAppStorageVirtualMachine>\InfrastructureVM\

Table 42: SCVMM General Details

The VMM server will be multi-homed with two network interfaces defined as follows:
- Network 1: this network will provide access to the normal operational infrastructure VLANs for traffic such as authentication.
- Network 2: this network is defined at Layer 2 only for SMB storage, with no capability to route traffic. It allows the VMM server to communicate with the file shares associated with the NetApp FAS2552 Hybrid Storage Array for VM storage, and avoids SMB 3.0 Multichannel using any available networks over 1 GbE.

VMM Network Details

Category / Decision:
- Logical and VM Networks: the following VM and logical networks will be created: Infrastructure VLAN (ID), HVD VLAN(s) (ID), HSD VLAN (ID), Storage VLAN (ID)

Table 43: SCVMM Network Details

VMM Guest Virtual Machine Details

Category / Decision:
- Integration Services: version
- Dynamic RAM: configured on each VM guest type where indicated throughout the design. Minimum memory: the value should be indicative of the expected working load of the guest, to avoid excessive paging while expanding. Maximum memory: the maximum as defined for the guest workload
- Power Actions: action to take when the virtualisation server stops: dependent on customer requirements for each VM. Example: Turn off virtual machine (avoids the creation of .bin files reserved to the size of the RAM assigned to each virtual machine, saving disk space); restart the VM if HA is required

Table 44: Virtual Machine Guest Details

Virtual Machine Manager. System Center 2012 R2 Virtual Machine Manager (VMM) will be deployed as the management solution for the virtualised environment. VMM will provide the management interface to the virtualised Hyper-V environment for VM templates, logical networks, clustered or standalone Hyper-V hosts and other related services.

VMM Database. Refer to the section Database Platform.

VMM Library Server. The VMM Library server will initially be configured on the VMM server; once the environment is built and tested, additional Library shares may be used to meet any expanding storage requirements of the virtual environment, e.g. additional virtual machine templates or an ISO repository.

VMM Networking. A VM network and logical network will be created for each VLAN, with the associated VLAN ID defined at the logical network object. Each logical network object will be associated with the Hyper-V switch. Each VM network will be associated with the appropriate guest virtual machine.

Active Directory Integration. The VMM machine object will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to the role; please refer to the Active Directory section for more details.
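For guests where Dynamic Memory and the turn-off stop action apply, the settings above map onto the Hyper-V cmdlets as below; the VM name and memory values are examples only:

```powershell
# Dynamic Memory sized around the guest's expected working set,
# per the minimum/maximum guidance in Table 44
Set-VMMemory -VMName "INFRA-SF01" -DynamicMemoryEnabled $true `
             -StartupBytes 4GB -MinimumBytes 2GB -MaximumBytes 4GB

# Turn off on host stop, avoiding the RAM-sized .bin saved-state file
Set-VM -Name "INFRA-SF01" -AutomaticStopAction TurnOff
```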

Storage Technology Overview

This section provides an overview of the NetApp technology, the storage design, the aggregate and volume layout, and the Storage Virtual Machine design.

NetApp FAS Technology Overview

NetApp FAS systems are enterprise-class storage systems which deliver the availability, scalability, performance, and flexibility to drive the most demanding SAN and NAS workloads. The FAS family, combined with the NetApp clustered Data ONTAP 8 operating system, eliminates disruptions to client operations. The feature-rich platform follows a common set of guiding principles:
- Nondisruptive Operations: storage maintenance, hardware lifecycle operations, and software upgrades can be performed without interruption. Planned and unplanned downtime can be eliminated for continuous business availability.
- Proven Efficiency: consolidation and sharing of the same infrastructure for workloads or tenants with different performance, capacity, and security requirements saves time and money.
- Seamless Scalability: capacity, performance, and operations can be scaled without compromise, encompassing multiple media types and unifying SAN and NAS protocols.

Clustered Data ONTAP

A key differentiator in a clustered Data ONTAP environment is that high-availability (HA) pairs of storage controllers are combined into a cluster to form a shared pool of physical resources that are available to applications, SAN hosts, and NAS clients. The shared pool appears as a single system image for management purposes. This means there is a single common point of management, whether through the graphical user interface (GUI) or command-line interface tools, for the entire storage cluster.

Figure 27. Clustered Data ONTAP product overview

Clustered Data ONTAP allows for the transparent movement of data and network connections anywhere within the storage cluster. The capability to move individual data volumes or LUNs,

known as NetApp DataMotion, allows redistribution across a cluster at any time and for any reason. DataMotion is transparent and non-disruptive to NAS and SAN hosts, and it enables the storage infrastructure to continue to serve data throughout these changes. To improve data access in NAS applications, NetApp virtualizes storage at the file-system level. This enables all client nodes to mount a single file system, access all stored data, and automatically accommodate physical storage changes that are fully transparent to the clients. Each client or server can access a huge pool of data residing across the clustered Data ONTAP system through a single mount point.

With clustered Data ONTAP, each storage controller is referred to as a cluster node. Nodes are allowed to be different FAS models and sizes. Disks are grouped into aggregates, which are groups of disks of a particular type composed of one or more RAID groups protected using NetApp RAID-DP technology.

Multiprotocol Unified Architecture

A multiprotocol unified architecture provides the capability to support several data access protocols concurrently in the same overall storage system, over a whole range of controller and disk storage types. Clustered Data ONTAP supports a full range of data access protocols concurrently. The supported protocols include:
- SMB 1.0, 2.0 and 2.1
- SMB 3.0, including support for non-disruptive failover in Microsoft Hyper-V environments (clustered Data ONTAP 8.2) and Microsoft SQL Server (clustered Data ONTAP 8.2.1)
- NFSv3, NFSv4, and NFSv4.1, including pNFS
- iSCSI
- Fibre Channel
- FCoE

Data replication and storage efficiency features are seamlessly supported across all protocols in clustered Data ONTAP.

Figure 28. Multiprotocol Unified Architecture Overview

SAN Data Services

With the supported SAN protocols (Fibre Channel, FCoE, and iSCSI), clustered Data ONTAP provides LUN services: the capability to create LUNs and make them available to attached hosts. Because the cluster consists of numerous controllers, there are several logical paths to any individual LUN. A best practice is to configure at least one path per node in the cluster. Asymmetric Logical Unit Access (ALUA) is used on the hosts so that the optimised path to a LUN is selected and made active for data transfer. Support for multipath I/O is also available from leading OS and third-party driver vendors.

NAS Data Services

Clustered Data ONTAP can provide a single namespace with the supported NAS protocols, SMB (CIFS) and NFS: NAS clients can access a very large data container by using a single NFS mount point or CIFS share. Each client therefore needs only to mount a single NFS file system mount point or access a single CIFS share, requiring only the standard NFS and CIFS client code for each operating system. The namespace of clustered Data ONTAP is composed of potentially thousands of volumes joined together by the cluster administrator. To the NAS clients, each volume appears as a folder or subdirectory, nested off the root of the NFS file system mount point or CIFS share. Volumes can be added at any time and are immediately available to the clients, with no remount required for visibility to the new storage. The clients have no awareness that they are crossing volume boundaries as they move about in the file system, because the underlying structure is completely transparent. Data ONTAP can be architected to provide a single namespace, yet it also supports the concept of several securely partitioned namespaces, called Storage Virtual Machines (SVMs). This accommodates the requirement for multi-tenancy or isolation of particular sets of clients or applications.
The illustration below shows a single SVM in a two-node cluster providing data services to SAN hosts and NAS clients.

Figure 29. SVM Overview

By virtualizing physical resources into the SVM construct, clustered Data ONTAP implements multi-tenancy and scale-out, allowing the cluster to host isolated, independent workloads and applications.

Logical Interface (LIF) Overview

A LIF (logical interface) is an IP address with associated characteristics, such as a role, a home port, a home node, a routing group, a list of ports to fail over to, and a firewall policy. You can configure LIFs on ports over which the cluster sends and receives communications over the network. LIFs can be hosted on the following ports:
- Physical ports that are not part of interface groups
- Interface groups
- VLANs
- Physical ports or interface groups that host VLANs

LIF failover refers to the automatic migration of a LIF in response to a link failure on the LIF's current network port. When such a failure is detected, the LIF is migrated to a different physical port. A failover group contains a set of network ports (physical ports, VLANs, and interface groups) on one or more nodes. A LIF can subscribe to a failover group; the network ports present in the failover group define the failover targets for the LIF.

Figure 30. Clustered Data ONTAP Networking Logical Architecture

Flash Pool Overview

Flash Pool enables faster workloads for both reads and writes by shifting operations to SSD from traditional HDD. Shared storage infrastructures experience dynamic changes in workload as they respond to the demands of multiple hosted applications. Flash Pool reacts in real time to these urgent changes, unlike traditional automated tiering solutions that wait for subsequent data movement windows.
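Using ONTAP 8.2-era CLI syntax (an assumption on our part; the failover-group commands changed in later releases), creating a failover group and subscribing a LIF to it would look roughly like:

```
cluster1::> network interface failover-groups create -failover-group fg_vlan130 -node cluster1-01 -port a0a-130
cluster1::> network interface failover-groups create -failover-group fg_vlan130 -node cluster1-02 -port a0a-130
cluster1::> network interface modify -vserver svm1 -lif svm1_lif1 -failover-group fg_vlan130
```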

Figure 31. Flash Pool Technology Overview

Windows Server 2012 Hyper-V Integration

In clustered Data ONTAP 8.2, NetApp implements SMB 3.0, the protocol's latest version, with features like persistent file handles (continuously available file shares), and fully supports transparent clustered client failover and the witness protocol. Datacentre design is simplified with SMB 3.0 file shares, since several virtual machine hard disks (VHDs or VHDXs) can be located on a single file share. Other features in the SMB 3.0 space that provide added value include scale-out awareness, ODX, and VSS for SMB file services. Offloaded Data Transfer (ODX) is particularly powerful because it can rapidly speed up virtual machine deployment by cloning virtual machines on the storage system with zero network traffic required on the Windows Server 2012 Hyper-V host side. ODX as implemented in clustered Data ONTAP can even work seamlessly between volumes used for NAS or SAN protocols on the storage, a feature exclusive to NetApp. LUNs that are used as cluster shared volumes (CSVs) can be accessed seamlessly by several Windows Server 2012 Hyper-V nodes while being distributed across several physical NetApp storage nodes. For Microsoft Windows Server 2012 Hyper-V on NetApp, this provides a truly distributed file system, with everything mounted under a common namespace for SAN, NAS, or both.

Storage Design

NetApp FAS2552 Hybrid Storage Array Architecture

For this CVS design the physical storage is a NetApp FAS2552 Hybrid Storage Array. The FAS2552 is a high-availability dual-controller configuration with 4 x 200GB SSDs and 20 x 600GB 10K SAS HDDs.

Figure 32. FAS2552 Physical Front View

Note that the illustration below is a logical view, since both controllers and disks for this design reside in the one physical 2U chassis; the diagram illustrates the multipath high-availability (HA) connectivity to the disk drives which reside within the chassis.

Figure 33. FAS2552 Logical View

Aggregate Design

In this storage design a hybrid disk configuration consisting of 4 x 200GB SSDs and 20 x 600GB 10K SAS HDDs is used. As shown in the illustration below, each node has a 2 x 600GB disk RAID-4 root aggregate. As of Data ONTAP 8.2, RAID types can be mixed in a Flash Pool, and for the data aggregate allocated to node01 we set up a 15-disk RAID-DP RAID group for the SAS HDDs while using RAID-4 for the SSD RAID group. One 200GB SSD and one 600GB SAS HDD were reserved as spare drives.

The illustration below describes the aggregate layout for this CVS design:

Figure 34. Aggregate Layout for the NetApp FAS2552

Storage Virtual Machine Design

In this storage design two Storage Virtual Machines (SVMs) are configured on the NetApp FAS2552 Hybrid Storage Array, to provide network and operational separation of the Hyper-V and user file shares. As per NetApp best practices, one data logical interface (LIF) is created per controller for each SVM:
- The first SVM, svm1, is configured with connectivity to VLAN 130 for the provision of SMB 3.0 file shares to Hyper-V
- The second SVM, svm2, is configured with connectivity to VLAN 64 for the provision of SMB 3.0 file shares for user profiles and redirected folders
- Both SVMs are configured with a logical interface (LIF) with connectivity to VLAN 25 for SVM management

The illustration below describes the Storage Virtual Machine layout in this CVS design:

Figure 35. Storage Virtual Machine Layout

In accordance with NetApp best practices the following options were configured on the SVMs:
- SMB 3.0 Enabled: true
- Copy Offload (ODX) Feature: true
- Remote VSS Settings (Shadow Copy Feature VSS): true
- Automatic Node Referral Settings: false

For additional information regarding these settings refer to TR-4172: Microsoft Hyper-V over SMB 3.0 with Clustered Data ONTAP: Best Practices.

Flexible Volume (FlexVol) Design

NetApp CIFS SVM root and data volumes should be FlexVol volumes and have a security style of NTFS. The table below details the flexible volume layout in this CVS design (Volume | Aggregate | Allocated Size | Thin Provisioned | Purpose):
- svm1_root | aggr1_cluster1 | 1 GB | No | Root volume for the cvs_svm1 SVM
- vol_infra | aggr1_cluster1 | 250 GB | Yes | Infrastructure VM virtual disks
- vol_media | aggr1_cluster1 | 250 GB | Yes | Media and ISO repository, SCVMM Library
- vol_quorum | aggr1_cluster1 | 2 GB | No | Hyper-V failover cluster quorum
- vol_vdi1 | aggr1_cluster1 | — GB | No | XenDesktop/XenApp HVD/HSD VM disk files
- vol_vdi2 | aggr1_cluster1 | — GB | No | XenDesktop/XenApp HVD/HSD VM disk files
- svm2_root | aggr1_cluster1 | 1 GB | No | Root volume for the cvs_svm2 SVM
- vol_upm | aggr1_cluster1 | — GB | Yes | UPM data and redirected folders; assumes 1GB per user
- TOTAL: 5504 GB (5.5 TB)

Table 45: Volume Layout

Copy Offload (ODX) Settings

The ODX feature in clustered Data ONTAP allows copies of a master VHDX to be created simply by copying the master VHDX file hosted on the NetApp array. Because an ODX-enabled copy occurs directly on the NetApp storage array and does not transport any data across the network, the copy completes significantly faster and with minimal storage array CPU impact. The deduplication storage efficiency feature must be enabled to support Copy Offload (ODX) functionality; in this CVS design it is enabled on the vol_vdi1 and vol_vdi2 flexible volumes. Deduplication on these volumes provides a capacity saving that depends on the degree of data commonality across the VM disk files.

The illustration below describes the copy offload mechanism within clustered Data ONTAP.

Figure 36. Copy Offload (ODX)
(figure: the host issues an offload read and receives a token from the array, then presents the token in an offload write; the data movement occurs within the array.)

File Share Design

To support SMB 3.0 continuous availability for the Hyper-V VM repositories, the infra, vdi1 and vdi2 file shares are configured with the continuously-available property enabled and the change-notify property disabled.

SVM   File Share Name  Flexible Volume  VLAN      Logical Interface (LIF)  Failover Group
svm1  infra            vol_infra        VLAN 130  svm1_lif1 / svm1_lif2    fg_vlan130
svm1  media            vol_media        VLAN 130  svm1_lif1 / svm1_lif2    fg_vlan130
svm1  quorum           vol_quorum       VLAN 130  svm1_lif1 / svm1_lif2    fg_vlan130
svm1  vdi1             vol_vdi1         VLAN 130  svm1_lif1 / svm1_lif2    fg_vlan130
svm1  vdi2             vol_vdi2         VLAN 130  svm1_lif1 / svm1_lif2    fg_vlan130
svm2  upm              vol_upm          VLAN 64   svm2_lif1 / svm2_lif2    fg_vlan64

Table 46: File Shares
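As an illustrative check (a plain data model, not a NetApp API), the share properties called out above can be expressed and verified as follows. The upm share's properties are an assumption for a user-data share and are not stated in the design:

```python
# Illustrative model of the Table 46 shares and the properties the text
# calls out: the three Hyper-V VM repository shares must be continuously
# available with change-notify disabled.
shares = {
    "infra": {"svm": "svm1", "volume": "vol_infra", "vlan": 130,
              "continuously_available": True, "change_notify": False},
    "vdi1":  {"svm": "svm1", "volume": "vol_vdi1", "vlan": 130,
              "continuously_available": True, "change_notify": False},
    "vdi2":  {"svm": "svm1", "volume": "vol_vdi2", "vlan": 130,
              "continuously_available": True, "change_notify": False},
    # Assumed defaults for the user-data share (not stated in the design):
    "upm":   {"svm": "svm2", "volume": "vol_upm", "vlan": 64,
              "continuously_available": False, "change_notify": True},
}
for name in ("infra", "vdi1", "vdi2"):
    s = shares[name]
    assert s["continuously_available"] and not s["change_notify"], name
print("all Hyper-V VM repository shares are continuously available")
```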

As noted above, the CIFS SMB file shares are configured with an NTFS security style. The table below describes the NTFS groups and security permissions associated with each share.

File Share Name  NTFS Groups                                         NTFS Permissions
infra            VMM-Full-Admins Group                               Full Control
media            VMM-Full-Admins Group                               Full Control
quorum           Hyper-V machine account(s), VMM-Full-Admins Group   Full Control
vdi1             Hyper-V machine account(s), VMM-Full-Admins Group   Full Control
vdi2             Hyper-V machine account(s), VMM-Full-Admins Group   Full Control
upm              VMM-Full-Admins Group                               Full Control

Table 47: File Share Permissions

Refer to the Clustered Data ONTAP 8.2: File Access Management Guide for CIFS for detailed information regarding the configuration of CIFS shares and NTFS permissions 37.

Logical Interface (LIF) Design

In this CVS design the architectural decision is to serve svm2 traffic from the 1GbE ports on the second controller, balancing network traffic across the two controllers and thereby making best use of the available CPU resources. NetApp recommends that:

- All data ports are members of an appropriate failover group
- All data LIFs are associated with the appropriate VLAN failover group
- The same physical port on each controller is used for the same purpose

The table below details each LIF and its associated SVM, role, protocols, home port, and failover group.

LIF        SVM (vserver)  Role  Protocol  Home Port            Failover Group
svm1_lif1  svm1           data  cifs,nfs  cluster1-01:a0a-130  fg_vlan130
svm1_lif2  svm1           data  cifs,nfs  cluster1-01:a0a-130  fg_vlan130
svm1_lif3  svm1           mgmt  -         cluster1-01:a0a-25   fg_vlan25
svm2_lif1  svm2           data  cifs,nfs  cluster1-02:a0a-64   fg_vlan64
svm2_lif2  svm2           data  cifs,nfs  cluster1-02:a0a-64   fg_vlan64
svm2_lif3  svm2           mgmt  -         cluster1-02:a0a-25   fg_vlan25

Table 48: LIF Configuration

37 Particular attention should be paid to the security requirements for the user profile and redirected folder share in a production environment; the settings here are less restrictive due to the requirements of the Login VSI application used to simulate user load.
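A simple consistency check over the LIF layout in Table 48 can be sketched as follows: the VLAN tag on each LIF's home port should match its failover group, so a failover never moves a LIF onto the wrong VLAN.

```python
# Sketch: verify that each LIF's home-port VLAN tag (e.g. "130" in
# "a0a-130") matches its failover group name, per Table 48.
lifs = {
    # name: (home_port, failover_group)
    "svm1_lif1": ("cluster1-01:a0a-130", "fg_vlan130"),
    "svm1_lif2": ("cluster1-01:a0a-130", "fg_vlan130"),
    "svm1_lif3": ("cluster1-01:a0a-25", "fg_vlan25"),
    "svm2_lif1": ("cluster1-02:a0a-64", "fg_vlan64"),
    "svm2_lif2": ("cluster1-02:a0a-64", "fg_vlan64"),
    "svm2_lif3": ("cluster1-02:a0a-25", "fg_vlan25"),
}
for name, (port, group) in lifs.items():
    vlan = port.rsplit("-", 1)[1]  # e.g. "130" from "cluster1-01:a0a-130"
    assert group == f"fg_vlan{vlan}", name
print("all LIF home ports match their failover groups")
```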

FAS2552 Desktop Virtualization Design

In accordance with best practices, at least one data LIF was created per node for every SVM in the cluster. Each data LIF is configured to auto-revert, each LIF's IP address has an entry in DNS, and no NetBIOS aliases are used for DNS entries. Network interface failover groups are configured to specify the network ports to which a LIF can be moved. SnapManager for Hyper-V (SMHV) also requires one additional management LIF per SVM.

The following settings were also configured on the network in accordance with NetApp best practice recommendations:

- Flow control is disabled
- Jumbo frames are enabled
- Link Aggregation Control Protocol (LACP) is enabled

For additional information regarding general NetApp networking recommendations refer to TR-4182: Ethernet Storage Best Practices for cDOT, and for detail regarding the setup of jumbo frames and link aggregation with Hyper-V and clustered Data ONTAP refer to TR-4339: FlexPod Express with Microsoft Windows Server 2012 R2 Hyper-V: Small and Medium Configurations Implementation Guide.

The illustration below describes the configuration of jumbo frames on each network component in this CVS design.

Figure 37. Jumbo Frames
(figure: the Hyper-V vSwitch and physical NICs are set to MTU 9014, switch1 and switch2 to MTU 9126, and the NetApp FAS2552 Hybrid Storage Array to MTU 9000.)

IP Fast Path

IP fast path is a mechanism that uses the network interface of an inbound request to send the response, bypassing a routing table lookup. Fast path is enabled by default for all TCP and NFS-over-UDP connections. During performance testing it was determined that IP fast path caused additional SMB latency in the CVS network environment. This issue was mitigated by disabling the IP fast path option on the NetApp FAS2552, which allowed the expected SMB performance to be achieved. For more information refer to NetApp Support: Enabling or disabling fast path.
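The benefit of jumbo frames for storage traffic can be illustrated with simple frame-count arithmetic. The payload-per-frame figures below assume plain IPv4 and TCP headers (40 bytes, ignoring TCP options), so treat this as an approximation:

```python
# Sketch: Ethernet frames (and therefore per-frame header overhead)
# needed to carry a 1 MiB SMB transfer at standard vs jumbo MTU.
import math

def frames_needed(transfer_bytes: int, mtu: int, ip_tcp_overhead: int = 40) -> int:
    """Frames required to carry transfer_bytes at a given MTU."""
    payload_per_frame = mtu - ip_tcp_overhead
    return math.ceil(transfer_bytes / payload_per_frame)

one_mib = 1024 * 1024
print(frames_needed(one_mib, 1500))  # 719 frames at the standard MTU
print(frames_needed(one_mib, 9000))  # 118 frames with jumbo frames
```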

10 Gigabit Ethernet

Clustered Data ONTAP requires more cabling than a traditional deployment because of its enhanced functionality. A minimum of three networks is required when creating a new clustered Data ONTAP system: cluster (10GbE), management (1GbE), and data (1GbE or 10GbE). Although 10GbE is not required for data, it is a best practice for optimal data traffic. In this CVS design the architectural decision is to present data via an interface group consisting of multiple 1GbE ports on the NetApp FAS2552 Hybrid Storage Array to reduce the overall cost of the solution. In addition, a switchless cluster design was utilised for the single-pod design by interconnecting the 10GbE cluster interfaces built into the FAS2552 controllers within the chassis. Note that when scaling beyond a single pod to add desktop capacity, a dedicated 10GbE cluster switch is required to interconnect additional NetApp FAS2552 Hybrid Storage Arrays into a single cluster for ease of management.

Windows Server 2012 Hyper-V Integration

Microsoft SCVMM 2012 R2 uses the new Microsoft Storage Management Service to communicate with the external NetApp storage array through a Storage Management Initiative Specification (SMI-S) agent. The Storage Management Service is installed by default during the installation of SCVMM 2012 R2. In this CVS design the NetApp Data ONTAP SMI-S Agent v5.1 is installed on the Windows Server 2012 R2 SCVMM virtual server. In accordance with Microsoft requirements and NetApp best practices, the SMI-S agent interface is used to create and deploy new storage from the VMM console.

The illustration below describes the SMI-S logical data flow.

Figure 38. Data ONTAP SMI-S Agent 5.1 interaction with NetApp storage system
(figure: the MGT01 SCVMM 2012 R2 management server sends SMI-S traffic to the NetApp SMI-S Agent v5.1, which maps the AD user to a NetApp FAS user and issues API calls directly to Data ONTAP on the NetApp FAS2552 Hybrid Storage Array.)

For additional information regarding these settings refer to TR-4271: Best Practices and Implementation Guide for NetApp SMI-S Agent.

Network Overview

Within the test environment, a pair of Cisco Nexus 3048TP 1GbE switches was utilised to provide network connectivity for storage, host management, virtual machine and CIMC network traffic. The customer can opt to leverage their existing 1GbE network switch infrastructure to minimise hardware acquisition costs.

The illustration below describes the connection topology of the complete environment and the associated network traffic types using 1GbE 38. Although 10GbE networking is supported by the NetApp platform, a 1GbE network is leveraged to minimise the total cost of the solution.

Figure 39. 1GbE Network Connectivity Topology

38 Alternatively, the NetApp FAS2552 may be connected to 10GbE switching. For the purpose of the CVS, 1GbE networking was leveraged to reduce the overall solution cost for the target solution scale.

Network Components

Category                   Description / Decision
Switch                     Cisco Nexus 3048TP 48-port switches, or customer-defined switches. Refer to the Appendix for network switch requirements.
Connectivity               1GbE per port. Uplink from the top-of-rack switches to the upstream switching fabric.
Switch Port Configuration  Hyper-V Team A (host management and VM switch): LACP, trunked (VM data VLANs), native VLAN = management VLAN. Hyper-V Team B (storage and live migration): LACP, trunked (live migration), native VLAN = storage VLAN, jumbo frames enabled. Management interfaces: access port (out-of-band management VLAN).

Table 49: Network Key Decisions

VLAN Information

VLAN Name                    VLAN ID (ID reference only)  Description
Out of band Management VLAN  10                           Management VLAN
Host Management VLAN         20                           Hyper-V host management VLAN
Infrastructure Server VLAN   25                           Infrastructure server VLAN
Hyper-V Live Migration VLAN  33                           Hyper-V live migration VLAN. Note: this VLAN is non-routable
HVD_VLAN_1                   40                           Hosted Virtual Desktop VLAN (HVDs)
HVD_VLAN_2                   42                           Hosted Virtual Desktop VLAN (HVDs)
HSD_VLAN_1                   80                           Hosted Shared Desktop VLAN
SMB3_VMStorage               130                          SMB 3.0 storage VLAN. Note: this VLAN is non-routable
SMB_UserData                 64                           CIFS SMB file sharing for user profile data

Table 50: VLAN Requirements

The pair of Cisco Nexus 3048TP switches can optionally provide Layer 3 routing capability to the solution components. Switch ports will be configured as trunk ports with the native VLAN defined for the Hyper-V management interfaces. Each Cisco C240 M3 SFF server will have its 4 on-board network adapter ports cross-patched to the two Nexus 3048TP switches. The on-board network adapters from the Cisco C240 M3 SFF servers will be configured as Hyper-V load balancing and failover teams to provide bandwidth aggregation and/or traffic failover, maintaining connectivity in the event of a network component failure. The switch ports for each Hyper-V NIC team will be configured in LACP mode.

DHCP

Category                 Description / Decision
Version, Edition         Windows Server 2012 R2 with the DHCP role enabled
Servers (IPv4 Options)   If the customer does not have a suitable redundant DHCP service available, two servers described in this design may have the DHCP role enabled to reduce the cost of licences. These servers then provide IP addressing to the virtual desktops. Refer to the Appendix for DHCP scope details.
Failover                 Failover enabled

Table 51: DHCP Requirements

Two servers will also host Microsoft DHCP services for the IP addressing requirements of the virtual desktops. DHCP relay will be configured on the Cisco Nexus 3048TP switches, allowing client DHCP discover packets to be forwarded to their respective DHCP servers. DHCP scopes will be deployed as highly available in load-balanced mode, using the capabilities of the Windows Server 2012 R2 DHCP role.

Hardware Layer Design

The hardware layer defines the type and amount of physical resources required to support the Citrix Validated Solution.

Physical Architecture Overview

This Citrix Validated Solution is built using Cisco C240 M3 SFF servers, Cisco Nexus 3048TP switches and the NetApp FAS2552 Hybrid Storage Array; these components define the overall hardware architecture.

The illustration below describes the physical hardware component view for the Hosted Shared Desktop platform delivered by Citrix XenDesktop, supporting up to 700 hosted shared desktops.

HSD Physical Hardware View

Figure 40. Hardware required to support 700 HSD users

The illustration below describes the physical hardware component view for the Hosted Virtual Desktop platform delivered by Citrix XenDesktop, supporting up to 700 desktops.

HVD Physical Hardware View

Figure 41. Hardware required to support 700 Windows 7 HVD users

Physical Component Overview

Hardware Component  Component Information/Revision
Compute             Cisco UCSC-C240-M3 SFF rack-mounted servers; dual Intel Xeon E5-2660 v2 2.20GHz CPUs; 16GB DDR3 1866MHz RDIMMs; MegaRAID 9271CV; disk drives (per Cisco C240 M3 SFF server): 2 x 300GB 6Gb SAS 10K RPM SFF HDD
Network adapters    Intel on-board 1Gbps Ethernet adapter (4 port)
Storage             NetApp FAS2552 Hybrid Storage Array
Network Switching   2 x Cisco Nexus 3048TP switches

Table 52: Hardware Components

Server Hardware

The Cisco C240 M3 SFF server is an enterprise-class, high-density rack-mounted server. The Microsoft Hyper-V Server 2012 R2 operating system (Hyper-V) will be installed on a RAID 1 mirror. The key design decisions for the server hardware are as follows:

Decision Point         Description / Decision
Server Hardware Model  UCSC-C240-M3 SFF 39
Compute                Cisco UCSC-C240-M3 SFF rack-mounted servers; dual Intel Xeon E5-2660 v2 2.20GHz CPUs; 16GB DDR3 1866MHz RDIMMs; dual 650W power supplies. HSD servers (includes infrastructure VMs on shared hosts): total of 128GB RAM per server node. HVD servers (includes infrastructure VMs on shared hosts): total of 256GB RAM per server node.
Firmware Revisions     2.0(3d)
BIOS                   Power settings optimised for maximum performance; refer to the Appendix for details
Local Storage          Storage controller: MegaRAID 9271CV. Disk drives (per Cisco C240 M3 SFF server): 2 x 300GB 6Gb SAS 10K RPM SFF HDD

Table 53: Cisco C240 M3 SFF Server Hardware

Storage Hardware

The NetApp FAS2552 Hybrid Storage Array is an affordable storage foundation that reduces the complexity associated with performance and capacity growth. Every system includes unified support for NAS and SAN workloads and can be configured to meet specific price/performance and latency goals. The key decisions for the storage hardware are as follows:

Decision Point            Description / Decision
Hardware Model            FAS2552A-001-R6
Storage Operating System  Clustered Data ONTAP
Form Factor               2U / 24-drive
ECC Memory                36GB
NVMEM/NVRAM               4GB
Disk Drives               4 x 200GB SSD and 20 x 600GB 10K SAS

Table 54: NetApp FAS2552 Hybrid Storage Array Hardware

39 Note: other Cisco UCS C-Series models can be leveraged for compute; however, due to varying hardware specifications, HVD and HSD densities will also differ. Refer to the following URL for the latest Cisco server models.

Bill of Materials - Hosted Shared Desktops

The following table describes the required bill of materials for a single Hosted Shared Desktop Cisco C240 M3 SFF server (total RAM of 128GB):

Server Hardware
Part Number        Description                                                 Quantity
UCS-SPR-C240-P1    UCS C240 M3 SFF 2xE5-2660v2 2x16GB 9271CV 2x650W SD RAILS   -
UCS-CPU-E52660B    2.20 GHz E5-2660 v2/95W 10C/25MB Cache/DDR3 1866MHz         -
UCS-MR-1X162RZ-A   16GB DDR3-1866MHz RDIMM/PC3-14900/dual rank/x4/1.5V         -
UCS-RAID9271CV-8I  MegaRAID 9271CV with 8 internal SAS/SATA ports with Supercap -
CAB-C13-C14-2M     Power cord jumper, C13-C14 connectors, 2 metre length       2
UCSC-PSU-650W      650W power supply for C-Series rack servers                 2
UCS-SD-16G         16GB SD card module for UCS servers                         1
UCSC-RAIL-2U       2U rail kit for UCS C-Series servers                        1
N20-BBLKD          UCS 2.5 inch HDD blanking panel                             22
UCSC-HS-C240M3     Heat sink for UCS C240 M3 rack server                       2
UCSC-PCIF-01F      Full-height PCIe filler for C-Series                        4
A03-D300GA2        300GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted   2

Table 55: BOM for HSD server node

Server Support and Maintenance
Part Number        Description                                                 Quantity
CON-SNTP-SRC240P1  SMARTnet Premium 24x7x4 UCS C240 M3 SFF 2xE5-2660B, 1 year  1

Table 56: Hardware Maintenance for HSD

Bill of Materials - Hosted Virtual Desktops

The following table describes the required bill of materials for a single Hosted Virtual Desktop Cisco C240 M3 SFF server (total RAM of 256GB):

Server Hardware
Part Number        Description                                                 Quantity
UCS-SPR-C240-P1    UCS C240 M3 SFF 2xE5-2660v2 2x16GB 9271CV 2x650W SD RAILS   -
UCS-CPU-E52660B    2.20 GHz E5-2660 v2/95W 10C/25MB Cache/DDR3 1866MHz         -
UCS-MR-1X162RZ-A   16GB DDR3-1866MHz RDIMM/PC3-14900/dual rank/x4/1.5V         -
UCS-RAID9271CV-8I  MegaRAID 9271CV with 8 internal SAS/SATA ports with Supercap -
CAB-C13-C14-2M     Power cord jumper, C13-C14 connectors, 2 metre length       2
UCSC-PSU-650W      650W power supply for C-Series rack servers                 2
UCS-SD-16G         16GB SD card module for UCS servers                         1
UCSC-RAIL-2U       2U rail kit for UCS C-Series servers                        1
N20-BBLKD          UCS 2.5 inch HDD blanking panel                             22
UCSC-HS-C240M3     Heat sink for UCS C240 M3 rack server                       2
UCSC-PCIF-01F      Full-height PCIe filler for C-Series                        4
A03-D300GA2        300GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted   2

Table 57: BOM for HVD server node

Server Support and Maintenance
Part Number        Description                                                 Quantity
CON-SNTP-SRC240P1  SMARTnet Premium 24x7x4 UCS C240 M3 SFF 2xE5-2660B, 1 year  1

Table 58: Hardware Maintenance for HVD server node

Bill of Materials - Storage

The following table describes the required bill of materials for the NetApp storage required to support either Hosted Shared Desktops or Hosted Virtual Desktops for up to 700 users.

Storage Hardware
Part Number          Description                                    Quantity
FAS2552A-001-R6      FAS2552 High Availability System               2
FAS R6-C             FAS2552,4x200GB,20x600GB,Mixed,-C              1
DOC-2552-C           Documents,2552,-C                              1
X80101A-R6-C         Bezel,FAS2552,R6,-C                            1
X5526A-R6-C          Rackmount Kit,4-Post,Universal,-C,R6           1
X6566B-05-R6         Cable,Direct Attach CU SFP+ 10G,0.5M           2
X1985-R6-C           12-Node Cluster Cable Label Kit,-C             1
X6557-EN-R6-C        Cbl,SAS Cntlr-Shelf/Shelf-Shelf/HA,0.5m,EN,-C  2
X6560-R6-C           Cable,Ethernet,0.5m RJ45 CAT6,-C               1
X1558A-R6-C          Power Cable,In-Cabinet,48-IN,C13-C14,-C        2
X R6                 Cable,Twinax CU,SFP+,5M,X1962/X1963/X          -
SW A-CIFS-C          SW-2,CIFS,2552A,-C                             2
SW A-FCP-C           SW-2,FCP,2552A,-C                              2
SW A-FLEXCLN-C       SW-2,Flexclone,2552A,-C                        2
SW A-SRESTORE-C      SW-2,SnapRestore,2552A,-C                      2
SW A-ISCSI-C         SW-2,iSCSI,2552A,-C                            2
SW A-NFS-C           SW-2,NFS,2552A,-C                              2
OS-ONTAP-CAP3-1P-C   OS Enable,Per-0.1TB,ONTAP,Ultra-Stor,1P,-C     8
OS-ONTAP-CAP2-1P-C   OS Enable,Per-0.1TB,ONTAP,Perf-Stor,1P,-C      120
SW-2-CL-BASE         SW-2,Base,CL,Node                              1

Table 59: BOM for NetApp Storage

Storage Support and Maintenance
Part Number          Description                                    Quantity
CS-O2-4HR            SupportEdge Premium 4hr Onsite                 1

Table 60: Storage Hardware Maintenance

SECTION 3: APPENDICES

Appendix A. Further Decision Points

This section defines elements of the Citrix Validated Solution that need further discussion with the customer and are customer-specific:

Decision Point: Naming Convention
Component nomenclature will need to be defined by the customer during the Analysis phase of the project.

Decision Point: Database Information
Microsoft SQL version, server name, instance name, port, database name, and resource capacity (CPU, memory, storage).

Decision Point: Microsoft Volume Licensing
Microsoft licensing of the target devices is a requirement for the Citrix Validated Solution and will be based on the customer's existing Microsoft licensing agreement.

Decision Point: Microsoft RDS Licensing (Terminal Server CALs)
At least two Microsoft RDS licence servers should be defined when using RDS workloads within the customer environment, including the mode of operation: per user or per device. Once defined, these configuration items will be deployed via Active Directory GPO.

Decision Point: Windows Pagefile
The final applications used and the workload usage patterns required by the customer will influence the requirements and sizing of the Windows pagefile. Further customer validation may be required, dependent on the sizing of the pagefile and its associated storage footprint.

Decision Point: User Logon
All host density numbers quoted within this document are limited by the logon storm during testing, not steady state. CVS testing uses a one-hour logon period for all users of the tests executed as part of the validation. Further analysis may be required for customers with aggressive user logon time frames to their desktops; in this scenario additional resources may be required, which may impact Citrix StoreFront, host density or other related infrastructure.

Decision Point: Active Directory Domain Services
The Active Directory forest and domain will need to be discussed with the customer to ensure sufficient capacity exists to support any additional authentication requirements the proposed solution may impose. Group Policy is likely to be deployed to suit the requirements of the customer. Assuming the existing deployment meets best practices, the GPOs described within this Citrix Validated Solution can be integrated into the customer environment, or configurations may be added directly to existing GPOs. Reference to minimum security baselines in the form of GPOs is the customer's responsibility. In all cases, the GPOs described in this document must be integrated into the customer's existing Active Directory.

Decision Point: User Personalisation
User Profile Management will need to be further defined to meet customer expectations and application-specific requirements, including folder redirection using GPO objects. This document currently describes only the minimal requirements that were used for testing and validation purposes. Please refer to the following link for further details:

Table 61: Further Decision Points

Appendix B. Server Inventory

The following table provides the suggested list of servers, virtual machines and storage configuration.

HSD Servers (supports up to 700 user desktop sessions)

Storage
Qty  OS / Platform              Server Role                   Type     CPU  RAM   Disk                            NIC
1    Clustered Data ONTAP       Unified storage array         FAS2552  -    36GB  4 x 200GB SSD, 20 x 600GB SAS   4 x onboard 1GbE

Physical Servers (Hyper-V Hosts)
Qty  OS                         Server Role                   Type               CPU          RAM    Disk      NIC
5    MS Hyper-V Server 2012 R2  Hyper-V host (infrastructure) Cisco C240 M3 SFF  2 x 10-core  128GB  C:\300GB  On-board 4-port 1GbE

Guest Virtual Machines
Qty  OS                              Server Role                                      Type  CPU     RAM   Disk                NIC
2    Windows Server 2012 R2 Standard Citrix Desktop Delivery Controller               VM    4 vCPU  8GB   C:\100GB            1 vNIC
2    Windows Server 2012 R2 Standard Citrix StoreFront                                VM    2 vCPU  4GB   C:\100GB            1 vNIC
1    Windows Server 2012 R2 Standard Citrix License Management (& SMI-S), DHCP role   VM    2 vCPU  4GB   C:\100GB            2 vNIC
1    Windows Server 2012 R2 Standard Virtual Machine Manager, DHCP role               VM    2 vCPU  8GB   C:\100GB, D:\150GB  2 vNIC
26   Windows Server 2012 R2 Standard or Windows Server 2008 R2 Standard  XenApp RDS   VM    8 vCPU  18GB  C:\100GB            1 vNIC

Assumes the customer will leverage an existing SQL Server environment. Sample configuration (optional) 40:
2    Windows Server 2012 R2 Standard SQL Server 2012 Standard                         VM    2 vCPU  8GB   C:\100GB, D:\150GB  1 vNIC

Table 62: Server Inventory for HSD

40 Optional. An existing SQL environment can also be leveraged to provide database capability to conserve resources.
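A rough RAM budget for the 5-node HSD pod in Table 62 can be sketched as below, checking that the workload still fits with one host offline (N+1). The per-VM figures come from the inventory table; the sketch ignores Hyper-V host overhead and the optional SQL VMs, so treat it as an approximation:

```python
# Sketch: aggregate guest RAM vs N+1 host capacity for the HSD pod.
hosts, ram_per_host_gb = 5, 128
rds_vms, rds_ram_gb = 26, 18
# DDCs (2 x 8GB), StoreFront (2 x 4GB), licensing (4GB), VMM (8GB):
infra_ram_gb = 2 * 8 + 2 * 4 + 4 + 8
workload_gb = rds_vms * rds_ram_gb + infra_ram_gb
n_plus_1_capacity_gb = (hosts - 1) * ram_per_host_gb
print(workload_gb, n_plus_1_capacity_gb)  # 504 vs 512: fits, with little headroom
assert workload_gb <= n_plus_1_capacity_gb
```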

HVD Servers (supports up to 700 Windows 7 virtual desktops)

Storage
Qty  OS / Platform              Server Role                   Type     CPU  RAM   Disk                            NIC
1    Clustered Data ONTAP       Unified storage array         FAS2552  -    36GB  4 x 200GB SSD, 20 x 600GB SAS   4 x onboard 1GbE

Physical Servers (Hyper-V Hosts)
Qty  OS                         Server Role                   Type               CPU          RAM    Disk      NIC
6    MS Hyper-V Server 2012 R2  Hyper-V host (infrastructure) Cisco C240 M3 SFF  2 x 10-core  256GB  C:\300GB  On-board 4-port 1GbE

Guest Virtual Machines
Qty  OS                              Server Role                                      Type  CPU     RAM    Disk                NIC
2    Windows Server 2012 R2 Standard Citrix Desktop Delivery Controller               VM    4 vCPU  8GB    C:\100GB            1 vNIC
2    Windows Server 2012 R2 Standard Citrix StoreFront                                VM    2 vCPU  4GB    C:\100GB            1 vNIC
1    Windows Server 2012 R2 Standard Citrix License Management (& SMI-S), DHCP role   VM    2 vCPU  8GB    C:\100GB            2 vNIC
1    Windows Server 2012 R2 Standard Virtual Machine Manager, DHCP role               VM    2 vCPU  8GB    C:\100GB, D:\150GB  2 vNIC
700  Windows 7 Enterprise x64 SP1    Hosted Virtual Desktop                           VM    2 vCPU  2.5GB  C:\100GB            1 vNIC

Assumes the customer will leverage an existing SQL Server environment. Sample configuration (optional) 41:
2    Windows Server 2012 R2 Standard SQL Server 2012 Standard                         VM    2 vCPU  8GB    C:\100GB, D:\150GB  1 vNIC

Table 63: Server Inventory for HVD on Windows 7

41 Optional. An existing SQL environment can also be leveraged to provide database capability to conserve resources.

Appendix C. Windows 8.1 Hosted Virtual Desktops

This section describes the configuration pertaining to Windows 8.1 Hosted Virtual Desktops.

Overview

The illustration below depicts the combined physical and logical view of the scale-out architecture for the Windows 8.1 HVD platform using the Cisco C240 M3 SFF servers.

Figure 42. Logical View of the HVD Solution with Windows 8.1, up to 500 desktops

Pod of 500 Windows 8.1 HVD Users

The logical and physical components that make up the platform to deliver a 500-user Hosted Virtual Desktop solution (Windows 8.1) are described below:

Figure 43. VM Allocation for HVD on Windows 8.1

Component                                                       Qty
# of Citrix XenDesktop Enterprise users                         Up to 500
# of XenDesktop Sites                                           1
# of XenDesktop Delivery Controllers                            2
# of StoreFront servers                                         2
# of Citrix/Microsoft license servers 42                        1
# of MS SCVMM servers                                           1
# of storage management servers                                 1
# of SQL 2012 Standard servers (DB mirror in active/passive) 43 2
# of Cisco C240 server nodes running MS Hyper-V 2012 R2         5
# of NetApp FAS2552 storage arrays                              1
# of Windows 8.1 Enterprise HVDs (virtual desktops)             500

Table 64: 500-User HVD on Windows 8.1 Pod Detail

42 Optional. License services can be optionally deployed onto existing servers to conserve resources.
43 Optional. An existing SQL environment can also be leveraged to provide database capability to conserve resources.

Server Inventory

The following table provides the suggested list of servers, virtual machines and storage configuration.

HVD Servers (supports up to 500 Windows 8.1 virtual desktops)

Storage
Qty  OS / Platform              Server Role                   Type     CPU  RAM   Disk                            NIC
1    Clustered Data ONTAP       Unified storage array         FAS2552  -    36GB  4 x 200GB SSD, 20 x 600GB SAS   4 x onboard 1GbE

Physical Servers (Hyper-V Hosts)
Qty  OS                         Server Role                   Type               CPU          RAM    Disk      NIC
5    MS Hyper-V Server 2012 R2  Hyper-V host (infrastructure) Cisco C240 M3 SFF  2 x 10-core  256GB  C:\300GB  On-board 4-port 1GbE

Guest Virtual Machines
Qty  OS                              Server Role                                      Type  CPU     RAM    Disk                NIC
2    Windows Server 2012 R2 Standard Citrix Desktop Delivery Controller               VM    4 vCPU  8GB    C:\100GB            1 vNIC
2    Windows Server 2012 R2 Standard Citrix StoreFront                                VM    2 vCPU  4GB    C:\100GB            1 vNIC
1    Windows Server 2012 R2 Standard Citrix License Management (& SMI-S), DHCP role   VM    2 vCPU  8GB    C:\100GB            2 vNIC
1    Windows Server 2012 R2 Standard Virtual Machine Manager, DHCP role               VM    2 vCPU  8GB    C:\100GB, D:\150GB  2 vNIC
500  Windows 8.1 Enterprise x64      Hosted Virtual Desktop                           VM    2 vCPU  2.5GB  C:\100GB            1 vNIC

Assumes the customer will leverage an existing SQL Server environment. Sample configuration (optional) 44:
2    Windows Server 2012 R2 Standard SQL Server 2012 Standard                         VM    2 vCPU  8GB    C:\100GB, D:\150GB  1 vNIC

Table 65: Server Inventory for HVD on Windows 8.1

44 Optional. An existing SQL environment can also be leveraged to provide database capability to conserve resources.

Appendix D. Network Switch Requirements

This section defines the network port requirements based on the number of Cisco C240 M3 servers that will be deployed. Although 10GbE networking is supported by the NetApp platform, a 1GbE network is leveraged to minimise the total cost of the solution and ensure it is fit for purpose for the target market and target scale. Existing 1GbE network switching infrastructure can be utilised to further minimise the integration and hardware acquisition costs associated with deploying this solution, provided the following requirements are considered.

Switch Requirements

Requirement                          Minimum Recommendation           Comments
MTU                                  Jumbo frames                     Required for storage traffic efficiencies
1GbE NIC ports                       5 ports per server node          Refer to the network port density table below for the scale-out model
VLAN support                         802.1Q tagging                   Capability to create VLANs
Stacking or redundant capabilities   Yes                              Switches should be redundant
Uplink to core or upstream switching Multi-gigabit or better uplinks  Sufficient upstream bandwidth to the core network

Table 66: Network Switch Requirements

Network Port Densities

The table below provides a sample configuration and the port density requirements as the platform is scaled out to 700 users, covering: number of HSD users; number of Windows 7 HVD users; number of Cisco C240 M3 nodes; number of 1GbE ports (Hyper-V); number of FAS ports; total number of 1GbE NIC ports; and number of 48-port ToR switches.

Table 67: 1GbE Switch and NIC Port Requirements
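The port-count arithmetic behind the scale-out table can be sketched as below. The 5 ports per server node come from the switch requirements above; the FAS2552 port count used here (8 data ports, 4 per controller) is an assumption for illustration, not a figure stated in the design:

```python
# Sketch: top-of-rack 1GbE port requirement as the pod scales.
def tor_ports_needed(server_nodes: int, ports_per_node: int = 5,
                     fas_ports: int = 8) -> int:
    """Total 1GbE switch ports: server ports plus storage array ports."""
    return server_nodes * ports_per_node + fas_ports

print(tor_ports_needed(5))  # 33 ports for a 5-node (HSD) pod
print(tor_ports_needed(6))  # 38 ports for a 6-node (HVD) pod
```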

Appendix E. IP Addressing

The following tables provide a SAMPLE configuration and should be completed on final deployment of this design.

Hyper-V Hosts:
IP Address  Host Name (example only)  Description
TBD         Hyper-V01                 Hyper-V host
TBD         Hyper-V02                 Hyper-V host
TBD         Hyper-V03                 Hyper-V host
TBD         Hyper-V04                 Hyper-V host
TBD         Hyper-V05                 Hyper-V host
TBD         Hyper-V06                 Hyper-V host

Table 68: Hyper-V Nodes IP Addressing

NetApp FAS2552:
IP Address  Logical Interface  Description
TBD /24     cluster_mgmt       Cluster management
TBD /24     cluster1-01:mgmt1  Controller 1 management
TBD /24     cluster1-02:mgmt1  Controller 2 management
TBD /24     svm1_lif1          SVM1 data LIF
TBD /24     svm1_lif2          SVM1 data LIF
TBD /24     svm1_lif3          SVM1 management LIF
TBD /24     svm2_lif1          SVM2 data LIF
TBD /24     svm2_lif2          SVM2 data LIF
TBD /24     svm2_lif3          SVM2 management LIF

Table 69: NetApp IP Addressing

Control Layer Guest VMs:
IP Address  Server Name (example only)  Description
TBD         DDC01                       Desktop Controller
TBD         DDC02                       Desktop Controller
TBD         SF01                        Access Controller
TBD         SF02                        Access Controller
TBD         MGT01                       CTX / MS license server / SMI-S management server
TBD         VMM01                       Virtual Machine Manager server

Table 70: Control Layer Guest VM IP Addressing

Sample HSD DHCP Scope:
IP Address Range      Scope Name  VLAN ID     Gateway  DNS Servers
~250 addresses (/24)  HSD VLAN 1  HSD VLAN 1  TBD      TBD

Table 71: Sample DHCP Scope Information

Sample HVD DHCP Scopes:
IP Address Range      Scope Name  VLAN ID     Gateway  DNS Servers
~500 addresses (/23)  HVD VLAN 1  HVD VLAN 1  TBD      TBD
~500 addresses (/23)  HVD VLAN 2  HVD VLAN 2  TBD      TBD

Table 72: Sample DHCP Scope Information
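The usable-address arithmetic behind the sample scopes can be sketched as follows: a /24 leaves 254 usable host addresses (roughly the ~250 needed per HSD VLAN) while a /23 leaves 510 (roughly the ~500 needed per HVD VLAN), which is why the HVD scopes use /23 networks:

```python
# Sketch: usable IPv4 host addresses per prefix length.
def usable_hosts(prefix_len: int) -> int:
    """Addresses in the subnet minus the network and broadcast addresses."""
    return 2 ** (32 - prefix_len) - 2

print(usable_hosts(24))  # 254 usable addresses in a /24
print(usable_hosts(23))  # 510 usable addresses in a /23
```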

Appendix F. Service Accounts & Groups

The following tables describe the user groups and service accounts required to deploy the Citrix Validated Solution. It is anticipated that the final configuration will include more groups or accounts to meet customer-specific role-based administrative delegation and security requirements.

Role Groups

Group Role Description                  Name (example only)       Permissions/ACL
XenDesktop Administrators               XenDesktop-Site-Admins    XenDesktop Site Administrators
XenDesktop Server Administrators        XenDesktop-Server-Admins  Local Administrator: XenDesktop Controllers
System Center & Hyper-V Administrators  VMM-Full-Admins           Local Administrator: Hyper-V hosts, SCVMM

Table 73: Group Recommendations

Service Accounts

Account Description                                                                       Name (example only)  Permissions/ACL
SCVMM service account                                                                     svc.scvmm            Member of group: VMM-Full-Admins
SCVMM Run As account, XenDesktop host connection to VMM server, SMI-S provider account 45 svc.scvmm-runas      Member of group: VMM-Full-Admins

Table 74: Service Account Recommendations

45 The user name and password provided should be the same credentials as the SMI-S local user account on the NetApp SMI-S server.

Appendix G. XenDesktop Policies

The policies described below were used throughout validation testing and are provided for reference only. They must be reviewed for customer/environmental suitability.

Test Environment Policy Settings

Policy Setting                                        Configuration State / Value
ICA\Audio quality                                     Medium - optimised for speech
ICA\Auto connect client drives                        Disabled
ICA\Auto-create client printers                       Do not auto-create client printers
ICA\Automatic installation of in-box printer drivers  Disabled
ICA\Client drive redirection                          Prohibited
ICA\Client microphone redirection                     Prohibited
ICA\Desktop wallpaper                                 Prohibited
ICA\Legacy graphics mode                              Enabled
ICA\Menu animation                                    Prohibited
ICA\Multimedia conferencing                           Prohibited
ICA\Target frame rate                                 10 fps
ICA\View window content while dragging                Prohibited
Adobe Flash Delivery\Flash acceleration               Disabled

Table 75: XenDesktop Policies

Appendix H. Cisco C240 M3 SFF Server BIOS Settings

The following table describes the BIOS settings used throughout validation testing:

Processor Setting                    Value
BIOS (boot order)                    Configured boot order: CD/DVD, HDD
TPM Support                          Disabled
Reboot Host Immediately              Disabled
Hyper-Threading                      Enabled
Number of Cores                      All
Execute Disable                      Enabled
Intel VT                             Enabled
Intel VT-d                           Enabled
Intel VT-d Coherency Support         Enabled
Intel VT-d ATS Support               Enabled
CPU Performance                      Enterprise
Hardware Prefetcher                  Disabled
Adjacent Cache Line Prefetcher       Disabled
DCU Streamer Prefetch                Disabled
DCU IP Prefetch                      Disabled
Direct Cache Access Support          Enabled
Power Technology                     Custom
Enhanced Intel SpeedStep Technology  Disabled
Intel Turbo Boost Technology         Disabled
Processor Power State C6             Disabled
Processor Power State C1 Enhanced    Disabled
Frequency Floor Override             Enabled
P-STATE Coordination                 HW ALL

Table 76: Cisco C240 M3 SFF Server BIOS Settings (Processor)

Memory

| Setting | Value |
| --- | --- |
| Select Memory RAS | Maximum Performance |
| DRAM Clock Throttling | Performance |
| NUMA | Enabled |
| Low Voltage DDR Mode | Performance Mode |
| Channel Interleaving | Auto |
| Rank Interleaving | Auto |
| DRAM Refresh Rate | Auto |
| Patrol Scrub | Disabled |
| Demand Scrub | Disabled |
| Altitude | 300 M (Default) |

Table 77: Cisco C240 M3 SFF Server BIOS Settings (Memory)
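Where several hosts are built by hand, drift from the validated baseline in Tables 76 and 77 is easy to introduce. A minimal sketch of a drift check, assuming the host's settings have already been exported into a Python dict (the export step, e.g. via Cisco IMC tooling, is not shown, and the key names are illustrative, not an exact IMC schema):

```python
# Validated BIOS baseline (subset of Tables 76/77); key names are illustrative.
BASELINE = {
    "Hyper-Threading": "Enabled",
    "Intel VT": "Enabled",
    "Intel VT-d": "Enabled",
    "CPU Performance": "Enterprise",
    "Power Technology": "Custom",
    "Select Memory RAS": "Maximum Performance",
    "NUMA": "Enabled",
    "Patrol Scrub": "Disabled",
}

def bios_drift(actual: dict) -> dict:
    """Return {setting: (expected, found)} for every deviation from baseline."""
    return {
        key: (expected, actual.get(key, "<missing>"))
        for key, expected in BASELINE.items()
        if actual.get(key) != expected
    }

# Example: a host rebuilt with factory defaults for two settings.
host = dict(BASELINE, **{"Patrol Scrub": "Enabled", "Power Technology": "Performance"})
print(bios_drift(host))  # reports the two deviations
```

A compliant host returns an empty dict, which makes the check easy to gate a build workflow on.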

Appendix I. Storage Calculations

The storage calculations provided in this section are to be used as a guideline only. Storage requirements for the shared infrastructure and virtual desktop hosts will vary from the table(s) due to differences in workload. Actual customer requirements may also dictate different workload patterns, e.g. based on virtual desktop uptime, application usage, and memory utilisation (e.g. pagefile usage).

The storage calculations below exclude the capacity required to support SQL Server and its databases. Should SQL database services be deployed on this platform, at a minimum an additional 150GB for databases and 25GB for SQL Server operating system files will be required.

A minimum of 2 x NetApp FAS2552 volumes (FlexVol) are required for improved storage performance. The storage calculations are based on the following:

- 2 x XenDesktop Catalogs (either HSD or HVD)
- A separate NetApp FlexVol for each XenDesktop Catalog, e.g.:
  - Volume1 (350 desktops)
  - Volume2 (350 desktops)
- 2 x Master Images per Catalog - this caters for 4 unique Master Images for the desktop type being implemented
- A maximum of 350 desktops per Catalog

NetApp Sizing Guidance

Virtual desktop sizing varies depending on:

- Number of seats
- VM workload (applications, VM size, and VM OS)
- Connection broker
- Hypervisor type
- Provisioning method
- Future storage growth
- Disaster recovery requirements
- User home directories

Many factors affect storage sizing. NetApp has developed a sizing tool, the System Performance Modeler (SPM), to simplify the process of capacity and performance sizing for NetApp systems. NetApp recommends using the SPM tool to size the virtual desktop solution; contact NetApp partners and NetApp sales engineers who have access to SPM. When using SPM to size a solution, it is recommended to size the VDI workload and the CIFS profile/home directory workload separately.

Infrastructure Share Example

Example Share Name: \\<SVM1>\infra (Volume: Infrastructure)

The following table provides guidelines to the storage calculations used to size the volume for the infrastructure VMs:

| Storage Requirement | GB | Description |
| --- | --- | --- |
| Infrastructure VMs (OS disks) | 50 | 2 x (Delivery Controller 25GB OS disk) |
| | 50 | 2 x (StoreFront server 25GB OS disk) |
| | 40 | 1 x (VMM server 40GB OS disk) |
| | 40 | 1 x (management server 40GB disk) |
| SCVMM Library (media repository) | 210 | 1 x (VMM Library) D:\ |
| Total Storage Required | 390GB | |

Actual allocation from the FAS2552 = 500GB (consisting of 2 x 250GB file shares)

Table 78: Storage Sizing for Infrastructure VMs

HSD Pooled - Windows Server 2008 R2 or 2012 R2

The following tables provide guidelines to the storage calculations used to size the volumes for Hosted Shared Desktop VMs supporting 700 users.

Example Share Name: \\<SVM1>\HSD1 (Volume1) for Catalog 1

| Storage Requirement | GB | Description |
| --- | --- | --- |
| Master and base images | 400 | Assumes 2 x Master Images: 2 x (1 x Master Image + 3 snapshots at ~50GB each) |
| 14 HSD VMs (differencing disks) | 350 | 14 x (assumes each HSD VM has a 25GB differencing disk) |
| Total Storage Required | 750GB | |

Actual allocation from the FAS2552 = 2,000GB

Table 79: Storage Sizing for HSD VMs on Catalog 1

Example Share Name: \\<SVM1>\HSD2 (Volume2) for Catalog 2

| Storage Requirement | GB | Description |
| --- | --- | --- |
| Master and base images | 400 | Assumes 2 x Master Images: 2 x (1 x Master Image + 3 snapshots at ~50GB each) |
| 14 HSD VMs (differencing disks) | 350 | 14 x (assumes each HSD VM has a 25GB differencing disk) |
| Total Storage Required | 750GB | |

Actual allocation from the FAS2552 = 2,000GB

Table 80: Storage Sizing for HSD VMs on Catalog 2

HVD Pooled - Windows 7

Example Share Name: \\<SVM1>\HVD1 (Volume1) for Catalog 1

| Storage Requirement | GB | Description |
| --- | --- | --- |
| Master and base images | 400 | Assumes 2 x Master Images: 2 x (1 x Master Image + 3 snapshots at ~50GB each) |
| 350 HVD VMs (differencing disks) | 1,400 | 350 x (assumes each HVD VM has a 4GB differencing disk) |
| Total Storage Required | 1,800GB | |

Actual allocation from the FAS2552 = 2,000GB

Table 81: Storage Sizing for Windows 7 HVD Pooled VMs on Catalog 1

Example Share Name: \\<SVM1>\HVD2 (Volume2) for Catalog 2

| Storage Requirement | GB | Description |
| --- | --- | --- |
| Master and base images | 400 | Assumes 2 x Master Images: 2 x (1 x Master Image + 3 snapshots at ~50GB each) |
| 350 HVD VMs (differencing disks) | 1,400 | 350 x (assumes each HVD VM has a 4GB differencing disk) |
| Total Storage Required | 1,800GB | |

Actual allocation from the FAS2552 = 2,000GB

Table 82: Storage Sizing for Windows 7 HVD Pooled VMs on Catalog 2

HVD Pooled - Windows 8.1

Example Share Name: \\<SVM1>\HVD1 (Volume1) for Catalog 1

| Storage Requirement | GB | Description |
| --- | --- | --- |
| Master and base images | 400 | Assumes 2 x Master Images: 2 x (1 x Master Image + 3 snapshots at ~50GB each) |
| 250 HVD VMs (differencing disks) | 1,500 | 250 x (each HVD VM has a 6GB OS differencing disk) |
| Total Storage Required | 1,900GB | |

Actual allocation from the FAS2552 = 2,000GB

Table 83: Storage Sizing for Windows 8.1 HVD Pooled VMs on Catalog 1

Example Share Name: \\<SVM1>\HVD2 (Volume2) for Catalog 2

| Storage Requirement | GB | Description |
| --- | --- | --- |
| Master and base images | 400 | Assumes 2 x Master Images: 2 x (1 x Master Image + 3 snapshots at ~50GB each) |
| 250 HVD VMs (differencing disks) | 1,500 | 250 x (each HVD VM has a 6GB OS differencing disk) |
| Total Storage Required | 1,900GB | |

Actual allocation from the FAS2552 = 2,000GB

Table 84: Storage Sizing for Windows 8.1 HVD Pooled VMs on Catalog 2
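The per-volume figures in Tables 79 through 84 all follow the same arithmetic: master-image capacity (each master plus its snapshots) plus one differencing disk per desktop. A minimal sketch of that calculation (the helper name is ours, not from the CVS):

```python
def catalog_volume_gb(masters: int, copies_per_master: int, image_gb: int,
                      desktops: int, diff_disk_gb: int) -> int:
    """Guideline capacity for one XenDesktop catalog volume (GB).

    copies_per_master counts the master image itself plus its snapshots,
    e.g. 1 master + 3 snapshots = 4 copies.
    """
    return masters * copies_per_master * image_gb + desktops * diff_disk_gb

# HSD catalog: 2 masters x 4 copies x ~50GB + 14 VMs x 25GB diff disk
print(catalog_volume_gb(2, 4, 50, 14, 25))    # 750, as in Tables 79/80

# Windows 7 HVD catalog: 350 VMs x 4GB diff disk
print(catalog_volume_gb(2, 4, 50, 350, 4))    # 1800, as in Tables 81/82

# Windows 8.1 HVD catalog: 250 VMs x 6GB diff disk
print(catalog_volume_gb(2, 4, 50, 250, 6))    # 1900, as in Tables 83/84
```

Note that the tables then round this guideline figure up to the actual FlexVol allocation (e.g. 2,000GB), leaving headroom for differencing-disk growth between desktop reboots.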

HVD - Persistent Desktops

If the customer is deploying persistent desktops it will not be possible to make accurate storage calculations, since the customer's workloads and environmental conditions may dictate very different workloads from those that can realistically be tested.

DECISION POINT

File Sharing and User Data

| Storage Requirement | GB | Description |
| --- | --- | --- |
| User data | 700 | 1GB allocated for each user's profile data, up to 700 users |
| Failover cluster file share witness | 1 | A file share witness is presented to the cluster to ensure quorum is maintained when an even number of cluster nodes exists |
| VMM Library share | 150 | Storage for the SCVMM Library |
| Total Storage Required | 851GB | |

Actual allocation from the FAS2552 = 1,000GB

Table 85: Storage Sizing for File Sharing Data
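Unlike the catalog volumes, the file-sharing volume in Table 85 scales primarily with the user count. The same guideline arithmetic can be sketched as follows (the helper name and defaults are illustrative; the defaults simply restate Table 85):

```python
def file_share_gb(users: int, profile_gb_per_user: int = 1,
                  witness_gb: int = 1, vmm_library_gb: int = 150) -> int:
    """Guideline capacity for the file-sharing volume (GB).

    Defaults follow Table 85: 1GB of profile data per user, a 1GB
    failover-cluster file share witness, and a 150GB VMM Library share.
    """
    return users * profile_gb_per_user + witness_gb + vmm_library_gb

print(file_share_gb(700))   # 851, as in Table 85
```

As with the catalog volumes, the guideline figure is then rounded up to the actual allocation (1,000GB here).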

Appendix J. Test Results Validation

The operational layer focuses on the performance monitoring and availability management utilised during testing of the Citrix Validated Solution environment [46]. Microsoft System Center Operations Manager was used for end-to-end monitoring of the infrastructure, while Citrix Desktop Director and EdgeSight were used for monitoring Citrix-specific components. Additionally, Liquidware Labs Stratusphere UX was used to verify the end-user experience during Login VSI test workload execution.

End User Experience Monitoring

Comparative analysis was reviewed based on the results from both the Login VSI VSImax and the Stratusphere UX score.

HSD Test Results

The illustration below shows a report from the Stratusphere UX diagnostic tool displayed as a scatter chart. The output is from the HSD hosts under full load (~700 HSD sessions). Note that all sessions are in the "Best" quadrant.

Figure 44. Stratusphere UX User Experience Rating

[46] A full operational readiness and operations management guide is out of scope for this CVS.

The illustration below shows a real-time view from Citrix Director of the above test after a 60-minute login period, with the system under full test workload execution:

Figure 45. Total number of user sessions and Average Logon Duration under full test load

The illustrations below depict a real-time view of performance statistics captured from the NetApp FAS2552 Hybrid Storage Array during HSD testing. Figure 46 illustrates the average CPU utilisation across the dual controllers during the login storm, steady state, and logoff. Figure 47 illustrates the network throughput and IOPS load generated during the same period.

Figure 46. Storage Controller CPU utilisation during HSD testing

Figure 47. Storage Controller Network Throughput & IOPS during HSD testing

HVD Test Results

The illustration below shows a report from the Stratusphere UX diagnostic tool with the UX Score rating. The output is from the HVD hosts under full load (~700 HVD virtual desktop sessions). Note that all sessions are in the "Best" quadrant.

Figure 48. Stratusphere UX User Experience Rating

The illustration below shows a real-time view from Citrix Director of the above test after a 60-minute login period, with the system under full test workload execution:

Figure 49. Total number of user sessions and Average Logon Duration under full test load

The illustrations below depict a real-time view of performance statistics captured from the NetApp FAS2552 Hybrid Storage Array during HVD testing.

Figure 50 illustrates the average CPU utilisation across the dual controllers during the login storm, steady state, and logoff. Figure 51 illustrates the network throughput and IOPS load generated during the same period.

Figure 50. Storage Controller CPU utilisation during HVD testing

Figure 51. Storage Controller Network Throughput & IOPS during HVD testing

Logon Storm

The graph below describes a Windows 7 HVD test under full load. Note the following from the output:

- The graph clearly shows the slow ramp at the beginning of the logon storm (the green line representing a "Good" user experience).
- At the peak of the logon storm, a small number of users experience a "Fair" user experience due to the aggressive logon storm.
- After the logon storm, all users return to a "Good" experience.

Figure 52. Stratusphere UX Logon Performance

Office 2013

Microsoft Office 2013 was tested and benchmarked against Office 2010 using the following workloads:

- Microsoft Server 2012 R2 HSD workload
- Microsoft Windows 8.1 HVD workload

| Workload | User Density Impact |
| --- | --- |
| Microsoft Server 2012 R2 HSD | Density decreased by approximately 10% |
| Microsoft Windows 8.1 HVD | Density decreased by approximately 10% |

Table 86: Office 2013
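Applied to the 700-seat pod validated throughout this document, the ~10% density reduction translates directly into an indicative seat count; a minimal sketch (the helper name is illustrative, and the result is a guideline only, not a tested figure):

```python
def adjusted_density(seats: int, reduction_pct: float) -> int:
    """Indicative seat count after applying a measured density reduction.

    Rounds down, since a partial seat cannot be provisioned.
    """
    return int(seats * (1 - reduction_pct / 100))

# ~10% reduction observed for Office 2013 on both workloads (Table 86)
print(adjusted_density(700, 10))   # 630 seats per pod, indicative only
```

Any such adjustment should be confirmed with a customer-specific Login VSI run rather than taken from this arithmetic alone.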


More information

Remote access to enterprise PCs

Remote access to enterprise PCs Remote access to enterprise PCs About FlexCast Services Design Guides Citrix FlexCast Services Design Guides provide an overview of a validated architecture based on many common scenarios. Each design

More information

Microsoft and Citrix: Joint Virtual Desktop Infrastructure (VDI) Offering

Microsoft and Citrix: Joint Virtual Desktop Infrastructure (VDI) Offering Microsoft and Citrix: Joint Virtual Desktop Infrastructure (VDI) Offering Architectural Guidance July 2009 The information contained in this document represents the current view of Microsoft Corporation

More information

Component Details Notes Tested. The virtualization host is a windows 2008 R2 Hyper-V server. Yes

Component Details Notes Tested. The virtualization host is a windows 2008 R2 Hyper-V server. Yes We will be reviewing Microsoft s Remote Desktop Services (RDS), which has undergone significant reworking since it was released as Windows 2008 Terminal Services. In the original release of Microsoft Windows

More information

Windows Server on WAAS: Reduce Branch-Office Cost and Complexity with WAN Optimization and Secure, Reliable Local IT Services

Windows Server on WAAS: Reduce Branch-Office Cost and Complexity with WAN Optimization and Secure, Reliable Local IT Services Windows Server on WAAS: Reduce Branch-Office Cost and Complexity with WAN Optimization and Secure, Reliable Local IT Services What You Will Learn Windows Server on WAAS reduces the cost and complexity

More information

Lab Validations: Optimizing Storage for XenDesktop with XenServer IntelliCache Reducing IO to Reduce Storage Costs

Lab Validations: Optimizing Storage for XenDesktop with XenServer IntelliCache Reducing IO to Reduce Storage Costs Lab Validations: Optimizing Storage for XenDesktop with XenServer IntelliCache Reducing IO to Reduce Storage Costs www.citrix.com Table of Contents 1. Introduction... 2 2. Executive Summary... 3 3. Reducing

More information

Bosch Video Management System High Availability with Hyper-V

Bosch Video Management System High Availability with Hyper-V Bosch Video Management System High Availability with Hyper-V en Technical Service Note Bosch Video Management System Table of contents en 3 Table of contents 1 Introduction 4 1.1 General Requirements

More information

Introduction to VMware EVO: RAIL. White Paper

Introduction to VMware EVO: RAIL. White Paper Introduction to VMware EVO: RAIL White Paper Table of Contents Introducing VMware EVO: RAIL.... 3 Hardware.................................................................... 4 Appliance...............................................................

More information

CNS-207 Implementing Citrix NetScaler 10.5 for App and Desktop Solutions

CNS-207 Implementing Citrix NetScaler 10.5 for App and Desktop Solutions CNS-207 Implementing Citrix NetScaler 10.5 for App and Desktop Solutions The objective of Implementing Citrix NetScaler 10.5 for App and Desktop Solutions is to provide the foundational concepts and skills

More information

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Executive Summary Large enterprise Hyper-V deployments with a large number

More information

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer The Future of Computing Cisco Unified Computing System Markus Kunstmann Channels Systems Engineer 2009 Cisco Systems, Inc. All rights reserved. Data Centers Are under Increasing Pressure Collaboration

More information

CMB-207-1I Citrix XenApp and XenDesktop Fast Track

CMB-207-1I Citrix XenApp and XenDesktop Fast Track 1800 ULEARN (853 276) www.ddls.com.au CMB-207-1I Citrix XenApp and XenDesktop Fast Track Length 5 days Price $5995.00 (inc GST) This fast-paced course covers select content from training courses CXA-206

More information

Citrix Training. Course: Citrix Training. Duration: 40 hours. Mode of Training: Classroom (Instructor-Led)

Citrix Training. Course: Citrix Training. Duration: 40 hours. Mode of Training: Classroom (Instructor-Led) Citrix Training Course: Citrix Training Duration: 40 hours Mode of Training: Classroom (Instructor-Led) Virtualization has redefined the way IT resources are consumed and services are delivered. It offers

More information

Deployment Guide: Unidesk and Hyper- V

Deployment Guide: Unidesk and Hyper- V TECHNICAL WHITE PAPER Deployment Guide: Unidesk and Hyper- V This document provides a high level overview of Unidesk 3.x and Remote Desktop Services. It covers how Unidesk works, an architectural overview

More information

Stratusphere Solutions

Stratusphere Solutions Stratusphere Solutions Deployment Best Practices Guide Introduction This guide has been authored by experts at Liquidware Labs in order to provide a baseline as well as recommendations for a best practices

More information

Deploying Citrix XenDesktop 5 with Citrix XenServer 5.6 SP2 on Hitachi Virtual Storage Platform

Deploying Citrix XenDesktop 5 with Citrix XenServer 5.6 SP2 on Hitachi Virtual Storage Platform 1 Deploying Citrix XenDesktop 5 with Citrix XenServer 5.6 SP2 on Hitachi Virtual Storage Platform Reference Architecture Guide By Roger Clark September 2011 Month Year Feedback Hitachi Data Systems welcomes

More information

Private cloud computing advances

Private cloud computing advances Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud

More information

Enterprise Cloud Services HOSTED PRIVATE CLOUD

Enterprise Cloud Services HOSTED PRIVATE CLOUD Enterprise Cloud Services HOSTED PRIVATE CLOUD Delivering Business Value From DataCenter & Cloud Technologies Redefine Your Business Introduction Driven by a team with over 100 years of combined experience

More information

Ignify ecommerce. Item Requirements Notes

Ignify ecommerce. Item Requirements Notes wwwignifycom Tel (888) IGNIFY5 sales@ignifycom Fax (408) 516-9006 Ignify ecommerce Server Configuration 1 Hardware Requirement (Minimum configuration) Item Requirements Notes Operating System Processor

More information

Cisco Solution for EMC VSPEX Server Virtualization

Cisco Solution for EMC VSPEX Server Virtualization Reference Architecture Cisco Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 Virtual Machines Enabled by Cisco Unified Computing System, Cisco Nexus Switches, Microsoft Hyper-V, EMC

More information

VMware Workspace Portal Reference Architecture

VMware Workspace Portal Reference Architecture VMware Workspace Portal 2.1 TECHNICAL WHITE PAPER Table of Contents Executive Summary.... 3 Overview.... 4 Hardware Components.... 5 VMware vsphere.... 5 VMware Workspace Portal 2.1.... 5 VMware Horizon

More information

Interact Intranet Version 7. Technical Requirements. August 2014. 2014 Interact

Interact Intranet Version 7. Technical Requirements. August 2014. 2014 Interact Interact Intranet Version 7 Technical Requirements August 2014 2014 Interact Definitions... 3 Licenses... 3 On-Premise... 3 Cloud... 3 Pulic Cloud... 3 Private Cloud... 3 Perpetual... 3 Self-Hosted...

More information

Virtual SAN Design and Deployment Guide

Virtual SAN Design and Deployment Guide Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore

More information

Hyperscale Use Cases for Scaling Out with Flash. David Olszewski

Hyperscale Use Cases for Scaling Out with Flash. David Olszewski Hyperscale Use Cases for Scaling Out with Flash David Olszewski Business challenges Performanc e Requireme nts Storage Budget Balance the IT requirements How can you get the best of both worlds? SLA Optimized

More information

CITRIX 1Y0-A14 EXAM QUESTIONS & ANSWERS

CITRIX 1Y0-A14 EXAM QUESTIONS & ANSWERS CITRIX 1Y0-A14 EXAM QUESTIONS & ANSWERS Number: 1Y0-A14 Passing Score: 800 Time Limit: 90 min File Version: 42.2 http://www.gratisexam.com/ CITRIX 1Y0-A14 EXAM QUESTIONS & ANSWERS Exam Name: Implementing

More information

Implementing Cisco Data Center Unified Computing (DCUCI)

Implementing Cisco Data Center Unified Computing (DCUCI) Certification CCNP Data Center Implementing Cisco Data Center Unified Computing (DCUCI) 5 days Implementing Cisco Data Center Unified Computing (DCUCI) is designed to serve the needs of engineers who implement

More information

What Is Microsoft Private Cloud Fast Track?

What Is Microsoft Private Cloud Fast Track? What Is Microsoft Private Cloud Fast Track? MICROSOFT PRIVATE CLOUD FAST TRACK is a reference architecture for building private clouds that combines Microsoft software, consolidated guidance, and validated

More information

Server and Storage Virtualization with IP Storage. David Dale, NetApp

Server and Storage Virtualization with IP Storage. David Dale, NetApp Server and Storage Virtualization with IP Storage David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this

More information

Windows Server 2012 授 權 說 明

Windows Server 2012 授 權 說 明 Windows Server 2012 授 權 說 明 PROCESSOR + CAL HA 功 能 相 同 的 記 憶 體 及 處 理 器 容 量 虛 擬 化 Windows Server 2008 R2 Datacenter Price: NTD173,720 (2 CPU) Packaging All features Unlimited virtual instances Per processor

More information

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server

More information

Dell Desktop Virtualization Solutions Stack with Teradici APEX 2800 server offload card

Dell Desktop Virtualization Solutions Stack with Teradici APEX 2800 server offload card Dell Desktop Virtualization Solutions Stack with Teradici APEX 2800 server offload card Performance Validation A joint Teradici / Dell white paper Contents 1. Executive overview...2 2. Introduction...3

More information

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V Installation Guide for Microsoft Hyper-V Egnyte Inc. 1890 N. Shoreline Blvd. Mountain View, CA 94043, USA Phone: 877-7EGNYTE (877-734-6983) www.egnyte.com 2013 by Egnyte Inc. All rights reserved. Revised

More information

EMC INFRASTRUCTURE FOR CITRIX XENDESKTOP 5.5

EMC INFRASTRUCTURE FOR CITRIX XENDESKTOP 5.5 Reference Architecture EMC INFRASTRUCTURE FOR CITRIX XENDESKTOP 5.5 EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6 Simplify management and decrease TCO Streamline Application

More information

Microsoft Private Cloud Fast Track

Microsoft Private Cloud Fast Track Microsoft Private Cloud Fast Track Microsoft Private Cloud Fast Track is a reference architecture designed to help build private clouds by combining Microsoft software with Nutanix technology to decrease

More information

Virtualizing your Datacenter

Virtualizing your Datacenter Virtualizing your Datacenter with Windows Server 2012 R2 & System Center 2012 R2 Part 2 Hands-On Lab Step-by-Step Guide For the VMs the following credentials: Username: Contoso\Administrator Password:

More information

Table of contents. Technical white paper

Table of contents. Technical white paper Technical white paper Provisioning Highly Available SQL Server Virtual Machines for the HP App Map for Database Consolidation for Microsoft SQL Server on ConvergedSystem 700x Table of contents Executive

More information

High Availability for Citrix XenDesktop and XenApp

High Availability for Citrix XenDesktop and XenApp Worldwide Consulting Solutions WHITE PAPER Citrix XenDesktop High Availability for Citrix XenDesktop and XenApp Planning Guide for High Availability www.citrix.com Overview... 3 Guidelines... 5 Hardware

More information