Citrix XenDesktop 5.6 on Microsoft Hyper-V 2008 R2 / SCVMM 2012: 5,000 Users. Contributing Technology Partners:




Table of Contents

EXECUTIVE SUMMARY
ABBREVIATIONS AND NAMING CONVENTIONS
KEY COMPONENTS
SOLUTIONS ARCHITECTURE
USERS AND LOCATIONS
    Enterprise Campus Datacenter (3,775 users)
    Large Branch Office (525 users)
    Small Branch Office (150 users)
    Remote Access Users (600 users)
NETWORK INFRASTRUCTURE
    Network Design
    Datacenter and Remote Office LAN Network Architecture Overview
    WAN Network Architecture Overview
STORAGE INFRASTRUCTURE
    Storage Planning
    Storage Deployment
        Infrastructure Storage
        VDI Storage
COMMON INFRASTRUCTURE
    Infrastructure Deployment Methodology
    Physical Common Infrastructure
    Virtualized Common Infrastructure
    Common Infrastructure Services
        Overview
        DNS
        DHCP
    Hyper-V and SCVMM Infrastructure
    XenDesktop Infrastructure
MODULAR VDI INFRASTRUCTURE
    Infrastructure Deployment Methodology
    Modular Block Overview
    Modular Block Sizing
    Modular Deployment Design
    Modular Block Infrastructure
        Hyper-V Virtualization for VDI
        SCVMM for VDI
            SCVMM Host
            SCVMM Cluster
            SCVMM Library File Server
            SCVMM SQL
        Provisioning Services (PVS) for VDI
            PVS Server Networking
            PVS Farm
            PVS File Servers
            PVS SQL
        User Profile Management
    Multi-Site Infrastructure
        Branch Offices
        Remote Access Users
TEST METHODOLOGY
TEST MILESTONES
TEST TOOLS
    Session Launching
    Performance Capturing
    In-Session Workload Simulation
RESULTS
PERFORMANCE RESULTS
    Performance Results - Boot Storm
        PVS for VDA Performance (Boot Storm)
        SCVMM for VDA Performance (Boot Storm)
    Performance Results - Test Run
        Hyper-V for VDA Performance (Test Run)
        PVS for VDA Performance (Test Run)
        SCVMM for VDA Performance (Test Run)
        SCVMM for VDA Library Server File Server Performance (Test Run)
        Multi-Site Performance (Test Run)
ADDITIONAL TESTING: PVS WITH PERSONAL VDISK (PVD)
    Objective
    Results
LESSONS LEARNED
CONCLUSIONS
APPENDIX
DOCUMENT REFERENCES
HARDWARE CONFIGURATION
    Active Directory Physical Domain Controller Configuration
    Hyper-V Cluster Pool Specifications
    Hyper-V Host for Virtual Desktops Configuration (Host #1)
    Hyper-V Host for Virtual Desktops Configuration
    PVS Servers Configuration for each Modular Block
    SCVMM Host for Virtual Desktops Configuration
HARDWARE SPECIFICATIONS
    Servers
    Storage Systems
    Network Switches
    Juniper Network Appliances
    AGEE Network Appliances
    SDX Network Appliances
    BR VPX Network Appliances
    Repeater Network Device
MULTI-SITE PERFORMANCE (TEST RUN)
    BR VPX - Performance / Utilization
    SDX VPX Instance for BR VPX

Executive Summary

This reference architecture documents the design, deployment, and validation of a Citrix Desktop Virtualization solution that leveraged best-of-breed hardware and software vendors in a real-world environment. The design included Microsoft Hyper-V and SCVMM 2012, HP blade servers, NetApp storage arrays, and Cisco networking. Five modular blocks (5,500 virtual desktops) served users divided into the following groups: 3,775 in the Datacenter, 675 in two Branch Offices, and 600 remote access users.

This Desktop Virtualization reference architecture was built with the following goals:
- Leverage a modular design that allows linear scalability and growth by adding modular blocks
- Design an environment that is highly available and resilient
- Architect a virtual desktop solution that can support users in different geographic locations, such as branch office workers and remote access users

The reference architecture was tested using the industry-standard Login VSI benchmark at the medium workload. The high-level findings from the deployment were:
- Citrix XenDesktop 5.6 delivered a resilient solution with Hyper-V and SCVMM 2012 at 5,000 hosted VDI desktops.
- SCVMM has the capability to support 2,000 desktops per host. When deploying a clustered SCVMM 2012 server, we found that supporting 1,000 VMs per SCVMM instance was the optimal configuration in our environment for minimal impact on deployment time.
- Citrix Personal vDisk (PvD) in a Hyper-V clustered environment provided the benefits of desktop personalization while avoiding the increased server utilization of dedicated desktops.
- Hyper-V failover clustering proved to be a robust infrastructure that remained highly available when a node failed during testing.
- The Citrix modular block architecture was validated to provide linear scalability of the VM architecture, allowing an environment to scale to large numbers of desktops by duplicating simple modular blocks.
- HP blade servers easily supported a large-scale deployment of virtual desktops, offering a balance of power, efficiency, and performance for the end customer.

This reference architecture provides design considerations and guidelines for a VDI deployment.

Abbreviations and Naming Conventions

AG      Access Gateway
AGEE    Access Gateway Enterprise Edition
BR      Branch Repeater
CIDR    Classless Inter-Domain Routing
CSV     Cluster Shared Volume
HDX     High Definition Experience
GPO     Group Policy Object
ISL     Inter-Switch Link
KMS     Key Management Server
NS      NetScaler
NTP     Network Time Protocol
PvD     Personal vDisk
PVS     Citrix Provisioning Services
SCVMM   Microsoft System Center Virtual Machine Manager
UPM     Citrix User Profile Manager
VDA     Virtual Desktop Agent
VDI     Virtual Desktop Infrastructure
vDisk   Virtual Disk (Provisioning Services streamed VM image)
VPX     Virtual Appliance
XDC     XenDesktop Controller

Key Components

Software
- VDI Desktop Broker: Citrix XenDesktop 5.6
- VDI Desktop Provisioning: Citrix PVS 6.1 with Hotfix 3
- Endpoint Client: Citrix Receiver for Windows 3.1
- User Profile Management: Citrix User Profile Manager 4.1
- VDI Personalization: Citrix Personal vDisk 5.6.5
- Workload Generator: Login VSI 3.6
- Virtual Desktop OS: Microsoft Windows 7 SP1 x86
- Hypervisor Management: Microsoft SCVMM 2012 Update Rollup 2
- Database Server: Microsoft SQL Server 2008 R2
- Server Operating System: Microsoft Windows Server 2008 R2 SP1
- VDI Hypervisor: Microsoft Windows Server 2008 R2 SP1 with Hyper-V Role

Hardware

Blade Servers
- HP BL460c G6: Infrastructure
- HP BL460c G7: Virtual Desktop Hypervisors

Network Appliances
- WAN Optimization: Citrix Branch Repeater 8800, SDX, and VPX
- Remote Access: Citrix NetScaler / Access Gateway Enterprise

Network Devices
- Backbone Switch: Cisco Nexus 7010 (8 x 32-port 10GbE)
- Workgroup Switches: HP H3C 5820, 5810
- Firewall/Routers: Juniper SRX240 Router

Storage Systems (NetApp FAS Series)
- FAS3240: Infrastructure / User Profile Storage
- FAS3270: Virtual Desktop Storage
- FAS3270: PVS and SCVMM Infrastructure

Additional detail can be found in the appendix under Hardware Specifications.

Solutions Architecture

Designing a Solutions Architecture to achieve the required scale involved a significant amount of planning for the systems and components in the environment. The first step in creating the conceptual architecture was to determine the number of users in the environment and the required services. Each of the sections below contains a description of the major elements of the environment, as well as sizing and design considerations for those elements.

Figure 1: Full Environment with VDI and Remote Access Services

We also wanted to build a common block architecture that could be used to scale the solution more easily. The modular block is a concept that is used throughout this document. A modular block is defined as a set of virtual desktops along with all of the components required to run that particular set of desktops. The high-level architecture shown in Figure 2 illustrates the components involved.

Common infrastructure is the infrastructure put into place to run the entire project, or even the datacenter. It consists primarily of services that are probably already in place in an environment, including AD, DNS, NTP, DHCP, and other services. In our environment, this included a cluster of hypervisors to provide those services and an instance of SCVMM to manage that cluster.

The shared XenDesktop infrastructure components are those that are required for virtual desktops to execute in the environment, but that can be scaled and used for any number of virtual desktops deployed. For example, a Citrix License Server must exist in an environment, but it can easily serve a deployment of any size. In this case, the shared infrastructure also included the Provisioning Services servers for the desktops.

The modular blocks in this environment consist of the hypervisor hosts for the virtual desktops along with the System Center Virtual Machine Manager (SCVMM) management servers for those desktops. Each block of 1,100 desktops can simply be added to the common and shared infrastructure as needed (a sizing sketch follows the component list below). The multi-site infrastructure also scales across multiple blocks, and will scale up as additional users are added and as the appliances allow.

This Reference Architecture includes the following components:
- Users and Locations
- Network Infrastructure
- Storage Infrastructure
- Common Infrastructure
- Modular VDI Infrastructure
- Multi-site Infrastructure
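To make the block-based scaling rule concrete, the following minimal Python sketch computes how many 1,100-desktop modular blocks a target desktop count would require. The sample target counts are illustrative only and are not taken from this validation.

```python
import math

BLOCK_SIZE = 1100  # desktops per modular block in this design

def blocks_required(target_desktops: int, block_size: int = BLOCK_SIZE) -> int:
    """Number of modular blocks needed to host the target desktop count."""
    return math.ceil(target_desktops / block_size)

if __name__ == "__main__":
    for target in (1100, 3300, 5500, 8000):
        blocks = blocks_required(target)
        print(f"{target:>5} desktops -> {blocks} block(s), {blocks * BLOCK_SIZE} desktop capacity")
```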

Figure 2: High-Level Infrastructure Architecture

Users and Locations

Enterprise Campus Datacenter (3,775 users)
The datacenter location required virtual desktop services with centralized system and desktop image management. The components selected to deliver these services were Citrix XenDesktop virtual desktops streamed by Provisioning Services 6.1, virtualized on Hyper-V, managed by Microsoft SCVMM 2012 with Update Rollup 2 (UR2), and with shared storage hosted on NetApp FAS3200-series storage. NetScaler AGEE and Branch Repeater SDX appliances were selected to provide remote access and acceleration services for all remote branch and telecommuting users.

Large Branch Office (525 users)
Users in the large branch office location needed secure and accelerated remote access to the datacenter-based virtual desktops. While hosting virtual desktops at a branch office of this size is a possibility, one of the requirements was ease of management and redundancy of infrastructure. The easiest way to meet that requirement was to maintain all virtual desktops in the datacenter. The components selected to provide connection acceleration services for these remote desktops use Citrix Branch Repeater technology: Branch Repeater 8800-series appliances at the branch location and Branch Repeater SDX appliances at the Datacenter.

Small Branch Office (150 users)
Users in the small branch office location also needed secure and accelerated remote access to datacenter-based virtual desktops. We selected Citrix Branch Repeater components, providing a Branch Repeater VPX (virtual appliance) at the branch location to connect to the existing Branch Repeater SDX appliances in the Datacenter.

Remote Access Users (600 users)
A Citrix NetScaler with Access Gateway appliance was chosen to provide secure remote-access services because of its simple integration with a Citrix XenDesktop VDI infrastructure. Remote-access users connect to a NetScaler/Access Gateway appliance using the Citrix Receiver application, just like all other users connecting to the infrastructure.

Network Infrastructure

The next consideration for this environment was the network architecture and design. The network architecture included, but was not limited to, the definition of IP address requirements, VLAN configurations, required IP services, and server network configurations. Considerations regarding IP allocation, IP block assignment, and WAN routing were extremely important in ensuring that the design maintained its modularity while still being able to scale appropriately.

Network Design
When planning the network environment, determine how many nodes are needed at the beginning of the project and how many might be added throughout its lifetime. Using this information, the IP address blocks can be planned.

It is desirable to employ a modular approach to network VLAN design. Traffic separation is efficient for VDI IP considerations and alleviates bandwidth concerns. If possible, create a separate VLAN for each major type of traffic: for example, a Storage VLAN for storage traffic (iSCSI, NFS, or CIFS), DMZs for certain external incoming traffic types, a server management VLAN (which may include Lights-Out capabilities and log-gathering mechanisms), and Guest VLANs for virtual desktops. This design approach keeps Layer-2 broadcasts to a minimum while not over-utilizing the CPU and memory resources of network devices.

Design Considerations:
- To provision 1,000 desktops and accommodate growth in chunks of 200 desktops, aggregating multiple /24 networks (254 hosts each) is a more flexible approach than using larger /23 IP blocks (512 hosts). To grow the VDI IP network in blocks of 400-500 users at a time, consider a larger /23 network. (A subnet-sizing sketch follows this list.)
- Allocate blocks of IP addresses according to what can be served logically from the virtual desktop administrator's perspective (gradual growth and scalability), along with what can be provisioned within the company's IT network-governance policies.
- To account for overhead and future headcount growth, as well as IP needs for services, allocate additional IP addresses as you grow. With growth and a buffer considered, blocks can be aggregated in chunks of /24s, /23s, and /22s (1,024 hosts) as needed. In addition, CIDR supernetting of the IP blocks can be used as required.
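To illustrate the trade-off between aggregating /24 blocks and allocating larger prefixes, here is a small Python sketch using only the standard ipaddress module. The example address ranges are hypothetical; the sketch simply counts usable hosts per prefix length and shows how contiguous /24 guest VLANs can be supernetted into a /22.

```python
import ipaddress

def usable_hosts(prefix_len: int) -> int:
    """Usable host addresses in an IPv4 subnet of the given prefix length."""
    return max((2 ** (32 - prefix_len)) - 2, 0)

# Hosts available per candidate block size for desktop VLANs.
for plen in (24, 23, 22, 21):
    print(f"/{plen}: {usable_hosts(plen)} usable hosts")

# Hypothetical addressing: four contiguous /24 guest VLANs collapsed
# into a single /22 supernet for route summarization.
vlans = [ipaddress.ip_network(f"10.10.{i}.0/24") for i in range(4)]
supernet = ipaddress.collapse_addresses(vlans)
print("Supernet:", [str(n) for n in supernet])
```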

Datacenter and Remote Office LAN Network Architecture Overview
A main core switch, a Cisco Nexus 7010 with eight cards of 32 10GbE ports each, provided the Datacenter multilayer switching and routing services. This switch also provided all routing, switching, and security services for the rest of the environment. H3C/HP 5820 switches served the other 10GbE service ports, and 1GbE ports were served by H3C/HP 5810 switches with 10GbE uplinks.

For the Branch Office sites, workgroup switching and routing were required. The 1GbE ports required were provided by H3C/HP 5810 switches, which incorporated 10GbE uplinks to the core infrastructure.

WAN Network Architecture Overview
Planning for the multi-site WAN test environment included planning for VLAN and other network-based services; supporting both virtualized and physical systems; and providing WAN, DMZ, and firewall connectivity. Planning also included placing network service appliances, such as Branch Repeater and NetScaler systems, in the correct logical network locations.

WAN routing at the datacenter was provided by the single Cisco core switch mentioned above. The environment required appliance-to-appliance ICA optimization for access to the XenDesktop virtual desktops. To meet this requirement, we deployed Branch Repeater SDX appliances at the Datacenter and Branch Repeater appliances (8800-series and virtual appliances) at each of the Branch Office locations. Branch-site Layer-3 routing and edge services were provided by Juniper SRX240 full-service router/firewall devices.

A Branch Repeater 8800-series appliance was selected for the large branch office (525 users), and a Branch Repeater virtual appliance (VPX) was selected for the 150 users at Branch Office 2. In the Datacenter, a Branch Repeater SDX appliance (Model 1355) was used to provide a single connection point for remote Branch Repeater connections.

WAN simulation and load generation, including WAN-byte traversal visibility, was provided by Apposite Linktropy 1GbE WAN simulator appliances inserted between the remote sites and the Datacenter site. No reduction in bandwidth was introduced in the test environment. For remote access users, ICA Proxy and VPN services were required; to meet this requirement, a NetScaler appliance with an Access Gateway Enterprise Edition license was deployed in the datacenter. LACP (802.3ad) was used for all ISLs between all devices.

Network Design Considerations:
- Each network appliance is limited by the number of connections it supports; most network appliances list the maximum number of TCP connections they support. In a Citrix VDI environment, the ICA connection capacity of the Remote Access and WAN Acceleration devices needs to be considered. Match this capacity to the site user requirements, while including a buffer for future site growth.
- To optimize storage communications in the environment, we recommend using a dedicated VLAN for server-to-storage connections.

- The virtual desktop VLANs were created to match our Provisioning Services server farms: 2,200 IPs in /21 subnets. IP addresses were provided for both Legacy and Synthetic NICs on the 2,200 virtual desktop VMs per farm; as a result, two /21 VLANs were created across two modular VDI blocks.
- A storage VLAN was created for our environment and was sized large enough to provide IP addresses for all of our VDI hypervisor blades. It was configured so that each blade used two storage NICs bound via MPIO.
- Consider separating heavy network traffic into a dedicated VLAN so that it does not interfere with other traffic. In our environment, the virtual desktop PXE boot traffic was separated based on the PVS servers servicing each modular VDI block.
- Uplinks between all network switches at Layer 2 should employ 802.3ad LACP Ethernet aggregation for increased throughput and resiliency.
- To determine the overall bandwidth needed, it is necessary to know the bandwidth required per session; the number of remote-site sessions a link can carry is directly proportional to the available bandwidth (see the sketch below).
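As a rough illustration of that last point, the Python sketch below estimates how many concurrent sessions a branch WAN link could carry for a given per-session bandwidth. The link speeds, the 100 kbps per-session figure, and the 20% growth buffer are hypothetical planning inputs, not measurements from this test.

```python
def max_sessions(link_kbps: float, per_session_kbps: float, buffer_pct: float = 20.0) -> int:
    """Sessions a WAN link can carry, holding back a headroom buffer for growth."""
    usable_kbps = link_kbps * (1.0 - buffer_pct / 100.0)
    return int(usable_kbps // per_session_kbps)

if __name__ == "__main__":
    per_session = 100.0  # assumed average ICA bandwidth per session, in kbps
    for name, link_kbps in (("T3 45 Mbps", 45_000), ("100 Mbps", 100_000), ("1 Gbps", 1_000_000)):
        print(f"{name:>11}: ~{max_sessions(link_kbps, per_session)} sessions")
```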

Figure 3: Multi-Site Enterprise with 5K Users Network Concept

Storage Infrastructure

Shared storage is one of the most critical components in a large-scale VDI environment. Scale, end-user satisfaction, and overall performance depend greatly on the storage systems deployed and their capabilities. Hardware and software features employed in the design of the storage layer architecture also impact these areas. As shown below, the storage layer touches and shares I/O with every other common block in the VDI server architecture.

Storage Planning

Figure 4: Storage Infrastructure

Storage planning consists of two major activities: capacity planning and performance planning. If you have chosen a network-attached storage (NAS) implementation, you must also account for the additional network impact.

Storage capacity planning is the projection of disk space assignment and allocation on the storage appliances, as well as the projection of the required space based on known requirements. Single Server Scalability tests with application workloads specific to your company's operation can help you start your sizing diligence. This is a very common first step in any VDI storage implementation, as every organization's storage needs are unique.

Storage performance planning takes into account physical disk assignment and allocation while balancing the required disk IOPS for acceptable end-user response (a rough sizing sketch follows below). Storage processor CPU utilization within the storage device while running at full load is also an important data point. Every environment has its own requirements and bill of materials; unfortunately, there is no one-size-fits-all storage calculation.

When planning a network-attached storage (NAS) implementation, an added consideration is the Ethernet network load and utilization associated with using NAS in large-scale virtual deployments. This often dictates employing 10GbE networking for the storage layer of the network. For the storage device, this may mean bonding multiple 10GbE network interfaces at the device level in order to aggregate to a higher bandwidth capability for each storage subsystem.
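As a back-of-the-envelope illustration of performance planning, the Python sketch below estimates aggregate IOPS and the spindle count needed to serve it. The per-desktop IOPS, read/write split, per-disk IOPS, and write-penalty figures are hypothetical planning inputs, not values measured in this architecture.

```python
import math

def spindles_needed(desktops: int,
                    iops_per_desktop: float,
                    write_fraction: float,
                    disk_iops: float,
                    raid_write_penalty: int) -> int:
    """Rough spindle count: front-end IOPS adjusted for the RAID write penalty."""
    front_end = desktops * iops_per_desktop
    backend = front_end * (1 - write_fraction) + front_end * write_fraction * raid_write_penalty
    return math.ceil(backend / disk_iops)

if __name__ == "__main__":
    # Hypothetical planning inputs for a 2,200-desktop modular storage system.
    n = spindles_needed(desktops=2200,
                        iops_per_desktop=10,    # steady-state IOPS per desktop (assumed)
                        write_fraction=0.8,     # VDI workloads are write-heavy (assumed)
                        disk_iops=175,          # 10k SAS spindle (assumed)
                        raid_write_penalty=2)   # RAID-DP-style penalty (assumed)
    print(f"Approximate spindles required: {n}")
```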

Sizing Considerations:
- When planning your NetApp RAID groups, the size and continuity of the RAID group member counts change as you add more shelves of disk. Two suggestions to consider: never exceed 24 disk members per RAID group (per guidance from NetApp support), and whenever possible keep all of your RAID group sizes even, for continuous data stripe length in your aggregates.
- Consider the disk size, media type, spindle speed, and disk cache of the disks employed in the array when calculating the projected storage implementation.
- Include a free-space percentage for the provisioned storage units in the capacity consumption. Free space is needed to allow sufficient seek time. In addition, some features of your storage subsystem, such as snapshots and deduplication, may require planning for additional space.
- Consider the memory and CPU capacity the storage processors need in order to serve the I/O load of the disk shelves.

Design Considerations:
- Storage performance is greatly affected by the number of aggregate spindles in the storage array.
- With NetApp storage, a critical detail is the sizing of the RAID groups in proportion to the number of disks available for use. For NetApp, when employing single-parity RAID 4, there is a limitation of 16 disks per RAID group. When employing 64-bit RAID-DP (double-parity RAID 6), the design is limited to 24 disks per RAID group before performance limitations are imposed. All RAID groups should be as close to the same size as possible. In addition, when calculating storage availability, remember that there is a RAID penalty: usable capacity goes down when spindles are used for RAID parity (see the sketch after this list).
- When using iSCSI, the Ethernet overhead of the protocol must be taken into consideration. The load of Ethernet encapsulation and de-encapsulation can be extremely high when aggregated at this scale. We recommend the use of a dedicated TCP offload engine (TOE) card to maximize performance.
- Implement the best practices of the chosen storage vendor with regard to LUN types, point-in-time copy practices, drivers, firmware, operating system releases, and other details that affect the host-to-storage relationship.
- Monitor the switch connected directly to the NAS devices, because that switch is a main point of interest when troubleshooting storage connectivity issues.
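To make the RAID penalty concrete, here is a small Python sketch that estimates usable capacity for an aggregate built from fixed-size double-parity RAID groups (two parity disks per group). The disk count, spare count, and right-sized disk capacity are hypothetical inputs, and the result ignores filesystem overhead.

```python
def usable_capacity_tb(total_disks: int,
                       raid_group_size: int,
                       parity_per_group: int,
                       spares: int,
                       disk_right_sized_tb: float) -> float:
    """Approximate usable capacity after spares and RAID parity are removed."""
    data_disks = total_disks - spares
    groups, remainder = divmod(data_disks, raid_group_size)
    parity_disks = groups * parity_per_group + (parity_per_group if remainder else 0)
    return (data_disks - parity_disks) * disk_right_sized_tb

if __name__ == "__main__":
    # Hypothetical example: 96 disks, RAID groups of 24 (2 parity each), 2 spares.
    cap = usable_capacity_tb(total_disks=96, raid_group_size=24,
                             parity_per_group=2, spares=2,
                             disk_right_sized_tb=0.55)  # ~600GB SAS after right-sizing (assumed)
    print(f"Approximate usable capacity: {cap:.1f} TB")
```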

Storage Deployment
It was found that a single NetApp FAS3270 system could reliably host storage for at least 2,200 XenDesktop 5.6 virtual desktop VMs virtualized on Hyper-V using the iSCSI storage protocol.

Infrastructure Storage
NetApp FAS3240 systems were used for the common infrastructure storage in this environment. These storage systems hosted iSCSI LUNs for the common infrastructure Hyper-V cluster, SQL database storage, and user profile storage. They also hosted NFS volumes for test client storage.

Sizing Considerations:
- The LUN for the infrastructure server VMs should be large enough to host several large fixed-VHD VMs. Note that the need for fixed VHDs is brought about by the combination of Hyper-V 2008 R2 SP1 and NetApp storage: all VHDs should be fixed, and dynamic VHDs should not be used with iSCSI storage on a Windows Server 2008 R2 SP1 server.
- The Datacenter infrastructure VM LUN was assigned 1TB to support 20 virtualized infrastructure server VMs at 40GB each with 35% free space, based on Microsoft and NetApp best practices.
- The SCVMM SQL Server LUN was assigned 203GB to host five SCVMM 2012 databases (each database was allocated 30GB) with 35% free space.
- The PVS SQL Server LUN was assigned 16GB to store three PVS farm databases, also with 35% free space.
- User profile LUNs were assigned 100GB each and were shared via iSCSI; each LUN was assigned to a specific modular block. This size is based on the small user profiles (less than 50MB per user) present in our environment. Users accessed these LUNs as Windows shares off an infrastructure Windows file server, not directly on the storage via iSCSI.

Design Considerations:
- Windows Server 2008 R2 SP1-based Hyper-V VMs have .BIN files equal in size to the amount of RAM assigned to the VM. The .BIN files produce minimal IOPS activity and should be assigned to thin-provisioned and/or lower-cost SAN storage if available.

VDI Storage
NetApp FAS3270 systems were used for the VDI storage. Sizing information for this environment:

VDA Storage
VDA LUNs were created for each 8-node Hyper-V cluster. Each cluster hosted 505 virtual desktop VMs and was allocated 3.23TB, which included 28% free space.

VDA LUN space calculation:
- VDA RAM: 1GB = Hyper-V .BIN file size (when possible, place this on thin-provisioned storage)
- VDA write cache: 4GB (this must be large enough to contain the page file as well as difference data)
- Total VM space: 5GB each
- Total space required: 505 VMs x 5GB = 2,525GB
- LUN size*: 2,525GB + 28% = 3,234GB

*For consistency among all of the storage we were using, the VDA LUN size was chosen to be 3,234GB, which contains approximately 28% free space. In addition, 1GB of RAM per VDA was selected based on the amount utilized as well as for maximum scalability; VDA RAM size needs to be determined for your specific user environment.

PVS File Server Storage
Each of the three PVS file servers was assigned a 550GB iSCSI LUN that was shared as a Windows file share and mounted by the PVS server farms from a UNC location. Each PVS iSCSI LUN was assigned to a PVS file server dedicated to a 2,200-virtual-desktop PVS farm. Each PVS LUN was allocated 550GB.

PVS LUN size calculation:
- Host virtual desktop vDisks: 2 x 40GB = 80GB
- Backup location for virtual desktop write cache during failure of shared storage (up to 150MB per virtual desktop): 2,200 x 150MB / 1,024 = 323GB
- Total size: 80GB + 323GB, with 35% free space = 544GB (rounded up to 550GB)

SCVMM Library Storage
The SCVMM Library iSCSI LUN was assigned 675GB to provide centralized storage for the virtual desktop template and to contain backups of critical environment VMs. The LUN size was chosen to provide enough storage for at least ten 40GB VMs with free space.

Cluster Quorum LUN
A 2GB cluster quorum LUN was assigned to each failover cluster to serve as the quorum witness storage.
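The LUN-size arithmetic above can be captured in a small helper. The Python sketch below reproduces the VDA and PVS LUN calculations using the values from the worked examples in this section.

```python
import math

def lun_size_gb(payload_gb: float, free_space_pct: float) -> float:
    """Grow a payload by the stated free-space percentage, as in the calculations above."""
    return payload_gb * (1 + free_space_pct / 100.0)

# VDA LUN: 505 VMs per cluster, 5GB per VM (1GB .BIN + 4GB write cache), 28% free space.
vda_payload_gb = 505 * 5
print(f"VDA LUN: {lun_size_gb(vda_payload_gb, 28):.0f} GB")  # ~3232 GB (3,234 GB chosen)

# PVS LUN: two 40GB vDisks plus write-cache backup of 150MB for 2,200 desktops, 35% free space.
pvs_payload_gb = (2 * 40) + (2200 * 150 / 1024)
pvs_lun_gb = lun_size_gb(pvs_payload_gb, 35)
rounded_gb = math.ceil(pvs_lun_gb / 50) * 50
print(f"PVS LUN: {pvs_lun_gb:.0f} GB, rounded up to {rounded_gb} GB")  # ~543 GB -> 550 GB
```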

Design Considerations:
- Before creating the storage architecture for a large-scale environment, collect storage utilization data, such as disk space and IOPS utilization, for each of the environment component types mentioned above.
- For monitoring performance in VDI environments, knowing all desired storage system specifications is critical. It is particularly important to monitor utilization of the processor, network interfaces, disk space, and disk performance of the storage device and its components.
- It is recommended to use network interface cards (NICs) that provide maximum performance and caching capabilities. In this test environment, we deployed Intel 10GbE cards (NetApp X1117A-R6), which resulted in higher network throughput with lower system CPU load due to the larger buffer memory.
- The NetApp LUN type and format allocation unit are important for best performance of the cluster's shared storage. In this test environment, the Windows_2008 LUN type and a 64KB allocation-unit size were found to be the best model.

Common Infrastructure

The next step in creating the Solutions Architecture was the planning and preparation of the common infrastructure.

Figure 5: Common Infrastructure

The common infrastructure was made up of the systems that provided core services to the entire environment. These systems comprised a mixture of physical and virtualized servers.

Infrastructure Deployment Methodology
The infrastructure operating system, features, roles, software, IP information, and other configuration settings were centrally managed and deployed with HP Insight Control, which allows for streamlined and consistent deployment to a large number of servers. The common infrastructure functions were hosted on HP BL460c G6 blade servers managed by Insight Control; the entire infrastructure consisted of HP blades. We started by first creating VLANs, then HP Virtual Connect server profiles, and finally deploying the operating systems, roles and features, software, and settings.

Physical Common Infrastructure
The resiliency and performance requirements of many infrastructure services mandated that those servers be physical rather than virtual. Additionally, two physical Domain Controllers were required to maintain the functionality of the failover cluster that supported the virtual Domain Controllers, per Microsoft best practices. All physical infrastructure servers ran on HP BL460c G6 servers.

Microsoft offers best practices for running SCVMM on physical servers as well as for virtualizing it, including the option of running VMM as a highly available VM instead of relying on physical clusters. Because a number of existing documents and tests already explore virtualized SCVMM servers, we decided to use the physical server option. Both designs are valid and supported by Microsoft.

The following software/services ran on physical machines for the common infrastructure:
- Active Directory / DNS / DHCP / NTP
- Microsoft Windows Server 2008 R2 SP1 Enterprise Edition with the Hyper-V role (this is the same code base as Microsoft Hyper-V Server 2008 R2 SP1, but includes the GUI management capabilities; we fully expect our test results to match results from that operating system choice as well)
- Microsoft SCVMM 2012 with Update Rollup 2
- Microsoft SQL Server 2008 R2

Virtualized Common Infrastructure
In order to make the best use of system resources and to take advantage of virtualization, some common infrastructure components were virtualized. Virtualized Active Directory systems added resiliency to the existing physical services and spread the Active Directory load during the boot storm and logon processes. All virtual machines in the infrastructure ran on Windows Server 2008 R2 SP1 with the Hyper-V role, hosted on HP BL460c G6 server blades.

The following software/services were virtualized to support the common infrastructure:
- Active Directory / DNS
- XenDesktop Controllers (Desktop Delivery Controllers, or DDCs)
- Citrix License Server

Design Considerations:
- Ensure that virtualized systems participate in the NTP process by disabling the Hyper-V host time synchronization in the virtual guest services. This was applied on all virtualized Active Directory Domain Controllers and XenDesktop Controllers; using host time sync would duplicate effort, since the hosts were already participating in the NTP time sync process.

The following sections outline the architecture of the common infrastructure environment.

Common Infrastructure Services

Overview
- Active Directory: two Active Directory DCs in the Datacenter (two physical servers providing DC, DHCP, DNS, and NTP services)
- Hyper-V 2008 R2: one Hyper-V cluster with 6 nodes
- SCVMM 2012
- SQL Server 2008 R2
- XenDesktop: two XenDesktop Controllers
- Citrix Licensing: one Citrix License Server

DNS
DNS services are critical for both Active Directory services and XenDesktop communications. DNS services in both the Datacenter and the Branch Offices were used to fulfill name-resolution requests and support local Active Directory requests. Active Directory Integrated Zones were created for both forward- and reverse-lookup zones. A reverse-lookup zone was created for each VLAN/subnet; this was required to allow two-way communication between XenDesktop and the VDAs.

Design Considerations:
- For Windows and Microsoft Office activation, if a Key Management Server (KMS) is employed, DNS entries for the KMS service need to be added.

DHCP
One DHCP server resided in the Datacenter and one in each of the Branch Offices. These DHCP servers were used to provide IP addresses and the specific configuration options used by PVS to allow the virtual desktops to locate and boot from the PVS server. Additional scope options used:
- Option 66 (Boot Server Host Name): set to the IP address of the PVS TFTP server
- Option 67 (Bootfile Name): set to ardbp32.bin, the PVS boot file name

Design Considerations:
- It is possible to configure TFTP high availability for a PVS environment. The following guide provides more information: http://support.citrix.com/article/ctx134945
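As a quick way to validate the two scope options end to end, the following Python sketch issues a raw TFTP read request (RFC 1350) for the configured boot file against the boot server and reports whether the server starts answering. The server address is a placeholder, and this is only a connectivity sanity check under those assumptions, not part of the reference architecture itself.

```python
import socket
import struct

def tftp_boot_check(server_ip: str, bootfile: str = "ardbp32.bin", timeout: float = 3.0) -> bool:
    """Send a TFTP read request for the PVS boot file and wait for the first reply."""
    # RRQ packet: opcode 1, filename, NUL, transfer mode "octet", NUL
    rrq = struct.pack("!H", 1) + bootfile.encode() + b"\x00" + b"octet" + b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(rrq, (server_ip, 69))
        try:
            data, _ = sock.recvfrom(1024)
        except socket.timeout:
            print("No reply: check the DHCP option 66 address and the TFTP service")
            return False
    opcode = struct.unpack("!H", data[:2])[0]
    if opcode == 3:   # DATA: server started streaming the boot file
        print("TFTP server is serving", bootfile)
        return True
    if opcode == 5:   # ERROR: server reachable but file missing or refused
        print("TFTP error:", data[4:-1].decode(errors="replace"))
    return False

if __name__ == "__main__":
    tftp_boot_check("192.0.2.10")  # placeholder PVS TFTP server address (option 66 value)
```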