Citrix XenApp Hosted Shared Desktop on Microsoft Hyper-V Server 2012
High-Level Design
Citrix Validated Solutions
17th October 2013
Prepared by: Citrix Consulting

Revision History

Revision | Change Description | Updated By | Date
1.0 | Document created, updated and distributed to Citrix and Nimble Storage teams. | APAC Citrix Consulting | 17-Oct-2013

Table of Contents

1. Executive Summary
   1.1 Audience
   1.2 Purpose
   1.3 Reference Architecture
2. Architecture Overview
   2.1 Citrix Virtual Desktop Types
   2.2 The Pod Concept
   2.3 Justification and Validation
   2.4 High Level Solution Overview
   2.5 Assumptions
3. Logical Architecture
   3.1 Logical Component Overview
4. Physical Architecture
   4.1 Physical Component Overview
   4.2 Physical Component Design
       BOM Cisco UCS Compute Hardware
       BOM Cisco Nexus Network Hardware
       BOM Nimble Storage Array Hardware
   4.3 Hardware Support and Maintenance
       Cisco UCS and Nexus
       Nimble Storage
5. High-Level Design
   5.1 Network / Cisco Nexus
   5.2 Cisco UCS
   5.3 Nimble Storage
   5.4 Microsoft Hyper-V Server 2012
   5.5 SMB File Services
   5.6 Citrix Provisioning Services
   5.7 Citrix XenApp
   5.8 Citrix XenApp Worker (VM Guest workload)
   5.9 Citrix Web Interface
   5.10 Citrix License Server
   5.11 Citrix NetScaler SDX
   5.12 Citrix EdgeSight
   5.13 User Profile Management Solution
   5.14 Active Directory
   5.15 Database Platform
Appendix A. Decision Points
Appendix B. Server Inventory

(Each numbered section within chapter 5 contains Overview, Key Decisions and Design subsections.)

1. Executive Summary

1.1 Audience

This reference architecture document is created as part of a Citrix Validated Solution (CVS) and describes the detailed architecture and configuration of the components contained within. Readers of this document should be familiar with Citrix XenApp, its related technologies and the foundational components: Cisco UCS, Cisco Nexus, Nimble Storage and Microsoft Hyper-V Server 2012.

1.2 Purpose

The purpose of this document is to provide high-level design information describing the architecture for the Citrix XenApp Hosted Shared Desktop (HSD) Citrix Validated Solution built on Cisco UCS compute, Cisco Nexus switching and a Nimble Storage array.

1.3 Reference Architecture

To facilitate rapid and successful deployments of Citrix XenApp Hosted Shared Desktops, Citrix Consulting APAC have procured, built and tested a solution based on Cisco UCS, Cisco Nexus and Nimble Storage hardware. The Citrix Validated Solution provides prescriptive guidance on Citrix, Cisco and Nimble Storage design, configuration and deployment settings, thereby allowing customers to quickly deliver Hosted Shared Desktop workloads using Citrix XenApp. Extensive testing was performed using Login VSI to simulate real-world workloads and determine optimal configurations for the integration of the components that make up the overall solution.

2. Architecture Overview

The Citrix Validated Solution and its components were built and validated to support up to 1,000 individual user sessions running XenApp Hosted Shared Desktops on Windows Server 2008 R2 Remote Desktop Session Hosts operating as virtual machine instances on Microsoft Hyper-V Server 2012. This architecture is a single, self-supporting modular component identified as a Pod, allowing customers to consistently build and deploy scalable environments.

2.1 Citrix Virtual Desktop Types

Although this Citrix Validated Solution document references Citrix XenApp Hosted Shared Desktops, it also makes reference to Hosted Virtual Desktops (HVD). Both types of virtual desktops are described below for reference. For more information, refer to Citrix FlexCast delivery methods: http://flexcast.citrix.com/

Hosted Shared Desktop (HSD). A Windows Remote Desktop Session Host using Citrix XenApp to deliver Hosted Shared Desktops in a locked down, streamlined and standardised manner with a core set of applications. Using a published desktop on XenApp, users are presented a desktop interface similar to a Windows 7 look and feel. Each user runs in a separate session on the XenApp server.

Hosted Virtual Desktop (HVD), aka Hosted VDI. A Windows 7 desktop instance running as a virtual machine where a single user connects to the machine remotely: a 1:1 relationship of one user to one desktop. There are differing types of the hosted virtual desktop model (existing, installed, pooled, dedicated and streamed).

This document will primarily discuss the Citrix Validated Solution for Hosted Shared Desktops.

2.2 The Pod Concept

The term pod is referenced throughout this solution design. In the context of this document a pod is a known entity: an architecture that has been pre-tested and validated. A pod consists of the hardware and software components required to deliver Citrix XenApp capacity for up to 1,000 Hosted Shared Desktop user sessions. The pod prescribes the physical and logical components required to scale out the number of HSD desktops in increments of 1,000 users or part thereof.

2.3 Justification and Validation

The construct of this Citrix Validated Solution is based on many decisions that were made during validation testing. Testing was carried out using the Login Virtual Session Indexer (Login VSI), an industry standard tool for user/session benchmarking. Login VSI allows comparisons of platforms and technologies under the same repeatable load. The Medium VSI workload, which is expected to approximate the average office worker during normal activities, was used throughout testing.

2.4 High Level Solution Overview

The diagram below depicts the Citrix XenApp Hosted Shared Desktop technology stack.

Figure 1. Solution Stack

Hosted Shared Desktops (HSD). This solution will focus on the delivery of virtual desktops based on Microsoft Windows 2008 R2 Remote Desktop Services session host workloads powered by Citrix XenApp 6.5.

Microsoft Hyper-V Server 2012 (Hyper-V). The hypervisor selected to host the virtualised desktop and server instances for this solution is Microsoft Hyper-V Server 2012. Hyper-V will be deployed onto the Cisco UCS blades and configured to boot from iSCSI SAN.

Citrix Provisioning Services (PVS). Citrix XenApp server workloads will be streamed by Provisioning Services 6.1 using a predefined vDisk image containing the optimised operating system and Tier-1 application set.

Citrix XenApp. Tier-2 [1] applications, which may include line of business or customer specific applications that are not embedded as part of the disk image, may be delivered using Citrix XenApp or Microsoft App-V [2].

Citrix Web Interface. Virtualised Web Interface servers will be deployed to provide application and desktop resource enumeration. The Web Interface servers will be load balanced using Citrix NetScaler appliances.

Citrix NetScaler SDX 11500. NetScaler SDX appliances configured with high availability (HA) virtual instances will be deployed to provide remote access capability to the Hosted Shared Desktops and server load balancing of Citrix services.

Citrix EdgeSight. Citrix EdgeSight will provide monitoring and alerting capabilities into the Citrix XenApp and application stack.

Cisco UCS. The hardware platform of choice for this solution is Cisco UCS, consisting of the UCS 5108 chassis and UCS B200 M3 blades. Second generation Fabric Interconnects (6248UP) and Fabric Extenders (2204XP) are utilised. The Hyper-V servers will be hosted on Cisco UCS hardware.

Cisco Nexus. Second generation Cisco Nexus 5548UP switches are used to provide converged network connectivity across the solution using 10GbE.

Nimble Storage. Hypervisor operating system disks will be delivered via boot from iSCSI SAN. Shared storage is provided in the form of iSCSI mounted volumes and Cluster Shared Volumes (CSVs) for virtual disk images.

Supporting Infrastructure. The following components are assumed to exist within the customer environment and are required infrastructure components:

- Microsoft Active Directory Domain Services.
- A suitable Microsoft SQL database platform to support the solution database requirements.
- Licensing servers to provide Citrix and Microsoft licenses.
- CIFS/SMB file sharing. This can be provisioned as part of the solution using Windows Server Failover Clustering with the General Use file server role enabled. Please refer to the section SMB File Services.

This design document will focus on the desktop virtualisation components, which include the desktop workload, desktop delivery mechanism, hypervisor, hardware, network and storage platforms.

2.5 Assumptions

The following assumptions have been made:

- Required Citrix and Microsoft licenses and agreements are available.
- Required power, cooling, rack and data centre space is available.
- No network constraints exist that would prevent the successful deployment of this design.
- Microsoft Windows Active Directory Domain Services are available.
- A Microsoft SQL database platform is available.

[1] The solution design for Tier-2 applications delivered by Citrix XenApp is out of scope for this document.
[2] The solution design of Microsoft App-V components is out of scope for this document.

3. Logical Architecture

3.1 Logical Component Overview

The logical components that make up the requirements to deliver a 1,000 user XenApp Hosted Shared Desktop solution are described below:

Figure 2. Hosted Shared Desktops - Logical Component View

The following Citrix components are required:

- Citrix XenApp - Hosted Shared Desktop virtualisation platform.
- Citrix Provisioning Services - workload delivery platform.
- Citrix User Profile Management - user personalisation.
- Citrix Web Interface - XenDesktop and XenApp resource enumeration.
- Citrix License Server - pooled management of Citrix licenses.
- Citrix NetScaler SDX 11500 - remote access to the desktop instances and server load balancing capabilities for the Citrix Web Interface servers and other Citrix services.
- Citrix EdgeSight - Citrix and application monitoring.

4. Physical Architecture

4.1 Physical Component Overview

This Citrix Validated Solution is built on a Cisco Unified Computing System (UCS), Cisco Nexus switches and a Nimble Storage array; these components define the overall hardware architecture. Figure 3 defines the Cisco and Nimble Storage array hardware components required to provide the 1,000 Hosted Shared Desktop pod delivered by Citrix XenApp.

Figure 3. Physical Component View

Resource | Components | Patches/Revisions
Compute | Cisco UCS 5108 Blade Server Chassis; Cisco UCS B200 M3 B-Series Blades (dual Intel 2.50 GHz E5-2640 Xeon processors, 128GB RAM, Cisco Virtual Interface Card 1240); Cisco UCS 6248UP Series Fabric Interconnects; Cisco UCS 2204XP Series Fabric Extenders | Cisco UCS Manager 2.1(1b); blade firmware B200M3.2.0.3.0.051620121210; Fabric Interconnect 5.0(3)N2

Storage | Nimble Storage array: Head Shelf, Model CS240G-X4; dual HA controllers; internal disks: 12 x 2000GB NL-SAS drives, 4 x 600GB SSD drives; iSCSI for all data paths | Software version: 1.4.7.0-45626-opt (current version at the time of testing)
Network | Cisco Nexus 5548UP Series Switch | System version: 5.1(3)N2(1)
Remote Access & Server Load Balancing | Citrix NetScaler SDX 11500 appliances. Virtual instances configured in HA. |

Table 1. Hardware Components

4.2 Physical Component Design

BOM Cisco UCS Compute Hardware

Part Number | Description | Quantity
N20-Z0001 | CISCO Unified Computing System | 1
N20-C6508 | CISCO UCS 5108 Blade Svr AC Chassis/0 PSU/8 fans/0 fabric extender | 2
UCSB-B200-M3 | CISCO UCS B200 M3 Blade Server w/ CPU, memory, HDD, mLOM/mezz | 10
UCS-MR-1X162RY-A | CISCO 16GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v | 80
UCS-CPU-E5-2640 | CISCO 2.50 GHz E5-2640/95W 6C/15MB Cache/DDR3 1333MHz | 20
UCSB-MLOM-40G-01 | CISCO VIC for UCS blade servers capable of up to 40GbE | 10
N20-BBLKD | CISCO UCS 2.5 inch HDD blanking panel | 20
UCSB-HS-01-EP | CISCO Heat Sink for UCS B200 M3 server | 20
UCS-IOM-2208XP | CISCO UCS 2204XP I/O Module (4 External, 16 Internal 10Gb Ports) | 4
N20-PAC5-2500W | CISCO 2500W AC power supply unit for UCS 5108 | 8
CAB-AC-16A-AUS | CISCO Power Cord, 250VAC, 16A, Australia C19 | 8
N20-FAN5 | CISCO Fan module for UCS 5108 | 16
N01-UAC1 | CISCO Single phase AC power module for UCS 5108 | 2
N20-CAK | CISCO Access. kit for 5108 Blade Chassis incl Railkit, KVM dongle | 2
N20-FW010 | CISCO UCS 5108 Blade Server Chassis FW package | 2
UCS-FI-6248UP | CISCO UCS 6248UP 1RU Fabric Int/No PSU/32 UP/12p LIC | 2
UCS-FI-DL2 | CISCO UCS 6248 Layer 2 Daughter Card | 2
UCS-BLKE-6200 | CISCO UCS 6200 Series Expansion Module Blank | 4
UCS-FAN-6248UP | CISCO UCS 6248UP Fan Module | 8

UCS-ACC-6248UP | CISCO UCS 6248UP Chassis Accessory Kit | 4
N10-MGT010 | CISCO UCS Manager v2.0 | 2
CAB-9K10A-AU | CISCO Power Cord, 250VAC 10A 3112 Plug, Australia | 4
UCS-PSU-6248UP-AC | CISCO UCS 6248UP Power Supply/100-240VAC | 4
SFP-H10GB-CU5M | CISCO 10GBASE-CU SFP+ Cable 5 Meter | 14

Table 2. Cisco UCS Compute Hardware

BOM Cisco Nexus Network Hardware

Part Number | Description | Quantity
N5K-C5548UP-FA | CISCO Nexus 5548UP Chassis, 32 10GbE Ports, 2 PS, 2 Fans | 2
CAB-9K10A-AU | CISCO Power Cord, 250VAC 10A 3112 Plug, Australia | 4
N55-D160L3-V2 | CISCO Nexus 5548 Layer 3 Daughter Card, Version 2 | 2
N5KUK9-513N1.1 | CISCO Nexus 5000 Base OS Software Rel 5.1(3)N1(1) | 2
N55-PAC-750W | CISCO Nexus 5500 PS, 750W, Front to Back Airflow (Port-Side Outlet) | 4
N5548P-FAN | CISCO Nexus 5548P and 5548UP Fan Module, Front to Back Airflow | 4
N5548-ACC-KIT | CISCO Nexus 5548 Chassis Accessory Kit | 2
N55-M-BLNK | CISCO Nexus 5500 Module Blank Cover | 2
N55-BAS1K9 | CISCO Layer 3 Base License for Nexus 5500 Platform [3] | 2
SFP-10G-SR= | CISCO 10GBASE-SR SFP Module | 4

Table 3. Cisco Nexus Switch Hardware

[3] The Layer 3 routing function on the Cisco Nexus switch is provided by N55-BAS1K9 - Cisco Layer 3 Base License for Nexus 5500 Platform. Routing can either be terminated at the Nexus 5548UP level or further upstream utilising existing network infrastructure.

BOM Nimble Storage Array Hardware

Part Number | Description | Quantity
CS240G-X4 | CS240G-X4 Storage Array w/10GbE, 24TB Raw, 16-33TB Usable, 2.4TB Flash Cache, 2x10GigE + 2x1GigE, High Perf Ctlr | 1

Table 4. Nimble Storage Array Hardware

4.3 Hardware Support and Maintenance

Cisco UCS and Nexus

Part Number | Description | Quantity
CON-SNT-2C6508 | CISCO UC SUPPORT 8X5XNBD 5108 Blade Server Chassis | 2
CON-SNT-B200M3 | CISCO UC SUPPORT 8X5XNBD UCS B200 M3 Blade Server | 10
CON-SNT-FI6248UP | CISCO UC SUPPORT 8X5XNBD UCS 6248UP 1RU Fabric Interconnect/2PSU/2 | 2
CON-SNT-C5548UP | CISCO SUPPORT 8X5XNBD Nexus 5548UP | 2

Nimble Storage

Part Number | Description | Quantity
SLA-CS240-4HR-1YR | 4 Hour Serv/Softw Support for 240; 24x7, 1 Yr, Not available in all areas * | 1

5. High-Level Design

5.1 Network / Cisco Nexus

Overview

Nexus A and Nexus B identify the pair of Cisco Nexus 5548UP switches that will be deployed as part of the solution, forming the network switching components of the architecture. Figure 4 illustrates high-level connectivity for the individual components:

Figure 4. Network Component Connectivity

Key Decisions

Decision Point | Description / Decision
Nexus Switch Firmware | Cisco Nexus 5548UP Series Switch - System version: 5.1(3)N2(1)
Layer 3 Routing | Optional [4]
Number of Switches | Two: Nexus 5548UP A, Nexus 5548UP B

[4] Layer 3 routing on the Cisco Nexus switch is provided by N55-BAS1K9 - Cisco Layer 3 Base License for Nexus 5500 Platform. This can either be terminated at the Nexus 5548UP level or utilise existing network infrastructure.

Table 5. Cisco Nexus Key Decisions

Design

Solution vlan requirements:

Vlan Name | Vlan ID | Description
Hostmgmt_vlan | vlan 20 | Host Management vlan.
Infraserver_vlan | vlan 25 | Infrastructure server vlan.
iSCSI_vlan_A | vlan 31 | iSCSI storage for Fabric A vlan.
iSCSI_vlan_B | vlan 32 | iSCSI storage for Fabric B vlan.
hyperv-live-migration | vlan 33 | Hyper-V VM live migration vlan.
xa001_vlan | vlan 80 | XenApp worker vlan.
Pvs-Streaming_vlan | vlan 35 | PVS streaming vlan.
vpc-native-vlan | vlan 2 | Native vlan for vPC untagged packets.
VM-Cluster-HB-vlan | vlan 34 | VM File Server Cluster Heartbeat. [5]

Table 6. Cisco Nexus/UCS vlan Requirements

At a high level, the pair of Nexus switches will provide Layer 2 redundancy using virtual port channel (vPC) configurations between the switch pair and the Fabric Interconnects. Layer 3 routing is expected to be carried out by the customer's existing aggregation or core switching layer infrastructure. Optionally, Layer 3 can be configured on the Nexus 5548UP switch pair [6] using Hot Standby Router Protocol (HSRP) to add Layer 3 redundancy capability.

Connectivity to the Nimble Storage array is via individual switch ports on each Nexus switch, with network redundancy and failover being provided at the Nimble Storage array level in conjunction with the Nexus switch pair and the Microsoft Windows Server 2012 native Multipath I/O driver.

[5] Required if using the virtual file server cluster option; refer to the section SMB File Services.
[6] Layer 3 routing on the Cisco Nexus switch is provided by N55-BAS1K9 - Cisco Layer 3 Base License for Nexus 5500 Platform.

5.2 Cisco UCS

Overview

Cisco UCS is comprised of many physical and logical entities, managed by Cisco UCS Manager. Cisco UCS provides a next-generation data centre platform that unifies computing, networking, storage access and virtualisation resources into a single unified system.

Key Decisions

Decision Point | Description / Decision
Service Profiles | Allow servers to be stateless, with logical entities provided as part of a profile that applies identity, connectivity (HBA, NIC), firmware and other assignments, based on templates that can be assigned to a server hardware profile.
Network | The Cisco Virtual Interface Card 1240 will be configured within the service profile to present the following networks to each Hyper-V host: Live Migration (CSV I/O Redirection Network); VM Traffic (multiple vlans); Host Management (Cluster Heartbeat); iSCSI storage (IPSAN).
Storage | The Cisco UCS B200 M3 blades will be diskless and configured to boot from iSCSI storage presented from the Nimble Storage array.
Service Profile Templates | Two service profile templates are required. Service Profile Infrastructure Hyper-V hosts: HyperV_Infra_BootiSCSI. Service Profile HSD Hyper-V hosts: HyperV_HSD_BootiSCSI.
UUID Suffix Pool | Single UUID Suffix Pool: Hyper-V-Hosts.
MAC Address Pool | Two MAC Pools: Fabric-A, Fabric-B.
iSCSI Initiator IP Pools | Two IP Pools are required (HyperV-iSCSI-Initiator-Pools): IP Range Fabric-A, IP Range Fabric-B.
IQN Pools | Four pools are required. For Service Profile HyperV_HSD_BootiSCSI: iscsi-hsd-fabric-a, iscsi-hsd-fabric-b. For Service Profile HyperV_Infra_BootiSCSI: iscsi-infra-fabric-a, iscsi-infra-fabric-b.
QoS Policies | Two QoS Policies are required:

LiveMigration, iSCSI.
Boot Policies | Name: Boot-iSCSI.
vNIC Templates | Hyper-V Live Migration network: HV-LiveMig-Fab-A, HV-LiveMig-Fab-B. Hyper-V host management network: HV-MGMT-Fab-A, HV-MGMT-Fab-B. VM Data for HyperV_HSD_BootiSCSI Service Profile: HV-VM-HSD-Fab-A, HV-VM-HSD-Fab-B. VM Data for HyperV_Infra_BootiSCSI Service Profile: HV-VM-INF-Fab-A, HV-VM-INF-Fab-B. iSCSI traffic: HV-iSCSI-Fab-A, HV-iSCSI-Fab-B. VM Cluster Heartbeat: FS-VM-CHB-Fab-A, FS-VM-CHB-Fab-B.
BIOS Policies | Hyper-V_BIOS.

Table 7. Cisco UCS Key Decisions

Design

Two Cisco UCS 5108 Blade Server Chassis will be deployed to support 10 Cisco UCS B200 M3 B-Series Blades that will define the Server 2012 Hyper-V hosts. Cisco UCS 6248UP Series Fabric Interconnects will provide the connectivity to the Cisco UCS 2204XP Series Fabric Extenders fitted to each 5108 Blade Server Chassis. Cisco UCS Manager will be used to create the Service Profiles defining the virtual and logical entities required to configure each component.

Each Hyper-V host server will be configured with multiple paths to the Nimble Storage array using iSCSI, over separate vlans on Fabric A and Fabric B, using the Microsoft Windows Server 2012 native Multipath I/O driver. The Least Queue Depth load balancing method will be utilised for iSCSI data traffic, as per Nimble Storage recommendation and best practice.
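As a minimal sketch of the host-side MPIO configuration described above, the inbox MPIO cmdlets on Windows Server 2012 can be used; this is illustrative only and does not replace the prescriptive build steps of the full deployment guide:

```powershell
# Install the Multipath I/O feature on each Hyper-V host (a restart may be required).
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI-attached disks with the Microsoft Device Specific Module (DSM).
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Use Least Queue Depth (LQD) as the default load balance policy,
# per the Nimble Storage recommendation referenced in this design.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
```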

5.3 Nimble Storage

Overview

The storage platform utilised for this solution is a Nimble Storage array with internal disk drives only (no additional expansion shelves). At a high level, the Nimble Storage array provides the following features:

CASL architecture. Patented Cache Accelerated Sequential Layout (CASL) features include:
- Dynamic caching, using SSDs to cache data and metadata in flash for reads
- Write-optimised data layout
- Application-tuned block size
- Universal compression
- Efficient, instant snapshots
- Efficient replication
- Zero-copy clones

Array hardware. Features include:
- Dual controller architecture
- Dual power supplies
- Capacitor-backed Non-Volatile Random Access Memory, ensuring that all writes to the array not yet committed to disk are safely protected in the event of an unexpected power outage
- RAID 6, providing dual parity for disk protection
- A single hot spare drive in each Nimble Storage controller shelf and expansion shelf
- Dedicated 10GbE Ethernet for data traffic
- Dedicated 1Gb Ethernet for management traffic

Figure 5 below provides a high-level overview of the Nimble Storage architecture and describes a typical HSD Hyper-V host.

Figure 5. Nimble Storage - System Overview

Key Decisions

Decision Point | Description / Decision
Hardware Details | Nimble array CS240G-X4. Software version 1.4.7.0-45626-opt.
iSCSI Initiator Groups | Required, per volume.
Storage Types | iSCSI Hyper-V boot volumes; iSCSI Hyper-V CSV volumes; iSCSI volumes for CIFS file sharing.
Thin Provisioning (volume reserve) | Enabled.
Networks | Dedicated 10GbE interfaces will be used for data traffic. Dedicated 1Gb interfaces will be used for management traffic. Two discrete vlans will be used to separate data traffic (iSCSI Fabric A and iSCSI Fabric B) through the fabric to the 10GbE interfaces on the array. iSCSI discovery will be configured using the 2 data addresses.
MTU | Jumbo Frames will be enabled for both data interfaces on each controller at 9000 MTU.
Performance Policies | Hyper-V CSV for Cluster Shared Volumes (Compress: On; Cache: On). Default for boot volumes (Compress: On; Cache: On). Windows File Server for CIFS volumes (Compress: On; Cache: On).
Multipath I/O | Native Windows Server 2012 MPIO driver.
MPIO Load Balancing Method | Least Queue Depth (LQD).
SMTP Server | An SMTP server will be specified to allow the array to send email alerts.
Auto Support | "Send Auto Support data to Nimble Storage support" will be checked to allow the array to upload data to Nimble technical support. Proxy server: optional.
SNMP | Enabled as per customer requirements.

Table 8. Nimble Storage Key Decisions
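To illustrate the host side of these decisions, a minimal sketch using the Windows Server 2012 iSCSI initiator cmdlets is shown below. The two discovery addresses are hypothetical placeholders for the array's Fabric A and Fabric B data IPs:

```powershell
# Register the Nimble array's two iSCSI discovery addresses (one per fabric).
New-IscsiTargetPortal -TargetPortalAddress 10.0.31.10   # iSCSI Fabric A (example)
New-IscsiTargetPortal -TargetPortalAddress 10.0.32.10   # iSCSI Fabric B (example)

# Connect each discovered target with multipathing enabled, and make the
# sessions persistent so they reconnect at boot.
Get-IscsiTarget | Where-Object { -not $_.IsConnected } |
    Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```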

Design

The Nimble Storage array CS240G-X4 used within this design provides a highly available, redundant controller solution within a single 3U enclosure. The Nimble Storage array is a converged storage and backup system in one, containing sufficient internal disk capacity to provide the performance required to meet the demands of the solution.

Each controller in the Nimble Storage array high availability pair is connected with dual data paths to the network, which allows the storage system to operate in the event of component failure. A single failure of a data path will not result in a controller failover. From the hypervisor host server perspective, a multipath I/O driver will be used to ensure the optimum path to the storage layer.

The Nimble Storage array only supports block-based storage using iSCSI. This CVS design document therefore discusses the use of a Microsoft Windows-based file server for the purpose of hosting SMB file shares for data such as user data, user profiles, the ISO and media repository and Citrix Provisioning Services vDisk image files.

The following sections contain recommended configuration parameters for the logical storage entities.

Required Volumes:

Volume Name | Performance Policy | Volume Size | Description
Quorum-Infra01 | Default | 2GB | Hyper-V Infrastructure Failover Cluster disk witness.
Quorum-Infra02 | Default | 2GB | File Server VM Failover Cluster disk witness.
hypervnimxxx | Default | 2,500GB (150GB x 16) | Hyper-V boot volumes, where xxx represents the server's ordinal number.
infra-pvs01 | Windows File Server | 1,000GB | PVS CIFS share for vDisk storage.
infra_iso01 | Windows File Server | 1,500GB | Media and ISO repository.
infra_cifs01 | Windows File Server | 1,000GB | UPM data and redirected folders, assuming 1GB of data per user.
hsd-csv01 | Hyper-V CSV | 2,500GB [7] | XenApp VM storage, PVS write cache drives and hypervisor.
infra-csv01 | Hyper-V CSV | 2,500GB | Infrastructure VM virtual disks.
TOTAL (without SQL storage) | | 11,000GB (~11TB) |

Table 9. Required Nimble Storage volumes

[7] Minimum storage requirement: the total storage size is based on a 20GB persistent drive and 16GB hypervisor overhead (VM memory) per XenApp server. This drive will contain the Windows pagefile, PVS write cache and redirected logs.

Volume Parameters:

Volume Name | Volume Reserve | Volume Quota | Volume Warning | Description

hypervnimxxx | 0% | 100% | 80% | 100% thin provisioned; allows 100% usage of disk; warn at 80% utilisation.
infra-pvs01 | 0% | 100% | 80% | 100% thin provisioned; allows 100% usage of disk; warn at 80% utilisation.
infra_iso01 | 0% | 100% | 80% | 100% thin provisioned; allows 100% usage of disk; warn at 80% utilisation.
infra_cifs01 | 0% | 100% | 80% | 100% thin provisioned; allows 100% usage of disk; warn at 80% utilisation.
hsd-csv01 | 0% | 100% | 80% | 100% thin provisioned; allows 100% usage of disk; warn at 80% utilisation.
infra-csv01 | 0% | 100% | 80% | 100% thin provisioned; allows 100% usage of disk; warn at 80% utilisation.

Table 10. Nimble Volume configuration

Volume Snapshot Parameters:

Volume Name | Snapshot Reserve | Snapshot Quota | Snapshot Warning | Description
hypervnimxxx | 0% | Unlimited (selection box checked) | 0% | No snapshot reserves; no snapshot quota limits; no warnings.
infra-pvs01 | 0% | Unlimited (selection box checked) | 0% | No snapshot reserves; no snapshot quota limits; no warnings.
infra_iso01 | 0% | Unlimited (selection box checked) | 0% | No snapshot reserves; no snapshot quota limits; no warnings.
infra_cifs01 | 0% | Unlimited (selection box checked) | 0% | No snapshot reserves; no snapshot quota limits; no warnings.
hsd-csv01 | 0% | Unlimited (selection box checked) | 0% | No snapshot reserves; no snapshot quota limits; no warnings.
infra-csv01 | 0% | Unlimited (selection box checked) | 0% | No snapshot reserves; no snapshot quota limits; no warnings.

Table 11. Nimble Volume snapshot configuration

Initiator Groups

Initiator Group | Volume Access | Initiator Names

HSD Hosts
ig-hypervnim001 | hypervnim001; hsd-csv01 | iqn.2013-07.com.microsoft.a.hypervnim:001; iqn.2013-07.com.microsoft.b.hypervnim:001
ig-hypervnim002 | hypervnim002; hsd-csv01 | iqn.2013-07.com.microsoft.a.hypervnim:002; iqn.2013-07.com.microsoft.b.hypervnim:002
ig-hypervnim003 | hypervnim003; hsd-csv01 | iqn.2013-07.com.microsoft.a.hypervnim:003; iqn.2013-07.com.microsoft.b.hypervnim:003
ig-hypervnim004 | hypervnim004; hsd-csv01 | iqn.2013-07.com.microsoft.a.hypervnim:004; iqn.2013-07.com.microsoft.b.hypervnim:004
ig-hypervnim005 | hypervnim005; hsd-csv01 | iqn.2013-07.com.microsoft.a.hypervnim:005; iqn.2013-07.com.microsoft.b.hypervnim:005
ig-hypervnim006 | hypervnim006; hsd-csv01 | iqn.2013-07.com.microsoft.a.hypervnim:006; iqn.2013-07.com.microsoft.b.hypervnim:006
ig-hypervnim007 | hypervnim007; hsd-csv01 | iqn.2013-07.com.microsoft.a.hypervnim:007; iqn.2013-07.com.microsoft.b.hypervnim:007
ig-hypervnim008 | hypervnim008; hsd-csv01 | iqn.2013-07.com.microsoft.a.hypervnim:008; iqn.2013-07.com.microsoft.b.hypervnim:008

Infrastructure Hosts
ig-hypervnim101 | hypervnim101; infra-csv01; Quorum-Infra01 | iqn.2013-07.com.microsoft.a.hypervnim:101; iqn.2013-07.com.microsoft.b.hypervnim:101
ig-hypervnim102 | hypervnim102; infra-csv01; Quorum-Infra01 | iqn.2013-07.com.microsoft.a.hypervnim:102; iqn.2013-07.com.microsoft.b.hypervnim:102

File Server Cluster nodes
ig-cifscluster01 | infra_cifs01; infra-pvs01; infra_iso01; Quorum-Infra02 | iqn.2013-07.com.microsoft:<fileservername01>; iqn.2013-07.com.microsoft:<fileservername02>

Table 12. Nimble Storage Initiator Groups

5.4 Microsoft Hyper-V Server 2012

Overview

Microsoft Hyper-V Server 2012 is utilised to provide the hypervisor hosting platform for the virtualised XenApp Hosted Shared Desktop and infrastructure server instances required to support the 1,000 HSD solution.

Figure 6 below depicts the physical connectivity between the Cisco UCS blade chassis, Cisco 6248UP Fabric Interconnects, Cisco Nexus 5548UP switches and the Nimble Storage CS240G-X4 array:

- Converged network with a total of 4 x 10GbE server ports per Cisco UCS chassis (2 x 10GbE connections per Fabric Extender).
- 2 x 10GbE uplink connections between the Fabric Interconnect and Nexus switch layer.
- 2 x 10GbE connections per Nimble Storage CS240G-X4 array to support iSCSI data traffic.

Figure 6. Hyper-V Host Configuration

Key Decisions

Configuration | Decision
Version | Microsoft Hyper-V Server 2012.
Hardware Settings | 10 x Cisco UCS B200 M3 blades across 2 x UCS 5108 chassis: 2 x Intel Xeon 2.50 GHz E5-2640 CPUs (12 cores, 24 with HT enabled); 128GB RAM DDR3-1600-MHz; Cisco VIC 1240; diskless blades, boot from iSCSI SAN.
Storage Settings | Boot from iSCSI SAN. Shared storage using iSCSI (CSV).
Network Settings | Cisco VIC 1240 presenting 8 x vNICs to each host: 1 x iSCSI Fabric-A (boot and shared storage); 1 x iSCSI Fabric-B (boot and shared storage). Network team (cluster heartbeat path 1): Team-Host-Management (Active/Passive, Switch Independent Mode) comprising Host-Management-Fabric-A and Host-Management-Fabric-B. Network team (cluster heartbeat path 2, internal cluster network): Team-Live-Migration (Active/Passive, Switch Independent Mode) comprising Live-Mig-Fabric-A and Live-Mig-Fabric-B. Network team: Team-VM-Data (Active/Passive, Switch Independent Mode) comprising VM-Data-Fabric-A and VM-Data-Fabric-B (trunk ports).
Cluster Shared Volumes | The requirement for a dedicated CSV network is considered a low priority for the solution. The Live Migration of guest VMs is also considered a low priority; each component of the architecture is redundant and can tolerate at least a single component failure without loss of service. A dedicated Live Migration network was therefore deemed unnecessary; the Live Migration network will be shared with CSV I/O redirection traffic (in the unlikely event I/O redirection occurs).
Hyper-V Switch | VM-Switch01. Associated Hyper-V interface: Team-VM-Data.
Failover clustering | Infrastructure Hyper-V hosts (2 x hosts): Failover Clustering enabled; cluster name: clust-infra001; Node and Disk Majority; High Availability is required; Availability Sets are required for the DHCP role, PVS role, XenApp Controller role and File Services role (refer to the section SMB File Services). Hosted Shared Desktop Hyper-V hosts (8 x hosts):

Failover Clustering enabled; cluster name: clust-hsd001; Node Majority (adjust voting configuration); High Availability is required.
Scale-out Recommendation | Additional pods should be deployed to scale out HSD capacity; additional Hyper-V host/failover clusters will subsequently be added.
System Center 2012 - Virtual Machine Manager (VMM) - Hardware Settings | Windows Server 2012 Standard; 4 vCPUs; 16GB RAM; 150GB disk for operating system (C:\); 1 vNIC for production traffic.

Table 13. Hyper-V Key Decisions

Design

Virtual Machine Manager. System Center 2012 - Virtual Machine Manager (VMM) will be deployed as the management solution for the virtualised environment. VMM will provide the management interface to the virtualised Hyper-V environment for VM templates, logical networks, Hyper-V hosts, failover clusters and other related services.

Chassis Assignment. Hyper-V hosts will be configured such that hosts with even numbers have their primary network teams configured with the active NIC on Fabric-A, and hosts with odd numbers have their primary network teams configured with the active NIC on Fabric-B. iSCSI traffic will be roughly distributed across both fabrics at all times using the Least Queue Depth MPIO load balancing method. This ensures even distribution of traffic across the fabric and minimises impact in the event of a fabric failure. Figure 7 defines the Hyper-V host to physical chassis assignment that makes up the pod of 1,000 HSD desktop sessions (48 x XenApp virtual machine instances):

Figure 7. Hyper-V Host to Chassis Assignment

Storage Environment. As the Cisco UCS B200 M3 blades are diskless, the Hyper-V hosts will be configured to boot from SAN via iSCSI. Shared storage used to host the virtual machine disk images will be mounted by the Hyper-V hosts as Cluster Shared Volumes via iSCSI over dedicated vlans.

Network Environment. Each Hyper-V host will utilise a Cisco UCSB-MLOM-40G-01 Virtual Interface Card. The virtual interface card (VIC) will present multiple virtual NICs to the host, which will be mapped to I/O modules installed within the UCS chassis. Each UCS 5108 chassis is equipped with two 2204XP I/O Modules (Fabric Extenders). These will have two connections from each I/O module to the upstream 6248UP Fabric Interconnects. The Fabric Interconnects will have upstream connections to the Nexus 5548UP switches that provide connectivity to the core switching infrastructure.

Microsoft Failover Clusters. Failover clusters will be deployed for both the infrastructure Hyper-V hosts and the HSD Hyper-V hosts. Each failover cluster will be deployed with two separate paths for the cluster heartbeat and a shared network for CSV I/O redirection and Live Migration traffic. Availability Sets will be used to identify virtual machines that SCVMM will keep on separate hosts, e.g. DHCP servers and virtual file server nodes, for redundancy. The infrastructure failover cluster will utilise a Node and Disk Majority quorum configuration to ensure only a single node can own two of the three quorum resources (node and disk). In the event the solution is scaled beyond the 1,000 HSD desktops, an additional node will be added to the infrastructure cluster; at that time the disk witness will not be required.

Active Directory Integration. Each Hyper-V host will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to the Hyper-V role.
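As an illustrative sketch of the host build described above (team, switch, cluster and disk names follow Table 13; the IP address and witness disk name are hypothetical placeholders), the teams, virtual switch and infrastructure failover cluster could be created with the inbox Windows Server 2012 cmdlets:

```powershell
# Switch-independent, active/passive team for VM data traffic (per Table 13).
New-NetLbfoTeam -Name "Team-VM-Data" -TeamMembers "VM-Data-Fabric-A","VM-Data-Fabric-B" -TeamingMode SwitchIndependent
Set-NetLbfoTeamMember -Name "VM-Data-Fabric-B" -AdministrativeMode Standby

# External Hyper-V switch bound to the team; the management OS uses its own
# dedicated teams, so no management vNIC is shared with this switch.
New-VMSwitch -Name "VM-Switch01" -NetAdapterName "Team-VM-Data" -AllowManagementOS $false

# Infrastructure failover cluster with a Node and Disk Majority quorum.
New-Cluster -Name "clust-infra001" -Node "hypervnim101","hypervnim102" -StaticAddress 10.0.20.50
Set-ClusterQuorum -Cluster "clust-infra001" -NodeAndDiskMajority "Cluster Disk 1"

# Add the shared volume intended for infrastructure VM storage as a CSV
# ("Cluster Disk 2" is a placeholder for the infra-csv01 clustered disk).
Add-ClusterSharedVolume -Cluster "clust-infra001" -Name "Cluster Disk 2"
```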

5.5 SMB File Services

Overview

The Citrix Validated Solution for Hosted Shared Desktops has a dependency on Windows SMB file shares to host various storage components of the design, specifically:

- Provisioning Services vDisk store
- User personalisation data: Profile Management and redirected user folders
- ISO and media repository

Since the Nimble Storage array only supports block-based storage using iSCSI, this design discusses the use of a Microsoft Windows-based file server for this purpose. The file server must be deployed in a high availability mode for redundancy. Nimble Storage provides further guidelines and recommendations in the following document: http://info.nimblestorage.com/bpg-windows-file-sharing.html

This section discusses design requirements and integration points for a Microsoft Server 2012 Failover Cluster running the General Use file server role. Figure 8 below describes the architecture:

Figure 8. File Server Architecture

Important Note: The server operating systems discussed in this design for Citrix XenApp and Citrix Provisioning Services are currently supported using Microsoft Server 2008 R2 and as such do not support the SMB 3.0 features offered by a Microsoft Server 2012 file share. Therefore Transparent Failover, which enables Windows file shares in a Failover Clustering configuration to be continuously available, is not supported.

Key Decisions

Configuration | Decision
Highly available file server solution | 2-node Failover Cluster running the General Use file server role with a quorum witness disk. iSCSI shared storage.
Storage Volumes | Nimble volumes: PVS vDisk store; Profile Management and redirected user folders; ISO media repository; disk quorum.
Client Access Name | \\Infra-cifs01
SMB Shares | PVS vDisk store: \\infra-cifs01\PVS-Store (Path=F:\PVS-Store). Profile Management and redirected user folders: \\infra-cifs01\HSD-UPM (Path=E:\HSD-UPM) and \\infra-cifs01\HSD-UserData (Path=E:\HSD-UserData). ISO media repository: \\infra-cifs01\ISO (Path=G:\ISO).
Cluster Networks | Cluster Management: management, client access, cluster communications (path 1). Cluster Communications: cluster communications (path 2). iSCSI Fabric-A: iSCSI data traffic fabric A (path 1). iSCSI Fabric-B: iSCSI data traffic fabric B (path 2).
Hardware Settings | There are two ways the file server solution can be deployed, utilising either physical or virtualised instances. Option 1: the Hyper-V infrastructure Failover Cluster may be built with the full version of Microsoft Server 2012 Datacenter Edition and the General Use file server role configured alongside the Hyper-V role. Option 2: 2 x virtualised Microsoft Server 2012 VM cluster nodes within a Failover Cluster running the General Use file server role: Hyper-V VM guest; Windows Server 2012 Standard Edition;

4 vCPUs; 16GB RAM; 150GB disk for operating system (C:\); 1 vNIC for cluster management (host teamed network); 1 vNIC for cluster communications (host teamed network); 1 vNIC for iSCSI Fabric-A traffic; 1 vNIC for iSCSI Fabric-B traffic. Additional hypervisor requirements to support the virtual cluster (virtual machine instances): UCS service profile update HyperV_Infra_BootiSCSI with four additional vNICs: HV-VM-iSCSI-A and HV-VM-iSCSI-B (iSCSI), plus VM-Cluster-A and VM-Cluster-B as interface members of the new host team Team-VM-ClusterHB.
Failover Clustering | Failover Clustering enabled; Node and Disk Majority.

Table 14. Windows Server 2012 SMB File Services Key Decisions

Design

This high-level design discusses two options for the deployment of a highly available General Use file server cluster delivering SMB shares.

Option 1: The Hyper-V failover cluster clust-infra001 hosting the infrastructure virtual machines may be configured with Microsoft Server 2012 Datacenter Edition, thereby allowing the failover cluster role General Use file server to be deployed. HA will be maintained as per the cluster configuration. Each node in the infrastructure cluster will be granted access to the volumes used for presenting the SMB 3.0 shares. Nimble Storage initiator groups will be used to manage access to the volumes.

Option 2: The Hyper-V failover cluster clust-infra001 hosting the infrastructure virtual machines will host two additional virtual machines configured as a 2-node failover cluster hosting the General Use file server role. The operating system deployed to the VMs will be Microsoft Server 2012 Standard Edition. Availability sets will be configured to ensure that each node remains on a separate physical host for redundancy. Each node in the file server cluster will be granted access to the volumes used for presenting the SMB shares. Nimble Storage initiator groups will be used to manage access to the volumes.
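For either option, once the clustered disks are online, the General Use file server role and the shares from Table 14 could be created along the following lines. This is a sketch assuming the FailoverClusters and SmbShare modules on Windows Server 2012; the client access point IP, the clustered disk name and the security groups are placeholders, not values prescribed by this design:

```powershell
# Create the clustered General Use file server role with its client
# access point (\\infra-cifs01); "Cluster Disk 3" is a placeholder.
Add-ClusterFileServerRole -Name "infra-cifs01" -Storage "Cluster Disk 3" -StaticAddress 10.0.25.60

# Create the SMB shares defined in the design on the clustered disks.
# The DOMAIN\ groups below are illustrative only.
New-SmbShare -Name "PVS-Store"    -Path "F:\PVS-Store"    -FullAccess "DOMAIN\PVS-Servers"
New-SmbShare -Name "HSD-UPM"      -Path "E:\HSD-UPM"      -FullAccess "DOMAIN\HSD-Users"
New-SmbShare -Name "HSD-UserData" -Path "E:\HSD-UserData" -FullAccess "DOMAIN\HSD-Users"
New-SmbShare -Name "ISO"          -Path "G:\ISO"          -ReadAccess "Everyone"
```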

To support the requirements of the virtualised failover cluster (running as guest VMs), the underlying Hyper-V hosts will require additional Cisco UCS service profile and Hyper-V configuration. These changes are described at a high level in the following tables.

Cisco UCS Updates for Option 2:

Decision Point | Description / Decision
Service Profile Templates | Service profile template to be amended: Service Profile Infrastructure Hyper-V hosts: HyperV_Infra_BootiSCSI.
Networks | The Cisco Virtual Interface Card 1240 will be configured within the service profile to present the following networks to each Hyper-V host: Live Migration (CSV I/O Redirection Network); VM Traffic (multiple vlans); Host Management (Cluster Heartbeat); iSCSI storage (IPSAN); iSCSI for virtual machine storage; dedicated cluster network for virtual machine clusters.
vNICs | Hyper-V Live Migration network: HV-LiveMig-Fab-A, HV-LiveMig-Fab-B. Hyper-V host management network: HV-MGMT-Fab-A, HV-MGMT-Fab-B. VM Data for HyperV_Infra_BootiSCSI Service Profile: HV-VM-INF-Fab-A, HV-VM-INF-Fab-B. iSCSI traffic: HV-VM-iSCSI-A, HV-VM-iSCSI-B, HV-iSCSI-Fab-A, HV-iSCSI-Fab-B. VM Cluster Heartbeat: FS-VM-CHB-Fab-A, FS-VM-CHB-Fab-B.
vlans | Additional vlan required for the virtual machine cluster heartbeat: vlan ID 34.

Table 15. Cisco UCS updates for the virtualised Failover Cluster (running as guest VMs)

Hyper-V Updates for Option 2:

Decision Point | Description / Decision
Network Team | New host team: Team-VM-Cluster, interface members: VM-Cluster-A, VM-Cluster-B.
Hyper-V Switch | New: VM-Switch-ClusterHB; associated Hyper-V interface: Team-VM-Cluster; new vlan ID 34. New: VM-Switch-iSCSI-Fabric-A; associated Hyper-V interface: HV-VM-iSCSI-A; native vlan. New: VM-Switch-iSCSI-Fabric-B; associated Hyper-V interface: HV-VM-iSCSI-B; native vlan.

Table 16. Hyper-V updates for the virtualised Failover Cluster (running as guest VMs)
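To connect the two file server guest VMs to these new switches, each VM would be given additional network adapters along these lines (a sketch using the Hyper-V module; the VM names are placeholders, and tagging the heartbeat vlan at the VM adapter is one possible way of carrying vlan 34):

```powershell
# Placeholder names for the two virtualised file server cluster nodes.
$nodes = "fileserver01","fileserver02"

foreach ($vm in $nodes) {
    # Heartbeat adapter on the cluster heartbeat switch, tagged for vlan 34.
    Add-VMNetworkAdapter -VMName $vm -SwitchName "VM-Switch-ClusterHB" -Name "ClusterHB"
    Set-VMNetworkAdapterVlan -VMName $vm -VMNetworkAdapterName "ClusterHB" -Access -VlanId 34

    # One adapter per iSCSI fabric for in-guest iSCSI to the Nimble array.
    Add-VMNetworkAdapter -VMName $vm -SwitchName "VM-Switch-iSCSI-Fabric-A" -Name "iSCSI-A"
    Add-VMNetworkAdapter -VMName $vm -SwitchName "VM-Switch-iSCSI-Fabric-B" -Name "iSCSI-B"
}
```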

5.6 Citrix Provisioning Services

Overview

The Citrix Provisioning Services (PVS) environment is designed as a single farm and one initial site. A single site is used to host three Provisioning servers for the XenApp workloads, supporting up to two Hosted Shared Desktop pods, or 2,000 HSD desktops.

DHCP on Windows Server 2012. The Citrix Validated Solution uses the DHCP failover feature, providing the ability to have two DHCP servers serve IP addresses and option configuration to the same subnet or scope, providing uninterrupted availability of DHCP service to clients. The two DHCP servers will be configured to replicate lease information between themselves, allowing one server to assume responsibility for servicing clients for the entire subnet when the other server is unavailable, without using split scopes.

Figure 9 below describes the high-level components:

Figure 9. Citrix Provisioning Services Farm and related infrastructure

Key Decisions: PVS

Configuration | Decision
Version | Citrix Provisioning Services 6.1: CPVS61016v2.
Servers | Three Provisioning Services servers will be deployed; two are required to maintain high availability at all times. The third allows for maintenance of a single server while maintaining high availability.
Boot Services | PXE broadcast for TFTP services. DHCP services deployed on standalone Microsoft Windows Server 2012 Standard Edition servers leveraging the new high availability features for DHCP.
Hardware Settings | 3 x virtualised PVS servers. Hyper-V VM guest: Windows Server 2008 R2 Standard SP1; 4 vCPUs; 16GB RAM [8] (allows for caching of ~4 vDisk images); 100GB disk for operating system (C:\); 1 vNIC for production traffic; 1 vNIC for streaming traffic.
Storage Settings | PVS vDisk store hosted on a Windows SMB file share associated with a volume presented by the Nimble array.
Network Settings | PVS servers will be multi-homed with 2 x vNICs as follows: 1 vNIC for production traffic (synthetic); 1 vNIC for streaming traffic (synthetic). PVS target devices will be multi-homed with 2 x vNICs as follows: 1 vNIC for production traffic (synthetic); 1 vNIC for streaming traffic (emulated).
PVS Write Cache Settings | Local disk on the target device; a 20GB (E: drive) persistent virtual disk will be associated with each target device. Sizing guideline based on the HSD and application workload tested [9]: write cache size after 24 hours of testing is ~2GB, x 7 days of uptime = 14GB, + redirected logs + EdgeSight data = ~15GB, with 25% spare storage capacity.
PVS Farm Configuration, Database and Service Account Information | Farm name: refer to the Appendix: DECISION POINT. License server: LIC001. Mirrored database running on Microsoft SQL Server 2008 R2. Refer to the Appendix: DECISION POINT for database information, service account information and failover partner information. Additional permissions: SELF Write Public Information.

[8] Recommended minimum memory requirement for Citrix PVS servers; caters for up to 4 x vDisk images.
[9] Sizing guidelines are based on the application set tested as part of the scalability testing conducted within the CVS labs. This value is a guideline and the actual metrics may differ depending on unique customer applications and requirements.

Configuration | Decision
PVS vDisk Store | A single store shared by the 3 x PVS servers within the PVS site. The vDisk store will be configured to utilise the Windows SMB file share. Path: \\infra-cifs01\PVS-Store.
Device Collections | A single device collection created for the XenApp HSD VMs.
vDisk Properties | Access mode: Standard Image. Cache type: Cache on target device hard drive (VM vhdx file residing on iSCSI shared storage). Enable Active Directory machine account password management: Enabled. Microsoft Volume Licensing: refer to the Appendix: DECISION POINT.

Table 17. Citrix Provisioning Services Key Decisions

Key Decisions: DHCP

Configuration | Decision
Version, Edition | Windows Server 2012 DHCP role.
Servers | Two Windows Server 2012 servers will be deployed with the DHCP role enabled: DHCP001, DHCP002.
Hardware Settings | Hyper-V guest VMs: Windows Server 2012 Standard; 2 vCPUs; 4GB RAM; 150GB disk for operating system (C:\); 1 vNIC for production traffic.
(IPv4 Options) Failover | Failover enabled.

Table 18. DHCP Scope Key Decisions

Design

PVS Design

PVS Farm. The Citrix Provisioning Services (PVS) environment is designed as a single farm and one initial site. A single site is used to host three Provisioning servers for the XenApp workloads; two servers are required to maintain high availability at all times, and the third allows for maintenance of a single server while still maintaining high availability. A Windows SMB file share will be used for the storage of vDisks.

PVS Target Device Network. The Hyper-V legacy network adapter will be used to PXE boot the target devices and stream the Provisioning Services vDisk. The streaming network will be a dedicated Layer 2 non-routable network. Each target device will be configured with an additional vNIC (synthetic) for production traffic. All target devices will receive IP addressing requirements from DHCP.

PVS Farm Database. The farm database will be hosted on a Microsoft SQL 2008 R2 platform using synchronous database mirroring.

DHCP Design

DHCP. Two DHCP servers will host Microsoft DHCP services for the IP addressing requirements. DHCP Relay will be configured on the Cisco Nexus 5548UP switches, allowing client DHCP discover packets to be forwarded to their respective DHCP servers. DHCP scopes will be deployed as highly available in load balanced mode, using the capabilities of the DHCP role.

Active Directory Design

Active Directory Integration. Each server will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to the role.
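The load balanced DHCP failover relationship described above can be expressed with the Windows Server 2012 DhcpServer cmdlets. A minimal sketch follows; the scope ID is a hypothetical example for the PVS streaming subnet and the shared secret is a placeholder:

```powershell
# Create a load balanced (50/50) DHCP failover relationship between the
# two DHCP servers for an example scope; no split scopes are required.
Add-DhcpServerv4Failover -ComputerName "DHCP001" -PartnerServer "DHCP002" `
    -Name "DHCP001-DHCP002" -ScopeId 10.0.35.0 `
    -LoadBalancePercent 50 -SharedSecret "ChangeMe"
```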

5.7 Citrix XenApp

Overview

This validated solution specifically defines Hosted Shared Desktops as per the Citrix FlexCast delivery model. From a XenApp perspective, a single published application using the server desktop will be used to provide the HSD desktop. These XenApp servers will contain preinstalled core applications (Tier-1 applications) delivered to the user from within that desktop. The XenApp server will be configured such that the published desktop has the Windows 7 Themes look and feel.

Figure 10. Citrix XenApp Farm and related infrastructure

Key Decisions

Configuration | Decision
Version, Edition | Citrix XenApp 6.5 with Rollup 2 (XA650W2K8R2X64R02.msp).
Hardware Settings - XenApp Controllers | Virtualised XenApp Controller servers: Hyper-V VM guest; Windows Server 2008 R2 Standard Edition SP1; 4 x vCPUs; 8GB RAM; 100GB disk for operating system (C:\); 1 x vNIC (production traffic).
Hardware Settings - XenApp Worker | Virtualised XenApp Worker servers (session host servers): Hyper-V VM guest; Windows Server 2008 R2 Standard Edition SP1; 4 x vCPUs; 16GB RAM; 100GB disk for operating system (C:\); 1 x vNIC (production traffic); 1 x vNIC (streaming traffic).
Microsoft Remote Desktop Services licensing | Refer to the Appendix: DECISION POINT. Microsoft RDS licensing will be based on the customer's Microsoft licensing type and model.
Datacentre | The prescribed deployment is for a single data centre.
Number of Farms | Single XenApp farm.
Number of Zones | Single zone.
Zone Data Collectors (Controllers) | Two servers - dedicated primary and backup Controller servers.
Data Store | Mirrored database running on Microsoft SQL Server 2008 R2. Refer to the Appendix: DECISION POINT for database information, service account information and failover partner information.
Citrix Policies | Citrix policy application: applied using Active Directory Group Policy.
Worker Groups | HSD Workers, based on the Active Directory container holding the HSD worker computer objects.
Published Application | HSD, publishing the server desktop to the HSD Workers worker group.
Themes | Windows 7 Themes and look & feel enabled: http://support.citrix.com/article/ctx133429

Table 19. Citrix XenApp Key Decisions

Design

XenApp Farm. The XenApp farm consists of a single zone with two dedicated XenApp Controllers. These two Controllers will also be utilised as the XML brokers for the Web Interface sites. The farm database will be hosted on a Microsoft SQL 2008 R2 platform using synchronous database mirroring. The deployment is prescribed for a single datacentre.

HSD Desktop Enumeration. Web Interface will be utilised for the presentation of HSD desktops to end users. The Web Interface servers that provide the required application and desktop presentation will be load balanced with Citrix NetScaler.

Guest VM Provisioning. XenApp Controllers will be deployed using virtual machine templates, while XenApp Workers will be deployed as Provisioning Services target devices.

Active Directory Integration. Each server will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to its role.
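As an illustrative sketch only (assuming the XenApp 6.5 PowerShell SDK snap-in is installed on a Controller; names match the key decisions above, and the folder path is an assumption), the published server desktop could be created as follows:

```powershell
# Load the XenApp 6.5 PowerShell SDK.
Add-PSSnapin Citrix.XenApp.Commands

# Publish the server desktop to the HSD Workers worker group.
New-XAApplication -ApplicationType ServerDesktop `
    -DisplayName "HSD" -FolderPath "Applications" `
    -WorkerGroupNames "HSD Workers" -Enabled $true
```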

5.8 Citrix XenApp Worker (VM Guest workload)

Overview

The XenApp Worker will be deployed using a Citrix Provisioning Services standard mode vDisk (read-only, many to one). A number of configuration settings will be applied directly to the XenApp Worker golden image, with others applied using Active Directory Group Policies, ensuring optimal performance and consistent application. Aside from applications, a number of components were included in the golden image that may influence scalability:

- Antivirus, with specific configurations as documented within this section: http://support.citrix.com/article/ctx127030
- Themes, Windows 7 look and feel: http://support.citrix.com/proddocs/topic/xenapp65-admin/ps-csp-win7-desktopexperience.html
- Citrix EdgeSight monitoring: http://support.citrix.com/proddocs/topic/technologies/edgesight-wrapper.html

Figure 11. Virtual Desktop Configuration

Key Decisions

Virtual Machine Specifications. Based on the system testing carried out, the following table describes the most optimal configuration of the XenApp Worker workload for user/session density:

# of XA VMs per host | RAM | vCPU | HSD sessions per XA VM | Total # of HSD sessions per host
6 | 16GB | 4 | 21-22 | 126-132

Table 20. XenApp Worker VM Specifications

Configuration | Decision
Virtual Machine Specifications | Persistent drive: 20GB (the requirement for a pagefile further defines the size of this drive; refer to the Appendix: DECISION POINT). System drive: 100GB (PVS vDisk).

Pagefile | Refer to the Appendix: DECISION POINT. Not required based on the application set and workload exercised during validation testing.

Table 21. Recommended HSD VM Specification

Application Set. The testing utilised application sets representative of enterprise-level SOE applications. These applications will be embedded as part of the HSD XenApp Worker golden image. The following table represents the application set that forms the HSD desktop workload profile:

Configuration | Decision
Operating System | Microsoft Windows Server 2008 R2 Standard Edition with Service Pack 1; Hyper-V Integration Services 6.2.9200.16433.
Citrix Applications | Citrix XenApp 6.5; Citrix Offline Plugin v6.7; Citrix Profile Management v4.1.1.5; Citrix Provisioning Services Target Device x64 6.1.16.1204; Citrix Receiver v13.1.0.89; Citrix ShareFile Desktop Widget v2.22; Citrix EdgeSight for XenApp 6 Agent 5.4 x64 5.4.1.2.2.
Productivity Applications | Microsoft Excel Professional 2010 x86; Microsoft Outlook Professional 2010 x86; Microsoft PowerPoint Professional 2010 x86; Microsoft Word Professional 2010 x86.
Baseline Applications | Adobe Acrobat Reader v9.1; Adobe Flash Player v11.7.700.202; Adobe Shockwave Player v11.6.6; Adobe AIR v3.7.0.1860; Apple QuickTime v7.72.80.56; Bullzip PDF Printer v7.2.0.1304 [10]; Cisco WebEx Meetings; CutePDF Writer v3; Google Chrome v21.0.1180.89; Java 6 Update 21 v6.0.210; Kid-Key-Lock v1.2.1 [11]; Mozilla Firefox v14.0.1; Microsoft .NET Framework 4 Client Profile v4.0.30319; Microsoft Internet Explorer 9; Microsoft System Center Endpoint Protection 2012; Microsoft Silverlight v5.1.10411; Microsoft Windows Firewall;

Microsoft Windows Media Player v12.x; Skype v5.10.116; WinZip v16.5.10095.

Table 22. Pre-defined Application Set

[10] Application required by Login VSI for scalability testing.
[11] Application required by Login VSI for scalability testing.

Design

The HSD XenApp virtual workloads are deployed using Citrix Provisioning Services. Citrix Provisioning Services utilises a read-only virtual disk (vDisk), referred to as standard mode (the read-only mode used in Production). This vDisk can also be switched to private mode (the writable mode used under Maintenance) when updates are required to the base image. Each time updates are applied to this image in Maintenance mode, the image must be generalised to ensure it is ready to be deployed in its optimal form to many target devices. Standard mode images are unique in that they are restored to their original state at each reboot, deleting any newly written or modified data. In this scenario certain processes are no longer efficient, and optimisation of the image is required. Optimisations and configurations can be applied at several levels:

Workload Configuration - Golden Image. Changes are made directly to the golden image. These changes are considered inappropriate to apply using GPOs, or are settings required prior to generalising the image. The image must be generalised whilst it is in a writable mode (Private or Maintenance mode). Once the image has been generalised it is immediately shut down and reverted to a read-only mode (Production or Test mode) and is ready for many-to-one (many target devices to one vDisk image) deployment.

Workload Configuration - GPO. These changes are applied via Active Directory GPO and are considered baseline configurations required in almost all instances. Typical use cases for this GPO are event log redirection, Citrix Profile Management configuration and target device optimisations. In addition, this GPO has loopback processing enabled to allow user-based settings to be applied at the HSD worker Organisational Unit level.

User Optimisations - GPO. This Active Directory GPO contains optimisations for user operations within the HSD environment. User configurations cannot typically be deployed as part of the image and are independent. Typical use cases for this GPO are folder redirection and user specific optimisations.
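A minimal sketch of standing up one of the six XenApp Worker VMs per host to the specification in Tables 20 and 21 is shown below (Hyper-V module cmdlets; the VM name, CSV path and switch names are illustrative, and the legacy adapter provides the PXE boot path described in the PVS design):

```powershell
# Create a worker VM with the tested 4 vCPU / 16GB specification.
New-VM -Name "XAW001" -MemoryStartupBytes 16GB -SwitchName "VM-Switch01" -NoVHD
Set-VMProcessor -VMName "XAW001" -Count 4

# 20GB persistent write cache disk (pagefile, PVS write cache, redirected logs).
New-VHD -Path "C:\ClusterStorage\Volume1\XAW001\XAW001-WC.vhdx" -SizeBytes 20GB -Dynamic
Add-VMHardDiskDrive -VMName "XAW001" -Path "C:\ClusterStorage\Volume1\XAW001\XAW001-WC.vhdx"

# Legacy (emulated) adapter for PXE boot and streaming; the switch shown is a
# placeholder for the one carrying the dedicated PVS streaming vlan.
Add-VMNetworkAdapter -VMName "XAW001" -SwitchName "VM-Switch01" -IsLegacy $true -Name "PVS-Stream"
```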

5.9 Citrix Web Interface

Overview
The Web Interface provides users with access to XenApp applications, Hosted Shared Desktops and XenDesktop virtual desktops. Users access their resources through a standard Web browser using Citrix Receiver.

Key Decisions
- Version, Edition: Web Interface Version 5.4.2.
- Hardware Settings: 2 x Web Interface servers in High Availability. Hyper-V VM guest: Windows Server 2008 R2 Standard SP1, 2 vCPUs, 4GB RAM, 100GB disk for Operating System (C:\), 1 vNIC.
- Security: A server certificate will be installed to secure authentication traffic. HTTPS will be required for all web sites, ensuring that users' credentials are encrypted as they traverse the network.
- Load Balancing: Citrix NetScaler will be deployed to perform server load balancing and health checking of the Web Interface web sites. The Citrix Community Developer Network offers an AppExpert Template for server load balancing of the Citrix Web Interface. Refer to the following link: http://support.citrix.com/proddocs/topic/ns-main-appexpert-10-map/nsaapexpert-apptemp-wrapper-con.html

Table 23. Citrix Web Interface Key Decisions

Design
Citrix Web Interface servers will be load balanced using Citrix NetScaler SDX 11500 appliances with virtual instances configured in high availability (HA) mode. Citrix-specific service monitors will be utilised to monitor the health of the Web Interface sites to ensure intelligent load balancing decisions are performed on the service. Please refer to the section Citrix NetScaler SDX for more details.

Active Directory Integration. Each server will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to the web server role.
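As an illustration of the health checking the NetScaler service monitors perform, the sketch below probes each Web Interface site over HTTP. The site URLs are hypothetical placeholders; in the actual deployment this monitoring is configured on the NetScaler appliance rather than scripted.

```python
import urllib.request

# Hypothetical Web Interface site URLs; replace with the customer's sites.
WI_SITES = [
    "http://wi01.example.local/Citrix/XenApp",
    "http://wi02.example.local/Citrix/XenApp",
]

def site_is_healthy(url, timeout=5):
    """Return True when the Web Interface site answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError and socket timeouts
        return False

healthy = [u for u in WI_SITES if site_is_healthy(u)]
print(f"{len(healthy)}/{len(WI_SITES)} Web Interface sites in service")
```

A NetScaler monitor additionally validates page content and response time before admitting a service to the load balancing rotation; the probe above only demonstrates the reachability half of that decision.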

5.10 Citrix License Server

Overview
The Citrix License server is a required server component which provides licensing services to the Citrix products included in this document.

Key Decisions
- Version, Edition: Citrix License Service version 11.10.0.
- Hardware Settings: 1 x virtualised License server. Hyper-V VM guest: Windows Server 2008 R2 Standard SP1, 2 vCPUs, 4GB RAM, 100GB disk for Operating System (C:\), 1 vNIC.

Table 24. Citrix License Server Key Decisions

Design
Redundancy. Redundancy is built into the Citrix License service via the built-in 30-day grace period. Service redundancy can be further facilitated by the underlying hypervisor; therefore a single server is prescribed.

Active Directory Integration. The License server will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to the License server role.
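The grace-period reasoning behind prescribing a single server can be expressed as a simple check. This is an illustrative sketch only; the actual enforcement is internal to the Citrix products, and the 30-day figure is the grace period stated above.

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=30)  # built-in Citrix licensing grace period

def within_grace(last_license_server_contact, now=None):
    """True while products may keep operating after losing the License server.

    The window is long enough for the hypervisor to restart or rebuild the
    single virtualised License server, which is why no second node is needed.
    """
    now = now or datetime.utcnow()
    return now - last_license_server_contact <= GRACE_PERIOD

# Example: License server lost five days ago -> still inside the grace window.
print(within_grace(datetime.utcnow() - timedelta(days=5)))  # True
```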

5.11 Citrix NetScaler SDX

Overview
This section provides a high-level description of the proposed Citrix NetScaler SDX functionality and the Access Gateway features (Secure Remote Access functionality) required. The following diagram depicts the proposed Citrix NetScaler SDX logical architecture for a single data centre:

Figure 12. Citrix NetScaler SDX High Level Overview

Key Decisions
- Appliance Type: Citrix NetScaler SDX 11500.
- NetScaler Configuration: Four Citrix NetScaler SDX appliances are required for the following. Appliances 1 and 2: two appliances within the secured DMZ providing remote access capability; separate VPX instances will be created and configured in high availability between the physical appliances. Appliances 3 and 4: two appliances within the internal network segment providing load balancing capabilities; separate VPX instances will be created and configured in high availability between the physical appliances to support load balancing of the Citrix Web Interface and Citrix XenApp XML Brokers.
- Citrix Access Gateway: A single access scenario will be created for Citrix Receiver.
- Server Load Balancing: Load balancing of the following Citrix services: Citrix Web Interfaces, Citrix XenApp XML Brokers.
- Global Server Load Balancing (GSLB): GSLB directs DNS requests to the best performing GSLB site in a distributed Internet environment. GSLB enables distribution of traffic across multiple sites / data centres, and ensures that applications or desktops are consistently accessible. When a client sends a DNS request, the system determines the best performing site and returns its IP address to the client. DECISION POINT.
- Deployment: Single data centre.

Table 25. Citrix NetScaler SDX Key Decisions

Design
Two pairs of Citrix NetScaler appliances will be deployed in the appropriate network security zones and network segments. Each physical appliance will be configured with a single instance (initially) to support high availability between the physical appliances.

External facing (DMZ Network). The two NetScaler SDX appliances will be configured such that each virtual NetScaler instance will be in two-arm mode. A single SSL VPN virtual server will be created to support a single access scenario providing access to the Hosted Shared Desktop using a standard web browser and Citrix Receiver.

Internal facing (Internal Network). The two NetScaler SDX appliances will be configured such that each virtual NetScaler instance will provide load balancing capabilities to internal web sites. Load balancing will be provided for: Citrix Web Interface servers / sites, Citrix XML Brokers.
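Although a single data centre is prescribed here, making GSLB a decision point rather than a requirement, the selection logic GSLB applies should a second site be added can be sketched as follows. The site names, addresses and the round-trip-time metric are assumptions for illustration; a real deployment uses the appliance's own site metrics and configured load balancing method.

```python
# Hypothetical per-site metrics standing in for NetScaler GSLB site state.
SITES = {
    "dc-east": {"ip": "203.0.113.10", "rtt_ms": 42, "available": True},
    "dc-west": {"ip": "198.51.100.10", "rtt_ms": 18, "available": True},
}

def resolve(fqdn):
    """Answer a GSLB DNS query with the best performing available site."""
    candidates = [s for s in SITES.values() if s["available"]]
    if not candidates:
        raise LookupError(f"no GSLB site available for {fqdn}")
    best = min(candidates, key=lambda s: s["rtt_ms"])  # lowest round-trip time
    return best["ip"]

print(resolve("desktop.example.com"))  # -> 198.51.100.10 (dc-west is closer)
```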

5.12 Citrix EdgeSight

Overview
Citrix EdgeSight provides the real-time visibility necessary to test, deliver and monitor the performance of desktops and virtualised applications for both Citrix XenApp and client/end user devices. Citrix EdgeSight will provide the following capabilities:
- Systems monitoring of Citrix XenApp Hosted Shared Desktops.
- Reporting on utilisation, licenses and other measurable performance metrics.
- Alert generation based on pre-defined triggers/events.

Figure 13. Citrix EdgeSight High-level Architecture

Key Decisions
- EdgeSight Version: EdgeSight Version 5.4.
- EdgeSight Web Server: Single dedicated server. This server hosts the Web Console for configuring the EdgeSight deployment and running reports. It is the centralised console for accessing information that EdgeSight agents have gathered.
- EdgeSight Agents: EdgeSight Agents for XenApp will be installed on all XenApp servers for system monitoring and alerting.
- SQL Reporting Services: Hosted on the EdgeSight Web server. EdgeSight relies on SQL Reporting Services for collation and filtering of collected data.
- SQL Database: Microsoft SQL Server 2008 R2. The SQL database will be used to store all the statistical data that Citrix EdgeSight has gathered.
- SMTP and SNMP Alerting: DECISION POINT. SMTP and SNMP alerts and traps will be integrated into the customer's existing enterprise monitoring tools. Only Citrix and application specific metrics will be captured.
- Hardware Configuration: 1 x virtualised EdgeSight server. Hyper-V VM guest: Windows Server 2008 R2 x64 Standard SP1, 4 vCPUs, 16GB RAM, 100GB disk for Operating System (C:\), 300GB disk for EdgeSight data (D:\), 1 vNIC for production traffic.

Table 26. Citrix EdgeSight Key Decisions

Design
A single standalone instance of Citrix EdgeSight will be deployed and used to provide monitoring and alerting capabilities for Citrix and application specific metrics.

Active Directory Integration. The EdgeSight server object will be logically located in a dedicated Organisational Unit with specific Group Policy Objects applied as appropriate to the EdgeSight server role.
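The sketch below illustrates the kind of threshold-driven SMTP alert an EdgeSight trigger would raise toward the customer's monitoring tools. The relay host, addresses, metric and threshold are all placeholders pending the alerting decision point above.

```python
import smtplib
from email.message import EmailMessage

SMTP_RELAY = "smtp.example.local"  # placeholder for the customer's mail relay
THRESHOLD = 90                     # e.g. percent CPU on a XenApp worker

def raise_alert(metric_name, value, host):
    """Send a simple SMTP alert, mirroring an EdgeSight trigger firing."""
    if value < THRESHOLD:
        return  # metric within bounds; no alert
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {metric_name}={value} on {host}"
    msg["From"] = "edgesight@example.local"
    msg["To"] = "noc@example.local"
    msg.set_content(f"{metric_name} breached threshold {THRESHOLD} on {host}.")
    with smtplib.SMTP(SMTP_RELAY) as smtp:
        smtp.send_message(msg)

raise_alert("cpu_percent", 97, "hsd-worker-07")
```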

5.13 User Profile Management Solution

Overview
Profile management is enabled through a Windows service that provides a mechanism for capturing and managing user personalisation settings within the virtual desktop environment.

Key Decisions
- Version, Edition: Citrix User Profile Management version 4.x.
- Profile Storage Location: Windows SMB share - \\infra-cifs01\hsd-upm.
- Folder redirection: Applied using Group Policy (minimum requirements): Application Data, Documents. Redirected folder location: Windows SMB file share \\infra-cifs01\hsd-userdata.
- Configuration: Refer to the Appendix: DECISION POINT. Profile Management configurations will be applied using Active Directory GPOs.

Table 27. Citrix Profile Management Key Decisions

Design
Citrix Profile Management coupled with standard Microsoft Windows Folder Redirection using Active Directory GPOs will be deployed. Storage presented via a Windows SMB file share provided by the Nimble array will host the User Profiles and User Redirected Folders. All Profile Management configurations will be deployed using Active Directory GPOs.
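The share layout above implies a simple per-user path convention, illustrated below. The per-user subfolder structure is an assumption for illustration; the actual layout is governed by the Profile Management path and folder redirection GPO settings (typically via a %username% token).

```python
UPM_SHARE = r"\\infra-cifs01\hsd-upm"        # Citrix UPM profile store
USERDATA_SHARE = r"\\infra-cifs01\hsd-userdata"  # redirected folders

def profile_paths(username):
    """Per-user store and redirected-folder roots under the shares above."""
    return {
        "upm_profile": UPM_SHARE + "\\" + username,
        "app_data": USERDATA_SHARE + "\\" + username + "\\Application Data",
        "documents": USERDATA_SHARE + "\\" + username + "\\Documents",
    }

for name, path in profile_paths("jdoe").items():
    print(f"{name}: {path}")
```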

5.14 Active Directory

Overview
This validated solution has a requirement to use Microsoft Active Directory Domain Services and, as such, it is an assumption that such an environment already exists within the customer's environment. The decisions discussed below describe what is required of the existing Active Directory in the form of Organisational Units and Group Policy Objects. Supplementary requirements must also be met to ensure the authenticating Domain Controllers have sufficient capacity for the additional load placed on the system by adding further Users, Groups, machine objects and policy processing. DECISION POINT

Key Decisions
Group Policy Application. Recommended [12]:
- Each infrastructure server role will have a minimum security baseline (MSB) applied via GPO.
- All XenApp Workers will have a minimum security baseline (MSB) applied via GPO.
- XenApp Workers will have a Machine GPO applied specific to their application delivery requirements. This GPO will have Loopback mode enabled to apply user-based settings at the XenApp worker OU level.
- User-based policies will be applied at the XenApp worker level.
- Infrastructure servers such as Hyper-V hosts will be deployed in relevant OUs with MSBs applied appropriate to their role.

Table 28. Active Directory Key Decisions

Design
The recommended Group Policy and Organisational Unit strategy applied to this validated solution is based on deploying Group Policy Objects in a functional approach, e.g. settings are applied based on service, security or other functional role criteria. This ensures that security settings targeted at specific role services such as IIS, SQL etc. receive only their relevant configurations. It is anticipated that the final design will be customer dependent and based on other factors such as role-based administration and other typical elements outside the scope of this document. Refer to the Appendix: DECISION POINT

[12] Reference to Minimum Security Baselines in the form of GPOs will be the customer's responsibility. GPOs described in this document will in all cases be integrated into the customer's Active Directory environment.
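The toy resolver below illustrates the functional layering described above: GPOs link from the domain root down to the target OU, and the XenApp worker machine GPO with Loopback enabled contributes user settings at the worker OU. OU names and GPO links are illustrative only; the final structure is customer dependent.

```python
# Illustrative OU tree and GPO links; not a customer design.
GPO_LINKS = {
    "corp.local":                    ["Default Domain Policy"],
    "corp.local/CVS":                ["MSB - All Servers"],
    "corp.local/CVS/XenApp Workers": ["XenApp Worker Machine (Loopback)",
                                      "User Optimisations"],
}

def effective_gpos(ou_path):
    """Collect GPO links from the domain root down to the target OU,
    in the order Group Policy would apply them (parent before child)."""
    applied, parts = [], ou_path.split("/")
    for depth in range(1, len(parts) + 1):
        applied += GPO_LINKS.get("/".join(parts[:depth]), [])
    return applied

print(effective_gpos("corp.local/CVS/XenApp Workers"))
```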

Figure 14. Organisational Units and GPO Application

5.15 Database Platform

Overview
Citrix XenApp, Citrix Provisioning Services and Virtual Machine Manager require databases to store configuration metadata and statistical information. A highly available database platform utilising Microsoft SQL Server is required as the platform of choice. The following tables describe the minimum requirements of the database platform.

Key Decisions
- Version, Edition: Microsoft SQL Server 2008 R2 Standard Edition.
- Client Configuration: SQL Native Client 10 installed on all XenApp servers and Provisioning servers. MF20.dsn updated on all XenApp servers with mirror database information.
- XenApp65 database: Mirrored; synchronous mirroring with witness node. Please refer to the following articles for further details:
  http://support.citrix.com/proddocs/topic/xenapp65-install/psdatabase-ref-sql-srvr.html
  http://support.citrix.com/proddocs/topic/xenapp5fp-w2k3/psplanning-datastore-intro-v2.html
- Provisioning Services database: Mirrored; synchronous mirroring with witness node. Please refer to the following article for further details:
  http://support.citrix.com/proddocs/topic/provisioning-60/pvsinstall-task1-plan-6-0.html
- EdgeSight database: Please refer to the following article for further details:
  http://support.citrix.com/article/ctx122146
- VMM database: Please refer to the following articles for further details:
  http://technet.microsoft.com/en-us/library/gg610574.aspx
  http://technet.microsoft.com/en-us/sqlserver/gg490638.aspx

Table 29. Microsoft SQL Database Key Decisions

Design Considerations
This document provides design guidelines for the actual databases used in this Citrix Validated Solution; however, it does not attempt to provide design guidelines for Microsoft SQL Server. The design and implementation of a highly available Microsoft SQL Server platform is considered out of scope for this high-level design document.
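While full SQL Server design is out of scope, the client-side view of mirroring called out above (MF20.dsn updated with mirror information) amounts to a connection string naming both partners. The sketch below assumes hypothetical server names, and the exact DSN keywords should be verified against the Citrix articles referenced above; with synchronous mirroring plus a witness, clients fail over to the partner automatically.

```python
def mirrored_conn_string(principal, mirror, database):
    """Build an ODBC-style connection string for a mirrored database."""
    return (
        "DRIVER={SQL Server Native Client 10.0};"
        f"SERVER={principal};"
        f"Failover_Partner={mirror};"   # mirror partner consulted on failover
        f"DATABASE={database};"
        "Trusted_Connection=yes;"
    )

# Hypothetical server names for the XenApp data store (cf. MF20.dsn).
print(mirrored_conn_string("sql01.corp.local", "sql02.corp.local", "XenApp65"))
```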

Appendix A. Decision Points

This section defines the elements which need further discussion with the Customer, as these may be customer-specific.

- Naming Convention: Component nomenclature will need to be defined by the customer during the analysis phase of the project.
- Database Information: Microsoft SQL version, server name, instance name, port, database name, resource capacity (CPU, memory, storage).
- CTX Licensing: License server name.
- Microsoft Volume Licensing: Microsoft licensing of the target devices is a requirement for the solution and will be based on the customer's existing Microsoft licensing agreement. The appropriate licensing option must be selected based on Microsoft KMS or MAK volume licenses for PVS target devices. Note: the vDisk license mode must be set before target devices can be activated.
- Microsoft RDS Licensing (terminal server CALs): At least two Microsoft RDS License servers should be defined within the customer environment, including the mode of operation: per user or per device. Once defined, these configuration items will be deployed via Active Directory GPO.
- Windows Pagefile: The final applications used and workload usage patterns required by the customer will influence the decision on the requirement for a pagefile. Further customer validation will be required. Dependent on the requirement for a pagefile and its associated storage footprint, the write cache drive may require additional storage consideration (see the sizing sketch following this table).
- Web Interface: Further analysis may be required for customers with aggressive user logon time frames to their desktops. In this scenario additional Web Interface servers may be required and must be added to the NetScaler load balancing service group.
- Active Directory Domain Services: The Active Directory forest and domain will need to be discussed with the Customer to ensure sufficient capacity exists to support any additional authentication requirements the proposed solution may impose. Group Policy is likely to be deployed to suit the requirements of the customer. Assuming the existing deployment meets best practices, the GPOs described within this Citrix Validated Solution can be integrated into the customer environment, or configurations may be added directly to existing GPOs. Further analysis is required. Reference to Minimum Security Baselines in the form of GPOs will be the customer's responsibility. GPOs described in this document must in all cases be integrated into the customer's Active Directory environment.
- Citrix EdgeSight: Citrix EdgeSight requires naming conventions for company and departmental configurations. Dependent on customer preference, the Citrix EdgeSight database can be deployed to a suitable existing SQL platform or as a standalone instance.
- User Personalisation: User Profile Management will need to be further defined to meet customer expectations and application specific requirements. This includes folder redirection using GPO objects. Currently this document only describes the minimal requirements that were used for testing and validation purposes. Please refer to the following link for further details: http://support.citrix.com/article/ctx134081

Table 30. Decision Points
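As a back-of-envelope aid for the pagefile decision point above, the helper below adds a pagefile footprint to the write cache drive. The 20GB base figure comes from the write cache drive listed in Appendix B; the pagefile size and headroom factor are assumptions requiring customer validation against the final application set.

```python
def write_cache_disk_gb(base_cache_gb, pagefile_gb, headroom=0.2):
    """Size the PVS write cache drive: if a pagefile is required it lands on
    the same drive, so its footprint plus a safety margin must be added."""
    return (base_cache_gb + pagefile_gb) * (1 + headroom)

# Hypothetical figures: 20GB write cache drive from the inventory, and a
# 16GB pagefile should validation show that one is needed.
print(f"{write_cache_disk_gb(20, 0):.0f} GB without a pagefile")
print(f"{write_cache_disk_gb(20, 16):.0f} GB with a 16GB pagefile")
```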

Appendix B. Server Inventory

This section defines the inventory of servers (physical and virtual) required to deliver the Hosted Shared Desktop solution.

Physical Servers
- 2 x Hyper-V Host (Infrastructure): Physical B200-M3, 2 x hex-core CPU, 128GB RAM, SAN boot 150GB, VIC1240 NIC.
- 8 x Hyper-V Host (HSD): Physical B200-M3, 2 x hex-core CPU, 128GB RAM, SAN boot 150GB, VIC1240 NIC.

Virtual Servers
- 2 x Citrix XenApp Controller servers (Zone Data Collector): VM, 4 vCPU, 8GB RAM, 100GB disk, 1 vNIC.
- 2 x Citrix Web Interface servers: VM, 2 vCPU, 4GB RAM, 100GB disk, 1 vNIC.
- 3 x Citrix Provisioning servers: VM, 4 vCPU, 16GB RAM, 100GB disk, 2 vNICs.
- 1 x Citrix License server: VM, 2 vCPU, 4GB RAM, 100GB disk, 1 vNIC.
- 2 x DHCP servers: VM, 2 vCPU, 4GB RAM, 150GB disk, 1 vNIC.
- 48 x Citrix XenApp Worker (HSD) servers: VM, 4 vCPU, 16GB RAM, 100GB disk (PVS) plus 20GB (write cache), 2 vNICs.
- 1 x Virtual Machine Manager: VM, 4 vCPU, 16GB RAM, 150GB disk, 1 vNIC.
- 1 x Citrix EdgeSight server: VM, 4 vCPU, 16GB RAM, 100GB disk (C:) plus 300GB (D:), 1 vNIC.

Virtual Servers (Failover Cluster for general use file shares)
- 2 x File Server Cluster Nodes: VM, 4 vCPU, 16GB RAM, 150GB disk, 4 vNICs.

Table 31. Server Inventory
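As a sanity check on the inventory, the sketch below totals the virtual server footprint from Table 31 and compares it with the raw host capacity (ten B200-M3 hosts, each with two hex-core sockets and 128GB RAM). It ignores hypervisor overhead and failover headroom, so it is a starting point rather than a capacity plan.

```python
# (qty, role, vcpu, ram_gb) taken from Table 31; virtual servers only.
VMS = [
    (2,  "XenApp Controller (ZDC)",  4, 8),
    (2,  "Web Interface",            2, 4),
    (3,  "Provisioning Services",    4, 16),
    (1,  "License Server",           2, 4),
    (2,  "DHCP",                     2, 4),
    (48, "XenApp Worker (HSD)",      4, 16),
    (1,  "Virtual Machine Manager",  4, 16),
    (1,  "EdgeSight",                4, 16),
    (2,  "File Server Cluster Node", 4, 16),
]

total_vcpu = sum(qty * vcpu for qty, _, vcpu, _ in VMS)
total_ram = sum(qty * ram for qty, _, _, ram in VMS)
print(f"Virtual footprint: {total_vcpu} vCPUs, {total_ram} GB RAM")

# Host capacity: 2 infrastructure + 8 HSD hosts, 2 x 6 cores and 128GB each.
hosts, cores_per_host, ram_per_host = 10, 12, 128
print(f"Hosts provide {hosts * cores_per_host} cores, {hosts * ram_per_host} GB RAM")
```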

The copyright in this report and all other works of authorship and all developments made, conceived, created, discovered, invented or reduced to practice in the performance of work during this engagement are and shall remain the sole and absolute property of Citrix, subject to a worldwide, non-exclusive license to you for your internal distribution and use as intended hereunder. No license to Citrix products is granted herein. Citrix products must be licensed separately. Citrix warrants that the services have been performed in a professional and workmanlike manner using generally accepted industry standards and practices. Your exclusive remedy for breach of this warranty shall be timely re-performance of the work by Citrix such that the warranty is met. THE WARRANTY ABOVE IS EXCLUSIVE AND IS IN LIEU OF ALL OTHER WARRANTIES, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE WITH RESPECT TO THE SERVICES OR PRODUCTS PROVIDED UNDER THIS AGREEMENT, THE PERFORMANCE OF MATERIALS OR PROCESSES DEVELOPED OR PROVIDED UNDER THIS AGREEMENT, OR AS TO THE RESULTS WHICH MAY BE OBTAINED THEREFROM, AND ALL IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR AGAINST INFRINGEMENT. Citrix's liability to you with respect to any services rendered shall be limited to the amount actually paid by you. IN NO EVENT SHALL EITHER PARTY BE LIABLE TO THE OTHER PARTY HEREUNDER FOR ANY INCIDENTAL, CONSEQUENTIAL, INDIRECT OR PUNITIVE DAMAGES (INCLUDING BUT NOT LIMITED TO LOST PROFITS) REGARDLESS OF WHETHER SUCH LIABILITY IS BASED ON BREACH OF CONTRACT, TORT, OR STRICT LIABILITY. Disputes regarding this engagement shall be governed by the internal laws of the State of Florida.

Level 3, 1 Julius Avenue, North Ryde, Sydney 2113
02-8870-0800
http://www.citrix.com

Copyright 2012 Citrix Systems, Inc. All rights reserved. Citrix, the Citrix logo, Citrix ICA, Citrix MetaFrame, and other Citrix product names are trademarks of Citrix Systems, Inc. All other product names, company names, marks, logos, and symbols are trademarks of their respective owners.