Citrix XenDesktop: Best Practices with Cisco UCS


Global Alliance Architects Citrix XenDesktop: Best Practices with Cisco UCS

Contents

Overview
An Overview of Cisco UCS
Design Considerations
Prerequisites
Pool Design
    Management IP Pool
    MAC Pools
    Server Pools
    UUID Suffix Pools
    WWNN Pools
    WWPN Pools
Policy Design
    BIOS Policies
    Boot Policies
    Host Firmware Packages
    Local Disk Configuration Policies
    Maintenance Policies (Optional)
    Management Firmware Packages
    Scrub Policies (Optional)
Networking and Storage Design
    VLANs
    Quality of Service
    vNIC Template
    VSANs
    vHBA Templates
Service Profile Configuration
Summary
Appendix A: Automation with Cisco UCS PowerTool for UCSM
Appendix B: Content of the Excel File
References
Product Versions

Overview

Cisco Unified Computing System (Cisco UCS) is an ideal platform for hosting Citrix XenDesktop, as it provides the following key benefits:

Increased scalability and performance for large-scale Citrix XenDesktop solutions. When evaluating a platform for Citrix XenDesktop, a critical factor is the system's ability to deliver virtual machine density while maintaining exceptional end-user performance, especially at large scale. As documented in the Cisco Validated Design Guides for Citrix XenDesktop and in a performance report, Cisco UCS provides linear scalability for Citrix XenDesktop deployments.

Rapid provisioning of virtual desktops. Cisco UCS configurations are highly dynamic, providing flexibility and rapid provisioning of the infrastructure components hosting Citrix XenDesktop. This simplifies the data center operations required to scale a virtual desktop solution.

Increased networking visibility and security for virtual desktops. With a networking-based approach to blade computing, Cisco UCS provides superior control of, and visibility into, the networking components that critically affect virtual desktops and end-user performance. Several Cisco UCS capabilities contribute, such as unified, model-based management, quality of service (QoS) policies applied within Cisco UCS, and massive total bandwidth (up to 80 Gbps) for virtual desktops hosted on Cisco UCS.

Leveraging these capabilities to maximum benefit requires a joint solution that follows Cisco and Citrix best practices, as described in published Cisco Validated Design Guides and other Cisco and Citrix product documents. This white paper highlights specific design and implementation recommendations for configuring Cisco UCS to host Citrix XenDesktop.

Aimed at Cisco UCS and Citrix XenDesktop administrators making design decisions for the joint solution, this white paper focuses on the choices available when configuring Cisco UCS service profiles, pools, policies, and templates. The design considerations in this document apply to all three hypervisors (Citrix XenServer, Microsoft Hyper-V, and VMware vSphere) for hosting Citrix XenDesktop on Cisco UCS; specific notes are added where the recommendations differ by hypervisor. The document assumes basic familiarity with Cisco UCS, and is not meant to serve as a thorough installation, configuration, or scalability guide for Cisco UCS or Citrix XenDesktop. Technical documents that address those aspects of deployment are listed in the references section.

An Overview of Cisco UCS

Cisco UCS is a data center platform that unites compute, network, storage access, and virtualization into a cohesive system designed to increase business agility and reduce data center complexity by consolidating and unifying management functions. Figure 1 provides a high-level architectural view of Cisco UCS.

Figure 1. Cisco UCS architectural overview

Cisco UCS 6200 and 6100 Series Fabric Interconnects provide network connectivity and management capabilities for the entire Cisco UCS system. Up to 20 server chassis are supported per fabric interconnect pair. Each Cisco UCS 5100 Series Blade Server Chassis supports up to eight half-width or four full-width blade servers. Each chassis contains up to two Cisco UCS Fabric Extenders (I/O modules) that function as remote line cards to the fabric interconnects; these modules extend the I/O fabric between the chassis and the interconnects. Chassis management functions are provided in conjunction with the fabric interconnects, eliminating the need for separate chassis management modules. Both Cisco UCS B-Series Blade Servers and Cisco UCS C-Series Rack Servers are supported in Cisco UCS. Half-width Cisco UCS B200 M3 Blade Servers are recommended for hosting XenDesktop, as they provide the right balance of cost and performance. A full list of blade and rack servers can be found here.

Network adapters. The Cisco UCS Virtual Interface Card (VIC) 1240 and 1280 are uniquely designed for Cisco UCS. Optimized for virtualized environments, the VICs present up to 256 dynamic virtual adapters and interfaces to the hypervisor on a given blade server. These virtual interfaces can be configured as Fibre Channel or Ethernet devices, providing great flexibility when designing a Citrix XenDesktop solution. For this reason, most Cisco UCS deployments for Citrix XenDesktop are recommended to include Cisco UCS VIC 1240 or 1280 cards. Refer to the networking design section for more information.

Advanced support for virtualized environments is provided by the Cisco UCS VIC 1280. The card implements Cisco Virtual Machine Fabric Extender (VM-FEX) technology, which unifies virtual and physical networking into a single infrastructure. It provides virtual machine visibility from the physical network and a consistent network operations model for physical and virtual servers. The Cisco UCS VIC 1280 supports VMware VMDirectPath with VMware vMotion technology, enabling it to pass through the hypervisor layer and provide near-bare-metal performance.

Cisco UCS Manager is a crucial logical component. It manages Cisco UCS as a single logical entity through a GUI, a command-line interface, or an XML API that can be integrated with third-party management software. Cisco UCS Manager runs on the fabric interconnects as a fully clustered service in active-standby mode. The fabric interconnects are connected by a pair of 1 Gb ports used for all cluster-related communication; data traffic does not pass through these ports. All configuration is performed within the Cisco UCS Manager GUI.

In Figure 1, each chassis has two connections to each fabric interconnect (A and B). This provides 20 Gbps of effective throughput carrying Fibre Channel over Ethernet (FCoE) traffic between each chassis and the fabric interconnect. Up to eight connections are supported for additional bandwidth, providing up to 160 Gbps of throughput. The traffic between the fabric interconnects and the Cisco Nexus 5548UP Switch is not FCoE traffic, as multi-hop FCoE is not supported by Cisco UCS. Similarly, traffic from the Cisco Nexus 5548UP Switch is sent separately to the upstream local area network (LAN) and storage area network (SAN). It is recommended that the Cisco Nexus 5548UP Switch to SAN controller connection leverage the storage adapter's FCoE ports in order to optimize the port license count and the number of connections required. When hosting Citrix XenDesktop, virtual machine Network File System (NFS) traffic and Fibre Channel traffic for hypervisors booting from the SAN can be carried over the FCoE connection.

Design Considerations

The primary reason Cisco UCS is such a flexible solution for hosting Citrix XenDesktop is its intelligent design. Every aspect of a server's configuration, from firmware revisions and BIOS settings to network and storage profiles, can be assigned through the system's open, documented, standards-based XML API. The easiest and most common means of accessing this API is through the Cisco UCS Manager GUI. Using Cisco UCS Manager, all server attributes are configured into an entity called a service profile.
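The same XML API can also be driven directly by scripts (the paper's Appendix A covers Cisco UCS PowerTool for this purpose). As a minimal sketch, the following Python fragment builds, but does not send, the aaaLogin request document used by the Cisco UCS Manager XML API; in a live system this document would be POSTed to the manager's /nuova endpoint, and the credentials shown are placeholders.

```python
# Build (but do not send) the aaaLogin request document for the
# Cisco UCS Manager XML API. The credentials are placeholders; a real
# client would POST this body to http://<ucsm-ip>/nuova and read the
# session cookie from the outCookie attribute of the response.
from xml.sax.saxutils import quoteattr

def build_aaa_login(username: str, password: str) -> str:
    """Return the aaaLogin XML document as a string."""
    return "<aaaLogin inName=%s inPassword=%s />" % (
        quoteattr(username), quoteattr(password))

if __name__ == "__main__":
    print(build_aaa_login("admin", "example-password"))
```

The quoteattr call ensures credentials containing quotes or angle brackets are escaped correctly instead of corrupting the XML document.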

A service profile consists of all attributes that are to be attached to a blade server, such as the Universally Unique Identifier (UUID), BIOS versions, virtual LAN (VLAN) and virtual SAN (VSAN) memberships, network interface card (NIC) adapter firmware revisions, world-wide node names (WWNNs), world-wide port names (WWPNs), and more. Because these attributes are tied to a service profile, a blade server in Cisco UCS is completely stateless; its identity comes from a service profile. This allows for the rapid provisioning of server resources required for desktop virtualization. For example, administrators can take a configured service profile template and create a desired number of identical service profiles that are attached to blade servers. Within a matter of minutes, blade servers are provisioned, and they can be re-provisioned when updates are made. The design decisions and implementation steps described in this document enable the creation of a service profile (based on a service profile template) that can be applied to Cisco UCS servers hosting Citrix XenDesktop.

Prerequisites

Before configuring pools, policies, and networking and storage templates and applying them to a service profile, several prerequisites must be gathered for the specific environment. Table 1 highlights these requirements. Readers should ensure this information is available before proceeding with configuration.

Table 1. List of prerequisites

Cisco UCS Manager IP address: IP address used to connect to Cisco UCS Manager.

Logical Citrix XenDesktop site and POD number: To create resource pools in Cisco UCS, pick an appropriate site ID (X) and POD ID (Y). For example, when creating a MAC address pool, a MAC address takes the form 00:25:B5:XY:1A:00, where X is the site ID and Y is the POD ID.

VLAN IDs (required for each VLAN; a VLAN is needed for each traffic type recommended for Citrix XenDesktop):
    Management traffic: VLAN used for hypervisor management traffic (typically the VLAN used for server management).
    Virtual machine data traffic: VLAN used for virtual desktop network connections (typically the VLAN used for physical desktops in the organization).
    Virtual machine motion: VLAN used within the server network for live migration traffic (XenServer XenMotion, VMware vMotion, and Hyper-V Live Migration).
    IP storage: VLAN used to access IP storage in the environment (such as an array providing an NFS share).

VSAN IDs and FCoE VLAN IDs: It is assumed that VSANs and FCoE VLANs are created on the upstream SAN switch and the IDs are available within Cisco UCS.

SAN boot target information: Primary and secondary path WWPNs for the primary and secondary SAN boot targets.

SAN zoning (for booting from a SAN): When hypervisors boot from SAN (BFS), it is assumed that the appropriate SAN zoning is configured on the upstream SAN switch.

Blade server management IP addresses: IP address range to assign to each blade server for management. The range must be in the same subnet as the Cisco UCS Manager IP address. The subnet mask and default gateway are also needed.
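The last prerequisite, that blade management addresses share the Cisco UCS Manager subnet, is easy to validate up front. The following sketch uses Python's ipaddress module for the check; the addresses and /24 prefix are illustrative placeholders, not values from this design.

```python
# Sanity-check that blade management IPs fall in the same subnet as the
# Cisco UCS Manager IP, as the prerequisites require. The addresses and
# the /24 prefix below are illustrative placeholders.
import ipaddress

def same_subnet(ucsm_ip: str, mgmt_ip: str, prefix: int) -> bool:
    """True if mgmt_ip lies in the subnet containing ucsm_ip."""
    net = ipaddress.ip_network(f"{ucsm_ip}/{prefix}", strict=False)
    return ipaddress.ip_address(mgmt_ip) in net

if __name__ == "__main__":
    print(same_subnet("10.0.1.10", "10.0.1.50", 24))   # same subnet
    print(same_subnet("10.0.1.10", "10.0.2.50", 24))   # different subnet
```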

Pool Design

Service profiles are built from logical building blocks known as pools, policies, and templates. Pools uniquely identify hardware resources, policies enforce rules, and templates facilitate reuse and rapid deployment within Cisco UCS. Pools and policies are consumed when templates, such as vNIC and vHBA templates, are created. All of these entities are combined when creating a service profile or service profile template. Several types of pools are required when creating a service profile for hosting Citrix XenDesktop: a management IP pool, as well as MAC, server, UUID suffix, WWNN, and WWPN pools. The sections that follow define design best practices for each pool type.

Management IP Pool

Using Cisco UCS Manager, a pool of IP addresses is configured as the Management IP pool under the Admin tab (Figure 2). These IP addresses are ultimately assigned for physical server management. When creating the pool, the IP address range must be in the same subnet as the Cisco UCS Manager IP address.

Figure 2. Management IP pool creation

MAC Pools

The MAC addresses drawn from MAC pools are referenced by vNIC templates. There are several ways to design an appropriate MAC pool structure. The simplest approach is to create a single MAC pool that provides MAC addresses for all vNICs created to host Citrix XenDesktop. This approach is neither ideal nor recommended, since it makes it difficult to pinpoint MAC address ownership when troubleshooting is required. The recommended method is to create one MAC pool for each network, to match the vNIC template design. As shown in Figure 3 and Figure 4, multiple MAC pools are created for hosting Citrix XenDesktop.

Figure 3. MAC pool creation

MAC pools are configured in the LAN tab > Pools section. A starting MAC address and a pool size must be specified. In this example, the first MAC pool starts at MAC address 00:25:B5:XY:1A:00 with a size of 256. The value 1A is increased to 2A for the second pool, to 3A for the third pool, and so on. Figure 4 depicts the creation of multiple MAC pools, matching the number of vNICs to be created from vNIC templates. For example, the first MAC pool (MGMT-A) will be assigned to a vNIC carrying management traffic tied to Fabric Interconnect A, and the IP_STORAGE-B pool will be assigned to a vNIC carrying IP storage traffic tied to Fabric Interconnect B.

Figure 4. Recommended MAC pools
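The per-network MAC pool starting addresses all follow the 00:25:B5:XY:nA:00 scheme described above, so they can be derived mechanically. The sketch below generates them in Python; the pool names mirror the figures, and site 1 / POD 1 are example values.

```python
# Derive the recommended per-network MAC pool starting addresses from the
# 00:25:B5:XY:nA:00 scheme (X = site ID, Y = POD ID, n increments per
# pool). Pool names follow the figures; site/POD values are examples.
def mac_pool_start(site: int, pod: int, pool_index: int) -> str:
    """Starting MAC address for the pool_index-th pool at a site/POD."""
    return "00:25:B5:%X%X:%XA:00" % (site, pod, pool_index)

POOLS = ["MGMT-A", "MGMT-B", "VM_DATA-A", "VM_DATA-B",
         "IP_STORAGE-A", "IP_STORAGE-B", "VM_MOTION-A", "VM_MOTION-B"]

if __name__ == "__main__":
    for i, name in enumerate(POOLS, start=1):
        # site 1, POD 1 -> first pool starts at 00:25:B5:11:1A:00
        print(name, mac_pool_start(1, 1, i))
```

Each pool then spans 256 addresses (…:1A:00 through …:1A:FF), matching the pool size recommended above.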

Server Pools

Server pools group servers of the same blade type that are used for a similar purpose. Creating at least one server pool is recommended; this enables all servers hosting Citrix XenDesktop to be placed in the same pool. Dedicated pools for hosting virtual machines and for Citrix XenDesktop infrastructure components are recommended, particularly if the blade server types differ. For example, the memory capacity required for Citrix XenDesktop virtual desktop servers is typically much larger than that needed for infrastructure servers. Server pools are created under the Servers > Pools > Server Pools tab (Figure 5).

Figure 5. Server pool

Once a server pool is created, it is applied to a server pool policy under the Servers > Policies > Server Pool Policies tab (Figure 6).

Figure 6. Server pool policy

UUID Suffix Pools

A UUID suffix pool is referenced when service profiles are created. A single pool is sufficient for hosting Citrix XenDesktop. For example, a XenDesktop UUID suffix pool can be created with the value 0000-XY0000000001 and a size set to match specific server requirements (Figure 7).

Figure 7. UUID suffix pool

WWNN Pools

WWNN pools are used to derive the node names assigned to individual servers. Referenced by vHBAs when a vHBA template is created, WWNN pools enable a number of individual adapters to be collated and assigned to a single server entity. The pool is configured in the SAN tab > Pools > WWNN Pools section (Figure 8).

Figure 8. WWNN pool

A WWNN block of 20:00:00:25:B5:XX:XX:XX is recommended for assigning node names to Cisco UCS servers. As shown in Figure 9, 20:00:00:25:B5:11:XX:XX is used with a size of 256 hosts (X=1, Y=1). While the size can be set to match the number of hosts in the existing environment, planning for future expansion is recommended when selecting the size.

Figure 9. WWNN block configuration

WWPN Pools

Assigned to individual vHBAs, WWPNs are used for zoning and other storage-related configuration tasks. In this document, two vHBAs are configured; as a result, two WWPN pools are needed. The WWPN pools are created in the SAN tab > Pools > WWPN Pools section.

Figure 10. WWPN configuration
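Like the MAC pools, the WWNN and WWPN starting values embed the site and POD IDs in one octet, and (in the example that follows) the WWPN encodes the fabric and vHBA index in the next octet, such as A1 for the first vHBA on Fabric Interconnect A. A sketch of that derivation, with illustrative site/POD values:

```python
# Derive WWNN and WWPN pool starting values following the numbering
# scheme in the text: one octet carries the site/POD IDs, and for WWPNs
# the next octet encodes fabric letter plus vHBA index (e.g. A1, B1).
# Site 1 / POD 1 are example values.
def wwnn_block(site: int, pod: int) -> str:
    """From-value of the WWNN block 20:00:00:25:B5:XY:XX:XX."""
    return "20:00:00:25:B5:%X%X:00:00" % (site, pod)

def wwpn_pool_start(site: int, pod: int, fabric: str, vhba: int) -> str:
    """Starting WWPN for a vHBA pool on fabric A or B."""
    return "20:00:00:25:B5:%X%X:%s%d:00" % (site, pod, fabric.upper(), vhba)

if __name__ == "__main__":
    print(wwnn_block(1, 1))               # 20:00:00:25:B5:11:00:00
    print(wwpn_pool_start(1, 1, "A", 1))  # 20:00:00:25:B5:11:A1:00
    print(wwpn_pool_start(1, 1, "B", 1))  # 20:00:00:25:B5:11:B1:00
```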

For example, a valid WWPN pool can be created with a value of 20:00:00:25:B5:XY:A1:00, where A1 identifies the first vHBA tied to Fabric Interconnect A, and B1 identifies the first vHBA tied to Fabric Interconnect B.

Figure 11. WWPN block configuration

Note: Throughout this document, sub-organizations are not created in Cisco UCS Manager, for simplicity. If a sub-organization is created for management purposes, pools, policies, and templates will appear under the sub-organization in which the objects are created.

Policy Design

The next step is to create policies. Five policies are recommended for hosting Citrix XenDesktop within the specified design best practices. These policies relate to BIOS settings, firmware packages, local disk configuration, maintenance, and scrubbing.

BIOS Policies

In Cisco UCS, BIOS policies are associated with a service profile. Figure 12 shows the recommended BIOS settings for servers hosting Citrix XenDesktop. These policy settings were chosen with performance and stability in mind, and are not necessarily designed for the lowest power consumption. If optimal power consumption is a requirement, these policies should be revisited; see this guide for details. Create a new BIOS policy in the Servers tab > Policies > BIOS Policies section. Under the Main page of the policy, change Quiet Boot to disabled (Figure 12).

Figure 12. BIOS policies Main tab

Under the policy's Advanced > Processor tab, change the following settings from their platform-default values as shown in Figure 13. These values are recommended specifically for Citrix XenDesktop; the remainder can be left at platform defaults.

Turbo Boost: disabled
Enhanced Intel SpeedStep: disabled
Processor C State: disabled
Processor C1E: disabled
Processor C3, C6, and C7 Report: disabled
CPU Performance: enterprise

Figure 13. BIOS policies Advanced (Processor) tab

Next, under Advanced > Intel Directed I/O, set VT for Directed IO to enabled (Figure 14).

Figure 14. BIOS policies Advanced (Intel Directed I/O)

Under the policy's Advanced > RAS Memory tab, set Memory RAS Config to maximum-performance and enable NUMA (Figure 15).

Figure 15. BIOS policies Advanced (RAS Memory)

Boot Policies

In Cisco UCS deployments hosting Citrix XenDesktop, Fibre Channel boot from SAN (BFS) is recommended for hypervisors. Several steps are needed, including the configuration of a Boot from SAN policy. First, create a new policy in the Servers > Policies > Boot Policies section and configure it as shown in Figure 16. For complete redundancy, configure primary and secondary paths for both the SAN primary and the SAN secondary targets. In some situations, it may be acceptable to use only primary targets for both the primary and secondary SAN settings.

Figure 16. Boot policies (for BFS)

Host Firmware Packages

A host firmware package consists of the adapter, BIOS, and RAID controller packages to be applied to blade servers. Create a host firmware package for each blade type hosting Citrix XenDesktop (Figure 17). If separate blade server models are used for hosting Citrix XenDesktop infrastructure components, a separate package must be created. These firmware versions are applied to the respective servers along with their service profiles.

Figure 17. Host firmware package

Local Disk Configuration Policies

If boot from SAN is used, a local disk policy of No Local Storage should be created in the Servers > Policies > Local Disk Config Policies tab (Figure 18).

Figure 18. Local disk configuration policy No Local Disk (for BFS)

Note: When local disks are used for installing the hypervisor, a local disk policy can be configured instead (Figure 19). Be sure to select the appropriate RAID configuration for the environment.

Figure 19. Local disk configuration policy for Local Disk Boot

Maintenance Policies (Optional)

Create a maintenance policy and select User Ack for the Reboot Policy. This ensures user acknowledgement is required before servers are rebooted.

Figure 20. Maintenance policy User Acknowledgement

Management Firmware Packages

This package consists of the CIMC firmware package to be applied to Cisco UCS servers. Create a new management firmware package for each blade server type and each firmware version used in the environment. Figure 21 depicts the settings for creating a management firmware package for Cisco UCS B200 M3 Blade Servers with the 2.0(3c) firmware version (the latest at the time of this writing).

Figure 21. Creating a management firmware package

Scrub Policies (Optional)

For local storage configurations, it is recommended to create a noscrub policy with the Disk Scrub and BIOS Settings Scrub values set to disabled (Figure 22).

Figure 22. No scrub policy (for Local Boot)

Networking and Storage Design

The network design described in this section is based on a Cisco UCS B200 M3 Blade Server with a single Cisco UCS Virtual Interface Card 1240/1280. Using this adapter, Cisco UCS Manager can dynamically configure virtual interfaces as part of a service profile, and this can be designed in multiple ways for hosting Citrix XenDesktop. The VIC is configured by creating vNIC and vHBA templates and associating them with a service profile, which is in turn associated with a physical server. When creating a service profile, several items must be configured from a networking and storage perspective, including VLANs, quality of service, vNIC templates, VSANs, and vHBA templates.

VLANs

When hosting Citrix XenDesktop with Provisioning Services (PVS), the following VLANs are recommended:

MGMT, used for hypervisor management traffic (typically the same VLAN used for server management)
VM_DATA, used for the virtual desktops' network connections (typically the same VLAN used for physical desktops within the organization)
IP_STORAGE, used specifically to carry NFS traffic hosting PVS-based write cache files for virtual machines
VM_MOTION, used in the server network for live migration traffic (XenServer XenMotion, VMware vMotion, and Hyper-V Live Migration)

The VLANs can be created using Cisco UCS Manager in the LAN tab > LAN Cloud > VLANs section (Figure 23).

Figure 23. VLANs

Note: If virtual machines are spread across multiple VLANs in large-scale Citrix XenDesktop environments, additional VM_DATA VLANs should be created.

Quality of Service

Cisco UCS uses Data Center Ethernet (DCE) to handle all traffic inside the system. DCE bandwidth is allocated based on system classes that are configured system-wide, and specific QoS policies are configured that leverage the settings specified in those system classes. The QoS policies are applied when a vNIC template is created. Applying QoS policies for hosting Citrix XenDesktop guarantees bandwidth to higher-priority traffic, such as NFS write cache traffic over live migration or management traffic. In the recommended design documented in this white paper, the custom QoS system class and Class of Service (CoS) values shown in Figure 24 are configured for use by QoS policies.

Figure 24. QoS system class

Table 2 describes the settings used in Figure 24.

Table 2. QoS system classes

Platinum: disabled. Not used in this design; reserved for other purposes.

Gold: CoS 4, packet drop allowed, weight 3 (30%), MTU 9216. When a Gold QoS policy is used, these settings guarantee 30% of total bandwidth at any given time. For example, a weight of 3 guarantees 3 Gbps if total throughput within the Cisco UCS system is 10 Gbps. In this design, the Gold policy is applied to IP_STORAGE traffic to ensure that NFS-based virtual machine write cache storage has 30% of bandwidth reserved at any given time.

Silver: CoS 2, packet drop allowed, weight best-effort (10%), MTU 9216. When a Silver QoS policy is used, these settings guarantee 10% of total bandwidth at any given time (1 Gbps if total throughput within the Cisco UCS system is 10 Gbps). In this design, the Silver policy is applied to VM_MOTION traffic to ensure live migration traffic has 10% of bandwidth reserved at any given time.

Bronze: CoS 1, packet drop allowed, weight best-effort (10%), normal MTU. Serves the same purpose as the Silver class, but uses a regular MTU size.

Best Effort: CoS any, packet drop allowed, weight 3 (30%), normal MTU. Serves the same purpose as the Gold class, but uses a regular MTU size.

Fibre Channel: CoS 3, packet drop not allowed (non-configurable), weight 2 (20%), FC MTU. Sets a 20% minimum allocation for Fibre Channel traffic, which is used in this design for hypervisor BFS only.

Note: The Fibre Channel QoS priority should be set higher for Hyper-V, as only block-based storage is supported and NFS cannot be used to place virtual machine write cache files.

After the system class values are configured, the following QoS policies should be created and applied to vNIC templates. Each QoS policy consists of:

Priority: one of the system classes configured per Table 2.
Burst: set to 10240 bytes by default.
Rate: specified in Kbps; used when rate limiting must be configured. The design specified in this document recommends rate-limiting management traffic so that it does not exceed 1000000 Kbps (1 Gbps).

- Host control value: a default value of none is used.

Figures 25 through 28 show the creation of QoS policies for each traffic type designed for Citrix XenDesktop.

Figure 25. QoS policy for IP_STORAGE traffic
Figure 26. QoS policy for VM_MOTION traffic

Figure 27. QoS policy for MGMT traffic
Figure 28. QoS policy for VM_DATA traffic

vnic Template

Cisco UCS Manager supports the creation of virtual network interface (vnic) templates. These templates should be used to create vnics that can be associated with a service profile and blade servers. The Cisco UCS VIC card can create up to 256 vnics. As a result, there are multiple ways to design vnics for a given Cisco UCS environment, and the number of vnics created typically differs in each deployment.

A key consideration when creating a vnic (and a vnic template) is whether to take advantage of Cisco UCS hardware failover. When the failover option is enabled, availability is provided at the hardware adapter level in the event that one side of the fabric (fabric interconnect, I/O module, or fabric extender) is down; the hypervisor does not handle any aspect of failover. If the failover option is disabled, failover and availability are configured at the hypervisor level, using bonding or port groups if a distributed virtual switch is in use.

It is highly recommended to use either hardware failover or hypervisor-level bonding, but not both. The use of hardware-level failover is not recommended, since the hypervisor is unaware of hardware-level failover efforts and reports errors stating a lack of redundancy (only one network is visible to the hypervisor). Instead, individual vnics can be created for each fabric (A and B), with hypervisor-level bonding providing redundancy. This document suggests creating one vnic per fabric (A and B) for each VLAN created (Figure 29).

Figure 29. Cisco UCS Virtual Interface Card design for Citrix XenDesktop (Citrix XenServer on Cisco UCS B200 M3 Blade Server)

Two vnics for each traffic type (MGMT, VM_DATA, IP_STORAGE, VM_MOTION) are created and tied to Fabric Interconnect A and Fabric Interconnect B, respectively. A total of eight vnics are presented to the hypervisor. The hypervisor performs bonding and presents four networks that are used by virtual machines and management.

Figure 30 shows the creation of a vnic template in the LAN tab > vnic Templates section. It requires the combination of the attributes created earlier, such as VLANs, MAC pools, and QoS policies. Select the respective policies and attributes created for each traffic type. A target must be selected; in this case, the target is an adapter.
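The eight-vNIC layout described above (four traffic types across two fabrics, bonded in pairs by the hypervisor) can be sketched as a simple cross product. The vnic names follow the document's convention (for example, MGMT-A); the bond names are illustrative.

```python
# Sketch of the Figure 29 layout: one vnic per traffic type per fabric,
# bonded pairwise at the hypervisor into four usable networks.

TRAFFIC_TYPES = ["MGMT", "VM_DATA", "IP_STORAGE", "VM_MOTION"]
FABRICS = ["A", "B"]

def vnic_plan():
    """Return the eight vnic names and the four hypervisor bonds."""
    vnics = [f"{t}-{f}" for t in TRAFFIC_TYPES for f in FABRICS]
    bonds = {f"{t} Bond": [f"{t}-A", f"{t}-B"] for t in TRAFFIC_TYPES}
    return vnics, bonds

vnics, bonds = vnic_plan()
print(len(vnics), "vnics presented to the hypervisor")
for bond, members in bonds.items():
    print(bond, "<-", members)
```

Each bond pairs the fabric-A and fabric-B vnic of one traffic type, so losing either fabric leaves every network reachable through the surviving member.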

In addition to the vnic design, vhbas are created on the VIC. In this design, two vhbas are created for configuring hypervisor boot from SAN with multipathing.

Figure 30. vnic template creation

Note: Selecting MGMT as a native VLAN is required for Citrix XenServer, as Citrix XenServer does not support management traffic over a non-native VLAN. In addition, selecting a target option other than adapter enables advanced configuration of the VIC, such as VN-Link in hardware, where all virtual machine traffic bypasses the hypervisor. This feature is not supported in Citrix XenServer 6.0.2 and was not tested with Hyper-V Server 2008 for this design.

Figure 31 shows all the vnic templates created. These vnics are used when creating service profiles.

Figure 31. vnic template design for Citrix XenDesktop

VSANs

At a minimum, two VSANs must be created (one for Fabric Interconnect A and one for Fabric Interconnect B). During configuration, these are attached, respectively, to the vhbas used by the multipathing-capable hypervisor BFS. Although VSANs are made available within Cisco UCS, they must also be created on an upstream SAN switch (such as the Cisco Nexus 5548UP Switch). Figures 32 and 33 show VSAN-A and VSAN-B creation for Fabric Interconnects A and B. The VSAN ID should match the VSAN ID created on the upstream SAN switch, as should the FCoE VLAN ID created on the upstream SAN switch.

Note: As a best practice, the FCoE VLAN ID is created in a higher number range that encodes the VSAN ID, for easier management. For example, FCoE VLAN ID 2012 is created for VSAN ID 12.

Figure 32. VSAN for Fabric Interconnect A
Figure 33. VSAN for Fabric Interconnect B
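The FCoE VLAN naming convention in the note above (VSAN 12 paired with FCoE VLAN 2012, and VSANs 100/101 paired with 2100/2101 in Appendix B) amounts to adding a fixed base of 2000 to the VSAN ID. A minimal sketch, assuming that base:

```python
# Sketch of the FCoE VLAN ID convention: FCoE VLAN = 2000 + VSAN ID.
# The base of 2000 is inferred from the examples in this document;
# the upper bound keeps the result inside the usable VLAN ID range.

FCOE_VLAN_BASE = 2000

def fcoe_vlan_for_vsan(vsan_id: int) -> int:
    if not 1 <= vsan_id <= 2093:  # 2000 + 2093 = 4093, the usual VLAN ceiling
        raise ValueError("VSAN ID out of range for this convention")
    return FCOE_VLAN_BASE + vsan_id

print(fcoe_vlan_for_vsan(12))   # -> 2012
print(fcoe_vlan_for_vsan(100))  # -> 2100 (VSAN-A in Appendix B)
```

Keeping the mapping deterministic makes it obvious at a glance which FCoE VLAN carries which VSAN on both the fabric interconnects and the upstream switch.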

vhba Templates

Similar to vnic templates, vhba templates are created to include the VSANs. In this case, two vhba templates are created, tying one to Fabric Interconnect A and one to Fabric Interconnect B (Figure 34). Other attributes attached to the vhba template include the VSAN and WWNN pool.

Figure 34. vhba template configuration

Multipathing or bonding configurations are possible when configuring hypervisor BFS. When possible, multipathing is recommended to achieve higher utilization by leveraging both active/active paths.

Note: In this design, Fibre Channel BFS is recommended, as iSCSI BFS is not supported with the current version of Citrix XenServer (6.0.2).

Service Profile Configuration

After all pools, policies, vnic templates, and vhba templates are created, a service profile can be created and associated with a blade server. Testing should be conducted to confirm that all applied settings are performing as expected. Upon completion, a service profile template can be created from the service profile. This template can be used to create multiple service profiles that can be applied to individual blade servers or server pools.

The following steps create a service profile based on the objects created earlier in this document. These objects are associated with a server, enabling the server to obtain the identity defined by the profile. First, apply the UUID pool as shown in Figure 35.

Figure 35. Creation of a service profile

Next, apply the two vhba templates and set the WWPN pool assignments (Figures 36 and 37). If vhba templates are not created, individual vhbas can be created at this step. Creating templates up front enables better manageability.

Figure 36. Applying the first vhba template

Figure 37. Applying the second vhba template
Figure 38. vhba assignment

Similar to the vhba template assignment, the next step is to assign all eight vnic templates to the service profile. Figure 39 shows a vnic template being applied.

Figure 39. Applying the vnic template configuration

Figure 40 shows a view of the vnic template assignments.

Figure 40. vnic templates

The next step is to assign the vnic and vhba placements. It is important to note that this order is internal to the Cisco UCS system and does not determine the order in which the vnics are seen by the hypervisor. To match a specific vnic with a virtual interface on the hypervisor, it is recommended to perform a one-time, manual check after the hypervisor is installed. (The MAC addresses used earlier can help identify the adapter within the hypervisor. For example, the 1A in the MAC address corresponds to MGMT-A, and the 3B in the MAC address corresponds to VM_MOTION-B.)
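The one-time MAC check described above can be scripted. The document gives only two markers (1A for MGMT-A, 3B for VM_MOTION-B); the remaining table entries below, the 00:25:B5 prefix, and the position of the marker octet are assumptions made for illustration and should be adjusted to match the MAC pools actually defined.

```python
# Sketch of a MAC-to-vnic lookup for the manual hypervisor check.
# Only 1A and 3B are confirmed by the document; other entries and the
# marker octet position (index 4) are illustrative assumptions.

MARKER_TO_VNIC = {
    "1A": "MGMT-A",       "1B": "MGMT-B",        # 1A confirmed; 1B assumed
    "2A": "VM_DATA-A",    "2B": "VM_DATA-B",     # assumed
    "3A": "VM_MOTION-A",  "3B": "VM_MOTION-B",   # 3B confirmed; 3A assumed
    "4A": "IP_STORAGE-A", "4B": "IP_STORAGE-B",  # assumed
}

def identify_vnic(mac: str, marker_index: int = 4) -> str:
    """Map a MAC address to its vnic via the embedded marker octet."""
    octets = mac.upper().split(":")
    return MARKER_TO_VNIC.get(octets[marker_index], "unknown")

print(identify_vnic("00:25:B5:00:1A:01"))  # -> MGMT-A
```

Running this against the MAC list reported by the hypervisor (for example, the output of ip link) turns the manual cross-check into a repeatable step.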

Figure 41. vnic and vhba placement

Next, specify the boot policy created to define the boot order for the blade server.

Figure 42. Specifying the server boot order

Next, apply the maintenance and operational policies.

Figure 43. Applying the maintenance policy
Figure 44. Applying server assignments and firmware management policies

Figure 45. Applying operational policies

Once the service profile is created, it can be associated with a blade server for initial testing. Upon completion of testing, the next step is to create a service profile template from the service profile by right-clicking the service profile (Figure 46).

Figure 46. Creating a service profile template

Typically, an updating template is created. With an updating template, all ongoing changes made to policies and objects are applied to the template, as well as to the profiles created from it. This provides the simplest way to push changes to an environment. It is therefore important to evaluate settings before making changes, as they will be applied directly to the blade servers upon reboot.

Figure 47. Creating an updating template

Once a service profile template is created, it can be used to create multiple service profiles that can be applied to blade servers (Figure 48).

Figure 48. Creating service profiles from a template

Summary

This white paper demonstrated the simplicity of scaling deployments. After a service profile template is created using the recommended best practices for hosting Citrix XenDesktop on Cisco UCS, it is easy to scale the infrastructure to multiple servers for hosting virtual machines.

Appendix A: Automation with Cisco UCS PowerTool for UCSM

Following the best practices outlined in this document, a Microsoft Windows PowerShell script was written to automate the creation and configuration of many of the pools, policies, and templates required in a Citrix XenDesktop environment, culminating in the creation of a Cisco UCS service profile template. The template can be used to create one or more service profiles that are ultimately applied to the physical servers used to host virtual desktops created with Citrix XenDesktop.

Designed primarily for proof-of-concept (POC) environments, the script assumes a relatively clean Cisco UCS environment with very little configuration already completed. It does not check for existing configurations; if a conflict arises, the script issues an error.

The script takes the name of an Excel configuration file containing the information needed for configuration. An optional switch is available to specify that output logs should be sent to the console. For example:

.\UCS_Config_CitrixXDBP.ps1 -ExcelFile XenDesktopBP.xlsx -ToConsole

Starting around line 350, the script includes a commented-out section that can output all variables from the worksheet, along with derived variables, to help with troubleshooting. Another commented-out section, starting around line 505, can be used to clean up a Cisco UCS Platform Emulator environment prior to the creation of the pools, policies, and templates.

The script creates the items listed in Table 3, using input from the Excel configuration worksheet.

Table 3.
Items created

Templates:
- Service profile template
- 8 vnic templates (one each for the management, virtual machine data, motion (vmotion), and IP storage networks, for fabrics A and B)
- 4 vhba templates (of the 4 vhbas, only 2 are used by the template created in this script)

Policies:
- BIOS policy
- Host firmware package policy
- Local disk configuration policy
- Maintenance policy
- Network control policy
- SAN boot policy
- Server pool policy
- Management firmware package policy
- 4 QoS policies (one each for the management, virtual machine data, motion (vmotion), and IP storage networks)

Pools:
- Management IP pool
- Server pool
- UUID pool
- WWNN pool
- 4 WWPN pools (for four HBAs; only 2 are used by the template created in this script)
- 8 MAC pools (one each for the management, virtual machine data, motion (vmotion), and IP storage networks, for fabrics A and B)

Other:
- QoS system class settings
- 2 VSANs (one each for fabrics A and B)
- 4 VLANs (one each for the management, virtual machine data, motion (vmotion), and IP storage networks)

Two parameters can be used at script launch:

- ExcelFile, followed by the name of the Excel file. This file must be in the same directory as the script. The example below reads the XenDesktopBP.xlsx file and outputs the status of the configuration to the PowerShell console:

  .\UCS_Config_CitrixXDBP.ps1 -ExcelFile XenDesktopBP.xlsx -ToConsole

- ToConsole, which, if specified, prints output to the console. If the ToConsole switch is not used, only the name of the Excel configuration file needs to be specified. The example below reads the XenDesktopBP.xlsx file and creates the UCSM_Configuration_Script_Log.txt output file:

  .\UCS_Config_CitrixXDBP.ps1 XenDesktopBP.xlsx

Before using the script, a few prerequisites must be met.

Install Cisco UCS PowerTool. First, download and install Cisco UCS PowerTool from http://developer.cisco.com/web/unifiedcomputing/powertoolexamples. Scroll down to the Cisco UCS and Citrix XenDesktop Configuration Tool section and click the link to download the CitrixXDBP_Powertool.zip file. In a Microsoft Windows environment with Microsoft Excel installed, extract the files and run the CiscoUcs-PowerTool-0.9.10.1 application. By default, an icon for Cisco UCS PowerTool is installed on the desktop.

Edit the Excel configuration file. Save the UCS_Config_CitrixXDBP.ps1 script and XenDesktopBP.xlsx files to a location accessible from the Windows host where Cisco UCS PowerTool was installed. Open the XenDesktopBP.xlsx file and edit the following values. Once edited, save the file in the same directory in which the script is located.

Table 4. Values to edit

IP information:
- Cisco UCS Manager IP address
- Management IP pool starting and ending IP addresses, subnet mask, and default gateway. These IP addresses are assigned to individual server blades for management purposes (one per blade) and must reside in the same subnet as the Cisco UCS Manager IP address.

Site information:
- Service profile template name (single word) and description
- VLAN information for the management network, virtual desktop data network, motion network (vmotion), and IP-based storage network

SAN boot target information:
- If SAN boot is not used, the defaults in the spreadsheet can be used.

Launch Cisco UCS PowerTool. Using the desktop icon created during installation, launch a PowerShell window. By default, PowerShell is launched with an ExecutionPolicy of RemoteSigned. The included UCS_Config_CitrixXDBP.ps1 script is signed, so the script should run using the default policy. If changes

are made to the script, it may be necessary to change the ExecutionPolicy to Unrestricted to allow the script to run.

Change to the directory where the script and Excel configuration file are located:

cd .\citrixxdbp

Execute the script, supplying the name of the Excel configuration file and, optionally, the switch to output messages to the console. During execution, you are prompted for the username and password for the Cisco UCS Manager software:

.\UCS_Config_CitrixXDBP.ps1 -ExcelFile XenDesktopBP.xlsx -ToConsole

Modify pool and policy values. The BIOS policy created by this script assumes performance is more important than power consumption. If this is not the case, modify the CitrixHD_Host BIOS policy with the desired values. In addition, a SAN_boot policy is created using values contained in the Excel configuration file. If SAN boot is not used, an alternative boot policy should be created and the CitrixXDTemplate service profile template modified to use the new boot policy.

Verify host and management firmware packages. Host and management firmware packages were created and added to the CitrixXDTemplate service profile template; however, no firmware has been added to these packages. Open and modify the CitrixXD host and management firmware packages as necessary to ensure the proper firmware is used in the environment.

Add blade servers to the server pool. The CitrixXD_Server_Pool server pool was created and added to the CitrixXDTemplate service profile template; however, no servers have been added to the pool. Open and modify the pool as necessary to include the server blades that will host Citrix XenDesktop hosted virtual desktop machines.

Create service profiles from the template. From the Servers tab within Cisco UCS Manager, click the service profile template named CitrixXDTemplate. In the Actions pane, click Create Service Profiles From Template.
Assign a naming prefix to be used for the new service profiles, and indicate the number of new service profiles to create. You should now have a number of service profiles created and ready to be used in a Citrix XenDesktop environment.
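When creating service profiles from a template, Cisco UCS Manager appends a sequence number to the naming prefix you supply. The sketch below illustrates the resulting names; the prefix and count are illustrative, not values mandated by this design.

```python
# Sketch of service profile naming when cloning from a template:
# UCS Manager appends a sequence number to the supplied prefix.
# Prefix "CitrixXD-" and count 4 are illustrative values.

def profile_names(prefix: str, count: int, start: int = 1):
    """Return the service profile names a template would generate."""
    return [f"{prefix}{n}" for n in range(start, start + count)]

print(profile_names("CitrixXD-", 4))
# -> ['CitrixXD-1', 'CitrixXD-2', 'CitrixXD-3', 'CitrixXD-4']
```

Choosing a prefix that ends in a separator (here, a hyphen) keeps the generated names readable when the count grows.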

Appendix B: Content of the Excel File

UCS Manager Information
- UCSM IP: 172.16.213.129

Management IP Pool (ext-mgmt)
- From: 172.16.213.150
- To: 172.16.213.160
- Subnet Mask: 255.255.255.0
- Default Gateway: 172.16.213.1
*All IP addresses in the management IP pool must be in the same subnet as the IP address of the fabric interconnect.

Site Information
- Site ID: 1
- Site Description: Citrix XenDesktop Site 1
- Pod ID: 1
- Pod Description: Pod 1
- Service Profile Template Name: CitrixXDTemplate
- Description: Citrix XenDesktop Service Profile Template

VLANs
Name        VLAN Number  Description
MGMT        100          VLAN used for Hypervisor Management Network
VM_DATA     200          VLAN used for Virtual Desktop Data Network
VM_MOTION   300          VLAN used for VM Motion Network
IP_STORAGE  400          VLAN used for access to IP-based Storage

VSANs
Fabric  Name    VSAN Number  FCoE VLAN
A       VSAN-A  100          2100
B       VSAN-B  101          2101
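The footnote on the management IP pool (all pool addresses must share the fabric interconnect's subnet) can be checked before running the configuration script. A minimal sketch using the standard ipaddress module, with the values from the worksheet above:

```python
# Sketch: validate that the ext-mgmt IP pool sits in the same subnet
# as the UCSM / fabric interconnect IP. Values come from the Appendix B
# worksheet; any other values are caller-supplied.

import ipaddress

def pool_in_ucsm_subnet(ucsm_ip, pool_start, pool_end, netmask):
    """True if both ends of the pool fall in the UCSM subnet, in order."""
    net = ipaddress.ip_network(f"{ucsm_ip}/{netmask}", strict=False)
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    return start in net and end in net and start <= end

print(pool_in_ucsm_subnet("172.16.213.129",
                          "172.16.213.150", "172.16.213.160",
                          "255.255.255.0"))  # -> True
```

Catching a mis-typed pool range here is cheaper than discovering it after the script has already created the management IP pool in Cisco UCS Manager.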

SAN Boot Target Information
- Primary Target, Primary Path: 20:00:00:00:00:00:00:00
- Primary Target, Secondary Path: 20:10:00:00:00:00:00:00
- Secondary Target, Primary Path: 20:20:00:00:00:00:00:00
- Secondary Target, Secondary Path: 20:30:00:00:00:00:00:00

References

- Create Pools to Simplify Blade Management in Cisco UCS: http://www.cisco.com/en/us/products/ps10281/products_configuration_example09186a0080ae0f40.shtml#stepscreateuuidpools
- Cisco UCS Manager Configuration Common Practices and Quick-Start Guide: http://www.cisco.com/en/us/prod/collateral/ps10265/ps10281/whitepaper_c11-697337.pdf
- Cisco UCS Manager CLI Configuration Guide, Release 1.3(1): http://www.cisco.com/en/us/docs/unified_computing/ucs/sw/cli/config/guide/1.3.1/cli_config_guide_1_3_1.pdf
- Configure BIOS Policy for Cisco UCS: http://www.cisco.com/en/us/products/ps10281/products_configuration_example09186a0080b36538.shtml
- Cisco Unified Computing System BIOS Settings: http://www.cisco.com/en/us/prod/collateral/ps10265/ps10281/whitepaper_c07-614438.pdf
- UCS QoS Configuration Example: http://www.cisco.com/en/us/products/ps10278/products_configuration_example09186a0080ae54ca.shtml

Product Versions

The following product versions were used when creating this document:
- Citrix XenDesktop 5.6
- Citrix XenServer 6.0.2
- Cisco UCS 2.0(3c)

Revision History

- 1.0, Initial document. Bhumik Patel, Alliance Engagement Architect, Citrix Systems; Chris Carter, VXI Consulting Systems Engineer, Cisco Systems. October 9, 2012.
- 1.1, Appendix additions. Bhumik Patel, Alliance Engagement Architect, Citrix Systems; Chris Carter, VXI Consulting Systems Engineer, Cisco Systems. December 15, 2012.

Corporate Headquarters
Citrix Systems, Inc., 851 West Cypress Creek Road, Fort Lauderdale, FL 33309, USA. T +1 800 393 1888; T +1 954 267 3000

Silicon Valley Headquarters
Citrix Silicon Valley, 4988 Great America Parkway, Santa Clara, CA 95054, USA. T +1 408 790 8000

EMEA Headquarters
Citrix Systems International GmbH, Rheinweg 9, 8200 Schaffhausen, Switzerland. T +41 52 635 7700

India Development Center
33 Ulsoor Road, Bangalore 560042, India. Phone +91 80 39541000; Fax +91 80 39541060

Online Division Headquarters
7414 Hollister Avenue, Goleta, CA 93117, USA. Phone 805-690-6400; Fax 805-690-6471

Pacific Headquarters
Citrix Systems Hong Kong Ltd., Suite 6301-10, 63rd Floor, One Island East, 18 Westland Road, Island East, Hong Kong, China. T +852 2100 5000

Latin America Headquarters
2525 Ponce de Leon, Suite 1100, Coral Gables, FL 33134, USA. Phone 786-449-3700; Fax 786-449-3701

UK Development Center
Citrix Systems UK Ltd., Chalfont Park House, Chalfont Park, Gerrards Cross, Bucks SL9 0DZ, United Kingdom. Phone +44 (0)1753 276200; Fax +44 (0)1753 276600

About Citrix
Citrix Systems, Inc. (NASDAQ:CTXS) transforms how businesses and IT work and how people collaborate in the cloud era. With market-leading cloud, collaboration, networking and virtualization technologies, Citrix powers mobile workstyles and cloud services, making complex enterprise IT simpler and more accessible for 260,000 organizations. Citrix products touch 75 percent of Internet users each day, and the company partners with more than 10,000 companies in 100 countries. Annual revenue in 2011 was $2.21 billion. Learn more at www..

© 2013 Citrix Systems, Inc. All rights reserved. Citrix, Citrix Receiver, Citrix CloudGateway, Citrix ShareFile, HDX and Citrix XenDesktop are trademarks or registered trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. All other trademarks and registered trademarks are property of their respective owners.
0213/PDF