5.6 Microsoft Hyper-V 2008 R2 / SCVMM 2012
5,000 Users
Contributing Technology Partners:

Table of Contents

EXECUTIVE SUMMARY
ABBREVIATIONS AND NAMING CONVENTIONS
KEY COMPONENTS
SOLUTIONS ARCHITECTURE
USERS AND LOCATIONS
  Enterprise Campus Datacenter (3,775 users)
  Large Branch Office (525 Users)
  Small Branch Office (150 users)
  Remote Access Users (600 Users)
NETWORK INFRASTRUCTURE
  Network Design
  Datacenter and Remote Office LAN Network Architecture Overview
  WAN Network Architecture Overview
STORAGE INFRASTRUCTURE
  Storage Planning
  Storage Deployment
  Infrastructure Storage
  VDI Storage
COMMON INFRASTRUCTURE
  Infrastructure Deployment Methodology
  Physical Common Infrastructure
  Virtualized Common Infrastructure
  Common Infrastructure Services Overview
  DNS
  DHCP
  Hyper-V and SCVMM Infrastructure
  XenDesktop Infrastructure
MODULAR VDI INFRASTRUCTURE
  Infrastructure Deployment Methodology
  Modular Block Overview
  Modular Block Sizing
  Modular Deployment Design
  Modular Block Infrastructure
  Hyper-V Virtualization for VDI
  SCVMM for VDI
    SCVMM Host
    SCVMM Cluster
    SCVMM Library File Server
    SCVMM SQL
  Provisioning Services (PVS) for VDI
    PVS Server Networking
    PVS Farm
    PVS File Servers
    PVS SQL
  User Profile Management
  Multi-Site Infrastructure
    Branch Offices
    Remote Access Users
TEST METHODOLOGY
  TEST MILESTONES
  TEST TOOLS
    Session Launching
    Performance Capturing
    In-Session Workload Simulation
RESULTS
  PERFORMANCE RESULTS
    Performance Results - Boot Storm
      PVS for VDA Performance (Boot Storm)
      SCVMM for VDA Performance (Boot Storm)
    Performance Results - Test Run
      Hyper-V for VDA Performance (Test Run)
      PVS for VDA Performance (Test Run)
      SCVMM for VDA Performance (Test Run)
      SCVMM for VDA Library Server File Server Performance (Test Run)
      Multi-Site Performance (Test Run)
ADDITIONAL TESTING: PVS WITH PERSONAL VDISK (PVD)
  Objective
  Results
LESSONS LEARNED
CONCLUSIONS
APPENDIX
  DOCUMENTS REFERENCES
  HARDWARE CONFIGURATION
    Active Directory Physical Domain Controller Configuration
    Hyper-V Cluster Pool Specifications
    Hyper-V Host for Virtual Desktops Configuration (Host #1)
    Hyper-V Host for Virtual Desktops Configuration
    PVS Servers Configuration for each Modular Block
    SCVMM Host for Virtual Desktops Configuration
  HARDWARE SPECIFICATIONS
    Servers
    Storage Systems
    Network Switches
    Junipers
    Network Appliances AGEE
    Network Appliances SDX
    Network Appliances BR VPX
    Network Appliances Repeater
    Network Device
  MULTI-SITE PERFORMANCE (TEST RUN)
    BR VPX - Performance / Utilization
    SDX VPX Instance for BR VPX

Executive Summary

This reference architecture documents the design, deployment, and validation of a Citrix Desktop Virtualization solution that leveraged best-of-breed hardware and software vendors in a real-world environment. The design included Microsoft Hyper-V and SCVMM 2012, HP blade servers, NetApp storage arrays, and Cisco networking. Five modular blocks (or 5,050 virtual desktops) served users divided into the following groups: 3,775 in the Datacenter, 675 in two Branch Offices, and 600 remote users.

This Desktop Virtualization reference architecture was built with the following goals:
- Leverage a modular design to allow for linear scalability and growth by adding additional modular blocks
- Design an environment to be highly available and resilient
- Architect a virtual desktop solution that can support users in different geographic locations, such as branch office workers and remote access users

This Citrix Desktop Virtualization reference architecture was tested using the industry-standard Login VSI benchmark at the medium workload. Below are the high-level notable findings from the deployment:
- Citrix XenDesktop 5.6 delivered a resilient solution with Hyper-V and SCVMM 2012 at 5,000 hosted VDI desktops.
- SCVMM has the capability to support 2,000 desktops per host. When deploying a clustered SCVMM 2012 server, we found that supporting 1,000 VMs in our environment was the optimal configuration for minimal impact on deployment time.
- Citrix Personal vDisk (PvD) in a Hyper-V clustered environment provided the benefits of desktop personalization while avoiding the increased server utilization of dedicated desktops.
- Hyper-V failover clustering proved to be a robust infrastructure that remained highly available when a node failed during testing.
- The Citrix modular block architecture was validated to provide linear scalability of the VM architecture. This allows an environment to scale to large numbers by duplicating simple modular blocks.
- HP blade servers were able to easily support a large-scale deployment of virtual desktops, offering a balance of power, efficiency, and performance for the end customer.

This reference architecture provides design considerations and guidelines for a VDI deployment.

Abbreviations and Naming Conventions

AG - Access Gateway
AGEE - Access Gateway Enterprise Edition
BR - Branch Repeater
CIDR - Classless Inter-Domain Routing
CSV - Cluster Shared Volume
HDX - High Definition Experience
GPO - Group Policy Object
ISL - Inter-Switch Link
KMS - Key Management Server
NS - NetScaler
NTP - Network Time Protocol
PvD - Personal vDisk
PVS - Citrix Provisioning Services
SCVMM - Microsoft System Center Virtual Machine Manager
U - Citrix User Profile Manager
VDA - Virtual Desktop Agent
VDI - Virtual Desktop Infrastructure
vDisk - Virtual Disk (Provisioning Services streamed VM image)
VPX - Virtual Appliance
XDC - XenDesktop Controller

Key Components

Software
- VDI Desktop Broker: Citrix XenDesktop 5.6
- VDI Desktop Provisioning: Citrix PVS 6.1 w/ Hotfix 3
- Endpoint Client: Citrix Receiver for Windows 3.1
- User Profile Management: Citrix User Profile Manager 4.1
- VDI Personalization: Citrix Personal vDisk
- Workload Generator: Login VSI 3.6
- Virtual Desktop OS: Microsoft Windows 7 SP1 x86
- Hypervisor Management: Microsoft SCVMM 2012 Update Rollup 2
- Database Server: Microsoft SQL 2008 R2
- Server Operating System: Microsoft Windows Server 2008 R2 SP1
- VDI Hypervisor: Microsoft Windows Server 2008 R2 SP1 with Hyper-V Role

Hardware
- Blade Servers: HP BL460c G6 (Infrastructure), HP BL460c G7 (Virtual Desktop Hypervisors)

Network Appliances
- WAN Optimization: Citrix Branch Repeater 8800, SDX, and VPX
- Remote Access: Citrix NetScaler / Access Gateway Enterprise

Network Devices
- Backbone Switch: Cisco Nexus 7010 (eight cards, 32 x 10GbE ports each)
- Workgroup Switches: HP H3C 5820, 5810
- Firewall/Routers: Juniper SRX240 Router

Storage Systems - NetApp FAS Series Storage
- FAS3240: Infrastructure / User Profile Storage
- FAS3270: Virtual Desktop Storage
- FAS3270: PVS and SCVMM Infrastructure

Additional detail can be found in the appendix under Hardware Specifications.

Solutions Architecture

Designing a Solutions Architecture to achieve the required scale involved a significant amount of planning for the systems and components in the environment. The first step in creating the conceptual architecture was to determine the number of users in the environment and the required services. Each of the sections below contains a description of the major elements of the environment, as well as sizing and design considerations for those elements.

Figure 1: Full Environment with VDI and Remote Access Services

We also wanted to build a common block architecture that could be used to scale the solution more easily. The modular block is a concept that will be used throughout this document. A modular block is defined as a set of virtual desktops along with all of the components required to run that particular set of desktops. The high-level architecture shown in Figure 2 illustrates the components involved.

Common infrastructure is the infrastructure put into place to run the entire project, or even the datacenter. It consists primarily of services that are probably already in place in an environment, including AD, DNS, NTP, DHCP, and other services. In our environment, this included a cluster of hypervisors to provide those services and an instance of SCVMM to manage that cluster.

The shared XenDesktop infrastructure components are those that are required for virtual desktops to execute in the environment, but that can be scaled and used for any number of virtual desktops deployed. For example, a Citrix License Server must exist in an environment, but this can easily be used for a deployment of any size. In this case, it included provisioning servers for the desktops as well.

The modular blocks, in this environment, consist of the hypervisor hosts for the virtual desktops along with the System Center Virtual Machine Manager (SCVMM) management servers for those desktops. Each block of 1,010 desktops can simply be added to the common and shared infrastructure as needed. The multi-site infrastructure also scales across multiple blocks, and will scale up as additional users are added and as the appliances allow.

This Reference Architecture includes the following components:
- Users and Locations
- Network Infrastructure
- Storage Infrastructure
- Common Infrastructure
- Modular VDI Infrastructure
- Multi-site Infrastructure
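The linear-scaling property of the block design can be illustrated with a small sizing sketch. The ratios below (1,010 desktops per block, one clustered SCVMM per block, one three-server PVS farm per two blocks) come from this document; the function and constant names are illustrative only and are not part of any product or tool.

    import math

    DESKTOPS_PER_BLOCK = 1010     # two 8-node Hyper-V clusters x 505 VMs in this design
    BLOCKS_PER_PVS_FARM = 2       # one 3-server PVS farm streams two modular blocks

    def size_environment(target_desktops: int) -> dict:
        """Rough, illustrative sizing of a modular VDI environment."""
        blocks = math.ceil(target_desktops / DESKTOPS_PER_BLOCK)
        return {
            "modular_blocks": blocks,
            "hyper_v_clusters": blocks * 2,               # two 8-node clusters per block
            "scvmm_clusters": blocks,                     # one clustered SCVMM per block
            "pvs_farms": math.ceil(blocks / BLOCKS_PER_PVS_FARM),
            "provisioned_desktops": blocks * DESKTOPS_PER_BLOCK,
        }

    # 5,000 target users -> 5 blocks, 10 Hyper-V clusters, 3 PVS farms, 5,050 desktops
    print(size_environment(5000))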

Figure 2: High-Level Infrastructure Architecture

Users and Locations

Enterprise Campus Datacenter (3,775 users)
The datacenter location required virtual desktop services with centralized system and desktop image management. The components selected to deliver these services were Citrix XenDesktop virtual desktops streamed by Provisioning Services 6.1, virtualized on Hyper-V, managed by Microsoft SCVMM 2012 with Update Rollup 2 (UR2), and with shared storage hosted on NetApp FAS3200 series storage. NetScaler AGEE and Branch Repeater SDX appliances were selected to provide remote access and acceleration services for all remote branch and telecommuting users.

Large Branch Office (525 Users)
Users in the large branch office location needed secure and accelerated remote access to the datacenter-based virtual desktops. While having virtual desktops at a branch office of this size is a possibility, one of the requirements was ease of management and redundancy of infrastructure. The easiest way to meet that requirement was to have all virtual desktops maintained in the datacenter. The components selected to provide connection acceleration services for these remote desktops utilize Citrix Branch Repeater technology: BR 8800 series appliances at the branch location and BR-SDX appliances at the Datacenter.

Small Branch Office (150 users)
Users in the small branch office location also needed secure and accelerated remote access to datacenter-based virtual desktops. We selected Citrix Branch Repeater components, providing for a BR-VPX (virtual appliance) at the branch location to connect to the existing BR-SDX appliances in the Datacenter.

Remote Access Users (600 Users)
A Citrix NetScaler with Access Gateway appliance was chosen to provide secure remote-access services because of its simple integration with a Citrix XenDesktop VDI infrastructure. Remote-access users connect to a NetScaler/Access Gateway appliance using the Citrix Receiver application, just like all other users connecting to the infrastructure.

Network Infrastructure

The next consideration for this environment is the network architecture and design. The network architecture included, but was not limited to, creation of IP address requirements, VLAN configurations, required IP services, and server network configurations. Considerations regarding IP allocation, IP block assignment, and WAN routing were extremely important in ensuring that the design maintained its modularity while still being able to scale appropriately.

Network Design
When planning the network environment, one must determine how many nodes are needed at the beginning of the project and how many might be added throughout the lifetime of the project. Using this information, we can begin to plan the IP address blocks.

It is desirable to employ a modular approach to network VLAN design. Traffic separation simplifies VDI IP planning and alleviates bandwidth contention concerns. If possible, create a separate VLAN for each major traffic type. For example: a Storage VLAN for storage traffic (that is, iSCSI, NFS, or CIFS), DMZs for certain external incoming traffic types, a server management VLAN (which may include Lights-Out capabilities and log-gathering mechanisms), and Guest VLANs for virtual desktops. This type of design approach keeps Layer-2 broadcasts to a minimum while not overutilizing the CPU and memory resources of network devices.

Design Considerations:
- To provision 1,000 desktops and accommodate growth in chunks of 200 desktops, using multiple aggregated blocks of /24 networks (254 hosts each) is a more flexible approach than utilizing larger /23 (510 hosts) IP blocks. To grow the VDI IP network environment in blocks of 400-500 users at a time, consider a larger /23 network.
- Allocate blocks of IP addresses according to what can be served logically from the Virtual Desktop administrator's perspective (gradual growth and scalability), along with what can be provisioned within the company's IT network-governance policies.
- To account for overhead and future headcount growth, as well as covering IP needs for services, allocate additional IP addresses as you grow. Planning the network design with growth and a buffer considered, blocks can be aggregated in chunks of /24s, /23s, and /22s (1,022 hosts) as needed. In addition, CIDR supernetting of the IP blocks can be utilized as required.

Datacenter and Remote Office LAN Network Architecture Overview
A main core switch, the Cisco Systems Nexus 7010 with eight cards of 32 10GbE ports each, provided the Datacenter multilayer switching and routing services. This switch also provided all routing, switching, and security services for the rest of the environment. H3C/HP 5820 10GbE switches served other 10GbE service ports. Also, 1GbE ports were served by H3C/HP 5810 1GbE switches with 10GbE uplinks.

For Branch Office sites, workgroup switching and routing were required. The 1GbE ports required were provided by H3C/HP 5810 1GbE switches, which incorporated 10GbE uplinks to the core infrastructure.

WAN Network Architecture Overview
The planning for the multisite WAN test environment included planning for VLAN and other network-based services; supporting both virtualized and physical systems; and providing for WAN, DMZ, and firewall connectivity. Planning also included placing network service appliances, such as Branch Repeater and NetScaler systems, in correct, logical network locations.

WAN routing at the datacenter was provided by a single Cisco core switch, as mentioned above. Appliance-to-appliance ICA optimization was required for XenDesktop virtual desktop access across the environment. To meet this requirement, we deployed BR-SDX appliances at the Datacenter and BR appliances (8800 series and virtual appliances) at each of the Branch Office locations. Branch site Layer-3 routing and edge services were provided by Juniper SRX240 full-service router/firewall devices.

A Branch Repeater 8800 series appliance was selected for the large branch office (525 users), and a Branch Repeater virtual appliance (VPX) was selected for the 150 users at Branch Office 2. In the Datacenter, a Branch Repeater SDX appliance (Model 1355) was used to allow for a single connection point for remote Branch Repeater connections at the Datacenter.

WAN simulation and load generation, including WAN-byte traversal visibility, was provided by Apposite Linktropy 1GbE-based WAN simulator appliances inserted between the remote sites and the Datacenter site. No reduction in bandwidth was introduced in the test environment.

For remote access users, ICA Proxy and VPN services were required. To meet this requirement, a NetScaler appliance with an Access Gateway Enterprise Edition license was deployed in the datacenter. LACP 802.3ad was used for all ISLs between devices.

Network Design Considerations:
- Each network appliance is limited by the number of connections it supports; most network appliances list the maximum number of TCP connections that they support. In a Citrix VDI environment, the ICA connection capacity of the Remote Access and WAN Acceleration devices needs to be considered. It is necessary to match this capacity with the site user requirements, while including a buffer for future site growth.
- To optimize storage communications in the environment, we recommend using a dedicated VLAN for server-to-storage connections.
- The virtual desktop VLANs were created to match our Provisioning Services server farms: 2,020 IPs in /21 subnets. IP addresses were provided for both Legacy and Synthetic NICs on the 2,020 virtual desktop VMs per farm; as a result, two /21 VLANs were created across two modular VDI blocks.
- A storage VLAN was created for our environment and was sized large enough to provide IP addresses for all of our VDI hypervisor blades. It was configured so that each blade used two storage NICs bound via MPIO.
- Consider separating heavy network traffic into a dedicated VLAN so that it does not interfere with other traffic. In our environment, the virtual desktop PXE boot traffic was separated based on the PVS servers servicing each modular VDI block.
- Uplinks between all network switches at Layer 2 should employ 802.3ad LACP Ethernet aggregation for increased throughput and resiliency.
- To determine the overall bandwidth needed, it is necessary to know the bandwidth required per session. The number of possible remote site sessions is directly proportional to the available bandwidth.
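As a companion to the design considerations above, the short sketch below estimates how many subnets of a given prefix length are needed for a desktop population, including a growth buffer and the extra addresses consumed when each desktop carries more than one NIC (Legacy plus Synthetic, as in this design). This is a planning aid only; the function name, defaults, and example values are assumptions for illustration.

    import math

    def subnets_needed(desktops: int, nics_per_desktop: int = 2,
                       growth_buffer: float = 0.20, prefix: int = 24) -> int:
        """Estimate how many IPv4 subnets of the given prefix length are required."""
        usable_hosts = 2 ** (32 - prefix) - 2          # exclude network and broadcast
        ips_required = math.ceil(desktops * nics_per_desktop * (1 + growth_buffer))
        return math.ceil(ips_required / usable_hosts)

    # 1,000 single-NIC desktops with 20% headroom, carved into /24 blocks -> 5 subnets
    print(subnets_needed(1000, nics_per_desktop=1, growth_buffer=0.20, prefix=24))

    # A 2,020-desktop PVS farm fits one /21 per NIC type (Legacy and Synthetic),
    # i.e. two /21 VLANs in total, as used in this design
    print(subnets_needed(2020, nics_per_desktop=1, growth_buffer=0.0, prefix=21))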

Figure 3: Multi-Site Enterprise with 5K Users Network Concept

Storage Infrastructure

Shared storage is one of the most critical components in a large-scale VDI environment. Scale, end-user satisfaction, and overall performance greatly depend on the storage systems deployed and their capabilities. Hardware and software features employed in the design of the storage layer architecture also impact these areas. As shown below, the storage layer touches and shares I/O with every other common block in the VDI server architecture.

Storage Planning

Figure 4: Storage Infrastructure

Storage planning consists of two major sections: capacity planning and performance planning. If you have chosen a network attached storage (NAS) implementation, you must also account for additional network impact.

Storage capacity planning is the projection of disk space assignment and allocation on the storage appliances, as well as the projection of the required space based on known requirements. Single Server Scalability tests with application workloads specific to your company's operation can help you start your sizing diligence. This is a very common first step in any VDI storage implementation, as everyone's storage needs are unique.

Storage performance planning takes into account your physical disk assignment and allocation while balancing the required disk IOPS for acceptable end-user response. Storage processor CPU utilization (while running at full load) within the storage device is also an important data point. Every individual environment has its own requirements and bill of materials; there is no one-size-fits-all formula in storage calculations.

When planning for a network attached storage (NAS) implementation in your environment, an added consideration is the Ethernet network load and utilization that is associated with using NAS in large-scale virtual deployments. This often dictates employing 10GbE networking for the storage layer of the network. For the storage device, this may mean bonding multiple 10GbE network interfaces at the device level in order to aggregate to a higher bandwidth capability for each storage subsystem.

Sizing Considerations:
- When planning your NetApp RAID groups, the size and continuity of the RAID group member sizes change as you add more shelves of disk. There are two suggestions to consider:
  o Never exceed 24 disk members per RAID group, per guidance from NetApp support.
  o Whenever possible, keep all of your RAID group sizes equal, for even, continuous data-stripe length in your aggregates.
- Consider the disk size, media type, spindle motor speed, and disk cache of the disks employed in the array when calculating the projected storage implementation.
- A free-space percentage of the provisioned storage units should be included in the capacity consumption. Free space is needed to allow sufficient seek time. In addition, some features of your storage subsystem, such as snapshots and deduplication, may require planning for additional space.
- Consider the memory and CPU capacity the storage processors need in order to serve the I/O load of the disk shelves.

Design Considerations:
- Storage performance is greatly affected by the number of aggregate spindles in the storage array. With regard to NetApp storage, a very critical detail is the sizing of the RAID groups in proportion to the number of disks available for use. For NetApp, when employing single-parity RAID 4, there is a limitation of 16 disks per RAID group. When employing 64-bit RAID-DP (double-parity RAID 6), the disk design is limited to 24 disks per RAID group before performance limitations are imposed on the design. All RAID groups should be as close to the same size as possible. In addition, when calculating storage availability, remember that there is a RAID penalty: usable capacity goes down when spindles are used for RAID parity.
- When using iSCSI, the overall Ethernet overhead of the protocol must be taken into consideration. The load of Ethernet encapsulation and de-encapsulation can be extremely high when aggregated at this scale. We recommend the use of a dedicated TCP offload engine (TOE) card to maximize performance.
- Implement the best practices of the chosen storage vendor with regard to LUN types, PIT copy practices, drivers, firmware, operating system releases, and other details that affect the host-to-storage relationship.
- Monitor the switch connected directly to the NAS devices, because that switch is a main point of interest when troubleshooting storage connectivity issues.
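The capacity and performance planning described above can be reduced to a rough calculator: multiply per-desktop space and IOPS by the desktop count, add a free-space reserve, and convert the IOPS total into a spindle estimate. The sketch below does exactly that; every constant (per-desktop space, IOPS per desktop, IOPS per spindle, free-space percentage) is an assumption that must be replaced with figures measured in your own single-server scalability tests.

    import math

    def storage_estimate(desktops: int,
                         gb_per_desktop: float = 5.0,      # e.g. 1GB .BIN file + 4GB write cache
                         free_space_pct: float = 0.28,     # free-space reserve used for VDA LUNs here
                         iops_per_desktop: float = 10.0,   # assumption; measure your workload
                         iops_per_spindle: float = 175.0   # assumption for a 15K SAS disk
                         ) -> dict:
        """Back-of-the-envelope LUN capacity and spindle-count estimate."""
        capacity_gb = desktops * gb_per_desktop * (1 + free_space_pct)
        total_iops = desktops * iops_per_desktop
        return {
            "lun_capacity_gb": math.ceil(capacity_gb),
            "steady_state_iops": math.ceil(total_iops),
            "data_spindle_estimate": math.ceil(total_iops / iops_per_spindle),
        }

    # One 505-desktop Hyper-V cluster: ~3,232GB, in line with the 3.23TB VDA LUN
    # allocation described in the next section
    print(storage_estimate(505))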

Storage Deployment

It was found that a single NetApp FAS3270 storage system could reliably host storage for at least 2,020 XenDesktop 5.6 virtual desktop VMs virtualized on Hyper-V using the iSCSI storage protocol.

Infrastructure Storage
NetApp FAS3240 storage systems were utilized for the common infrastructure storage in this environment. These storage systems hosted iSCSI LUNs for the common infrastructure Hyper-V Cluster, SQL database storage, and User Profile storage. They also hosted NFS volumes for test client storage.

Sizing Considerations:
- The LUN for the infrastructure server VMs should be large enough to host several large fixed-VHD VMs. Note that the need for fixed VHDs is brought about by the combination of Hyper-V 2008 R2 SP1 and NetApp storage. All VHDs should be fixed VHDs; dynamic VHDs should not be used with iSCSI storage on a Windows 2008 R2 SP1 server.
- The Datacenter Infrastructure VM LUN was assigned 1TB to support 20 virtualized infrastructure server VMs at 40GB each with 35% free space, based on Microsoft and NetApp best practices.
- The SCVMM SQL Server LUN was assigned 23GB to host five SCVMM 2012 databases (each database was allocated 3GB) with 35% free space.
- The PVS SQL Server LUN was assigned 16GB to store three PVS farm databases, and also included 35% free space.
- User Profile LUNs were assigned 100GB each and were shared via iSCSI. In addition, each LUN was assigned to a specific Modular Block. This size is based on the small user profiles (less than 50MB per user) present in our environment. Users accessed these LUNs as Windows shares off of an infrastructure Windows file server and not directly on the storage via iSCSI.

Design Considerations:
- Windows Server 2008 R2 SP1-based Hyper-V VMs have .BIN files equal to the amount of RAM assigned to the VM. The .BIN files produce minimal IOPS activity, and should be assigned to thin-provisioned and/or lower-cost SAN storage if available.

VDI Storage
NetApp FAS3270s were utilized for the VDI storage. Sizing information for this environment:

VDA Storage
VDA LUNs were created for each 8-node Hyper-V Cluster. Each cluster hosted 505 virtual desktop VMs and was allocated 3.23TB, which included 28% free space.

VDA LUN space calculation:
- VDA RAM 1GB = Hyper-V .BIN file size (when possible, place this on thin-provisioned storage)
- VDA write cache 4GB (this must be large enough to contain the page file as well as difference data)
- Total VM space: 5GB each
- Total space required: 505 VMs * 5GB = 2,525GB
- LUN size*: 2,525GB + 28% = 3,234GB

*For the purpose of consistency among all of the storage we were using, the VDA LUN size was chosen to be 3,234GB, which contains approximately 28% free space. In addition, 1GB RAM was selected per the amount utilized as well as for maximum scalability. VDA RAM size needs to be determined for your specific user environment.

PVS File Server Storage
Each of the three PVS File Servers was assigned a 550GB iSCSI LUN that was shared as a Windows File Server share and mounted by the PVS server farms from a UNC location. Each PVS iSCSI LUN was assigned to a PVS File Server dedicated to a 2,020 virtual desktop PVS Farm. Each PVS LUN was allocated 550GB.

PVS LUN size calculation:
- Host virtual desktop vDisks: 2 x 40GB = 80GB
- Backup location for virtual desktop write cache during failure of shared storage (up to 160MB per virtual desktop): 2,020 * 160MB / 1,000 = 323GB
- Total size: 80GB + 323GB, with 35% free space = 544GB (rounded up to 550GB)

SCVMM Library Storage
The SCVMM Library iSCSI LUN was assigned 675GB to provide centralized storage for the virtual desktop template and also to contain the backup of critical environment VMs. The LUN size was chosen to provide enough storage for at least ten 40GB VMs with free space.

Cluster Quorum LUN
A 2GB Cluster Quorum LUN was assigned to each Failover Cluster to serve as the Quorum Witness storage LUN.

Design Considerations:
- Before creating the storage architecture for a large-scale environment, you need to collect storage utilization data, such as disk space and IOPS utilization, for each of the environment component types mentioned above.
- For monitoring performance in VDI environments, knowing all desired storage system specifications is critical. It is particularly important to monitor utilization of the processor, network interfaces, disk space, and disk performance of the storage device and its components.
- It is recommended to use network interface cards (NICs) that provide maximum performance and caching capabilities. In this test environment, we deployed Intel 10GbE cards (NetApp X1117A-R6), and this resulted in higher network throughput with lower system CPU load due to higher buffer memory.
- The NetApp LUN type and format-allocation units are important for the best performance of the cluster's shared storage. In this test environment, the Windows_2008 LUN type and 64KB allocation-unit size were found to be the best model.

Common Infrastructure

The next step of creating the Solutions Architecture was the planning and preparation of the Common Infrastructure.

Figure 5: Common Infrastructure

The common infrastructure was made up of the systems that provided core services to the entire environment. These systems comprised a mixture of physical and virtualized systems.

Infrastructure Deployment Methodology
The infrastructure operating system, features, roles, software, IP information, and other configuration settings were centrally managed and deployed with HP Insight Control server deployment. HP Insight Control allows for streamlined and consistent deployment to a large number of servers. The Common Infrastructure functions were hosted on HP BL460c G6 blade servers, which were managed by Insight Control. The entire infrastructure consisted of HP blades. We started by creating VLANs, then HP Virtual Connect server profiles, and finally deployed the operating systems, roles and features, software, and settings.

Physical Common Infrastructure
Resiliency and performance requirements of many infrastructure services mandated that these servers be physical rather than virtual. Additionally, two physical Domain Controllers were required to maintain the functionality of the Failover Cluster that supported the virtual Domain Controllers, per Microsoft best practices. All physical infrastructure servers ran on HP BL460c G6 servers.

Microsoft offers best practices for SCVMM on physical servers as well as for virtualizing the environment, including the option of running VMM as a highly available VM instead of relying on physical clusters. As there are a number of existing documents and tests that explore virtualized SCVMM servers, we decided to use the physical server option. Both designs are valid and supported by Microsoft.

The following software/services ran on physical machines for the Common Infrastructure:
- Active Directory / DNS / DHCP / NTP
- Microsoft Windows 2008 R2 SP1 Enterprise Edition with Hyper-V Role (this is the same code base as Microsoft Hyper-V Server 2008 R2 SP1, but includes the GUI management capabilities; we fully expect our test results to match results from that operating system choice as well)
- Microsoft SCVMM 2012 Update Rollup 2
- Microsoft SQL 2008 R2

Virtualized Common Infrastructure
In order to make the best use of system resources and to take advantage of virtualization, some common infrastructure components were virtualized. Virtualized Active Directory systems added resiliency to the already existing physical services and spread the Active Directory load during the boot storm and logon processes. All virtual machines in the infrastructure ran on Windows Server 2008 R2 SP1 with the Hyper-V Role, hosted on HP BL460c G6 server blades.

The following software/services were virtualized to support the Common Infrastructure:
- Active Directory / DNS
- XenDesktop Controllers (Desktop Delivery Controllers, or DDCs)
- Citrix License Server

Design Considerations:
- Ensure that virtualized systems participate in the NTP process.
- Disable the Hyper-V host time synchronization in the virtual guest services. This was applied on all virtualized Active Directory Domain Controllers and XenDesktop Controllers (using host time sync was a duplication of effort, since the hosts were already participating in the NTP time sync process).

The following sections outline the architecture of the common infrastructure environment.

Common Infrastructure Services Overview
- Active Directory
  o Two physical Active Directory DCs in the Datacenter: these servers provided DC, DHCP, DNS, and NTP services
- Hyper-V 2008 R2
  o One Hyper-V Cluster with 6 nodes
- SCVMM 2012
- SQL 2008 R2
- XenDesktop
  o Two XenDesktop Controllers
- Citrix Licensing
  o One Citrix License Server

DNS
DNS services are critical for both Active Directory services and XenDesktop communications. DNS services in both the Datacenter and the Branch Offices were used to fulfill name resolution requests and support local Active Directory requests. Active Directory Integrated Zones were created for both forward- and reverse-lookup zones. A reverse-lookup zone was created for each VLAN/subnet. This was required to allow two-way communication between XenDesktop and the VDAs.

Design Considerations:
- For Windows and Microsoft Office activation, if a Key Management System (KMS) is employed, DNS entries for the KMS service need to be added.

DHCP
One DHCP server resided in the Datacenter and one in each of the Branch Offices. These DHCP servers were used to provide IP addresses and the specific configuration options used by PVS to allow the virtual desktops to locate and boot from the PVS server.

Additional specific scope options used:
- Option 66 (Boot Server Host Name): set to the IP address of the PVS TFTP server
- Option 67 (Bootfile Name): set to ardbp32.bin, the PVS boot file name

Design Considerations:
- It is possible to configure TFTP high availability for a PVS environment; refer to the Citrix Provisioning Services documentation for more information.

Hyper-V and SCVMM Infrastructure
Virtualization in the Common Infrastructure was performed via Microsoft Hyper-V 2008 R2 SP1 and managed by SCVMM 2012.

SCVMM Servers
The infrastructure virtual machines were hosted on a Hyper-V Cluster that was managed by SCVMM. SCVMM was deployed as an active-passive clustered application on a two-node Failover Cluster. SCVMM clustering was deployed to provide resilient management of the critical hypervisor cluster for the common infrastructure. The SCVMM servers for the Common Infrastructure were the same HP BL460c G6 blades as the SCVMM servers for the VDI.

Design Considerations:
- SCVMM servers can be virtualized; Microsoft's guidelines for deploying SCVMM as a virtual machine can be found on the Microsoft TechNet website.
- Two SCVMM servers clustered in an active-passive Failover Cluster create a resilient Hyper-V management system, allowing for a single host failure.
- Our Common Infrastructure SCVMM Cluster used a dedicated database that was hosted on the VDI SCVMM SQL 2008 R2 database cluster.

Hyper-V Cluster
A Hyper-V Failover Cluster was deployed for the purpose of virtualizing the common and XenDesktop infrastructure systems. All virtual machines that provided common infrastructure services ran on this cluster of Hyper-V 2008 R2 SP1 servers in High Availability mode. This cluster consisted of HP BL460c G6 servers with NetApp-presented iSCSI LUNs for cluster shared storage.

Sizing Considerations:
- The number of servers in the infrastructure Hyper-V Cluster should be chosen based on the infrastructure VM count and the number of CPUs assigned to those VMs, while allowing for host failures.

Design Considerations:
- For enhanced storage network communication performance, it is recommended that Microsoft Multipath I/O (MPIO) with the NetApp Data ONTAP Device Specific Module (DSM) be used with two Storage VLAN NICs in a round-robin configuration.

XenDesktop Infrastructure

Figure 6: Shared XenDesktop Infrastructure

XenDesktop Controllers
Each XenDesktop controller was virtualized on Windows Hyper-V 2008 R2 SP1 and configured with four vCPUs. This configuration allowed sufficient performance to manage the assigned virtual desktop VMs, as well as the log-on requests from users during the test. Virtualized XenDesktop controllers were hosted on the common infrastructure Hyper-V Cluster.

XenDesktop Site
The XenDesktop site was configured to contain the following components:
- One connection per SCVMM Cluster
- One host record per Hyper-V Cluster, two per connection
- Two Machine Catalogs per Hyper-V Cluster:
  o Catalog for regular virtual desktop VMs
  o Catalog for remote access virtual desktop VMs
- One primary Desktop Group per Modular Block, with an additional desktop group for remote access users. This configuration also facilitates sizing tests.

This configuration facilitated management and troubleshooting in a large, multi-connection, multi-cluster XenDesktop VDI environment.

Each connection to SCVMM from XenDesktop was configured with custom advanced settings. The following list shows the host advanced settings that were assigned to each connection in XenDesktop Desktop Studio. The settings were adjusted from the defaults, primarily to accommodate the number of hosts each SCVMM server was managing.

This ensured that optimal performance was achieved. This host configuration allows 80 virtual desktop VMs to be started every minute, which is five per hypervisor host for a connection serving two 8-node clusters.

Host Configuration Advanced Settings:
- Max active actions: 100
- Max new actions per minute: 80
- Max power actions as percentage of desktops: 8%
- Max Personal vDisk power actions as percentage: 25%

For the full environment, these settings allowed all 5,050 virtual desktop VMs to be started in 12 to 13 minutes.

Sizing Considerations:
- XenDesktop components should be virtualized on Hyper-V in an N+1 configuration for resiliency and hosted on a resilient Hyper-V Cluster.
- Our performance results indicated that two XenDesktop Controllers could support at least the number of desktops in this environment. The CPU and RAM parameters should be determined during environment sizing tests, based on observed utilization of the XenDesktop controllers.

Design Considerations:
- The host advanced settings should be adjusted based on the number of hosts, the number of virtual desktops on the host connection, and the desired boot time for the environment.
- To ensure that the PVS farm can support the required boot speed, it is important to perform a boot storm validation of the host advanced settings in a PVS environment. This will validate whether the hypervisor and storage subsystems can sustain the increased load during the boot storm.

Additional information can be found in the Appendix under XenDesktop Site Details.

SQL for XenDesktop
The SQL environment that supported this XenDesktop deployment was configured for synchronous database mirroring with Principal, Mirror, and Witness servers. This model allowed for a high-performance, resilient configuration for the XenDesktop site database. Please note that SQL clustering can also be used to create a resilient configuration, or a virtual SQL server can be configured as an HA VM. SQL servers for the XenDesktop database store were configured on three HP BL460c G6 servers.

Deployment Note: The configuration of the SQL environment was done with the following steps:
1. Create the site database on the Principal server.
2. Back up the database.
3. Restore the database to the Mirror server with the No Recovery (NORECOVERY) option set. Failure to set this option will result in error messages regarding lack of access when attempting to start the mirror.

Citrix License Server
A single virtualized Citrix License Server was used to provide all of the needed licensing to the Citrix components, such as XenServer, XenDesktop, and PVS. One license file, XenDesktop Platinum, was utilized for the entire environment.

Sizing Considerations:
- A single virtual machine running Citrix License Server was able to support the entire environment with no issues. The VM was assigned 1 vCPU and 2GB RAM.

Design Considerations:
- The Citrix License Server should be deployed as a highly available VM to allow for a hypervisor host failure.
- It is recommended to back up the Citrix License Server VM for easy recovery in the event of failure.

Virtual Desktop VM
Virtual desktop parameters can greatly influence the sizing and performance of the modular VDI environment. Understand and assess the need for each virtual desktop parameter to ensure it does not negatively impact sizing, deployment, and operation.

Figure 7: VDI Environment

XenDesktop virtual desktops in this VDI environment were deployed with PVS as provisioned (streamed) VMs on Windows Server 2008 R2 SP1 with Hyper-V Role hypervisors. All virtual desktop VMs were configured to be highly available in Hyper-V Failover Clusters and were configured with fixed VHD drives to improve storage performance.

Virtual Desktop VM:
o 1 vCPU, 1GB RAM, 2GB page file, 4GB write cache (fixed VHD)
o Windows 7 x86
o Citrix XenDesktop Virtual Desktop Agent 5.6
o User Profile Manager 4.1
o Login VSI 3.6 configured for Medium workload

Figure 8: VDA Network Configuration

Virtual desktop VMs were configured with two network interfaces:
- Legacy NIC for the PXE boot process. This is required for virtual desktop VMs to boot from PVS and is specific to Hyper-V deployments.
- Synthetic NIC for optimized network communications in Windows (optional).

During virtual machine creation, the virtual desktop VM network interfaces were configured with static MAC addresses. The environment utilized the non-PvD virtual desktop model, the storage configuration for which is shown in the diagram below.

Figure 9: Streamed VDA Storage Structure with Non-PvD Configuration

Modular VDI Infrastructure

Figure 10: Modular VDI Block #1

With the common infrastructure in place, the next step was to design, create, and configure the infrastructure that would support the first Modular Block. Once the first Modular Block was established and tested, additional Modular Blocks were deployed. This section outlines the modular VDI infrastructure, Modular Block sizing, and the configuration of the Hyper-V environment, SCVMM, PVS, and SQL.

Infrastructure Deployment Methodology
Similar to the Common Infrastructure deployment methodology, the modular infrastructure utilized HP Insight Control for deployment of the operating system, features, roles, software, IP information, and other configuration. The VDI infrastructure was hosted on BL460c G7 servers and was also deployed and configured via HP Insight Control jobs.

Modular Block Overview
Modular VDI Block hardware contains the virtual desktops, Hyper-V servers, SCVMM servers, and PVS servers.

Figure 11: Modular VDI Block Infrastructure

Figure 12: Modular Block Overview

Modular Block Sizing
When scaling up a Virtual Desktop Infrastructure environment from a single server to a modular block, it is important to understand your equipment along with its capacity and performance capabilities. This is important not only for each single component, but also for the orchestrated system working together. Finding the optimal design involves multiple cycles of testing, troubleshooting, and resolution of issues that will eventually lead to a design where every component within the system has the ability to handle the load.

The first step in this process was to determine a single server's VM density. Based on this, the next step was to calculate the single-cluster VM density numbers. A single cluster consisted of 8 servers. We then progressed to a single-chassis VM density evaluation (in our case, with HP blades, this was 16 blade servers) that consisted of two clusters; this was defined as a Modular Block of virtual desktops. For this project, each of the blocks was built to support 1,010 users. It is important to note that moving up from a single server to a higher scale was an iterative process that required testing the environment; finding, isolating, and resolving the bottlenecks; and scaling the environment up until the goal was achieved. This resulted in the final Modular Block design and architecture.

To ensure failover capacity in the virtual desktop environment, all Hyper-V Clusters were configured with high availability. For this reason, all hosts were run at approximately 70% of maximum capacity, in order to accommodate a host failure by redistributing the VMs running on the failed host to other hosts within the cluster. These sizing numbers would have been greater had the goal of this test been to run at full server capacity. Based on the server models used in this project, the workload that was leveraged, and the configuration of the systems, it was found that a single HP BL460c G7 server could accommodate 90 virtual desktops at maximum load during single-server scalability testing.

Modular Deployment Design
With the number of virtual desktops defined at 1,010 per modular block, the infrastructure was designed to support a deployment of that size. Each block would have XenDesktop, PVS, SCVMM, Hyper-V, and NetApp shared storage. Each of these components needed to meet the performance and capacity requirements. The environment was configured to include a dedicated clustered SCVMM in each Modular Block instead of a shared SCVMM for two blocks. The PVS farm scaled as expected. It was found that with a single SCVMM servicing a single Modular Block of two Hyper-V clusters and 1,010 virtual machines, performance was at the expected level. As noted earlier, nonclustered SCVMM servers have been shown to support up to 2,000 virtual desktops. With the architecture implemented and tested, the remaining three blocks were deployed and prepared for a full five-block environment of 5,050 virtual desktops.

Design Considerations:
- All components of a modular block should be performance validated at the scale of the single block before merging with other blocks.
- Performance should be validated for all shared or common VDI components at the scale of deployment; in our case, this applied to the NetApp storage and the PVS server farms.

Modular Block Infrastructure

Hyper-V Virtualization for VDI

Hyper-V Host
This environment deployed HP BL460c G7 servers with Windows 2008 R2 SP1, the Hyper-V role, and the Failover Clustering feature for virtualizing Citrix XenDesktop virtual desktops.

The Hyper-V network was configured on the hosts prior to the creation of the cluster and the addition of SCVMM:
o VDA Virtual Network: configured as External on a matching NIC
o PXE Virtual Network: configured as External on a matching NIC

The network architecture for the Hyper-V Clusters was configured to segregate traffic for specific functions, including Management, Storage, VDA, and PXE:
o Management: cluster and RDP management. A separate VLAN was used for management operations.
o Storage: two NICs allowed for iSCSI MPIO access to the NetApp storage. A separate VLAN was used for storage traffic.
o VDA: dedicated VLAN for the VDA synthetic NICs. The host-side Hyper-V virtual NIC was created with no IP address, since it was not used for host management.
o PXE: dedicated VLAN for PVS streaming and PXE operations for the virtual desktops. The host-side Hyper-V virtual NIC was created with no IP address, since it was not used for host management.

The hypervisor network configuration is shown in the diagram below:

Figure 13: Hyper-V Host Network Configuration

Hyper-V Cluster
VDI Hyper-V Clusters were configured leveraging eight HP BL460c G7 servers per cluster. The 8-node cluster was chosen as the cluster size, allowing for two clusters per HP c7000 chassis. Each cluster connects to two NetApp iSCSI LUNs, one as a Cluster Shared Volume for VDA storage and one as the Cluster Quorum Witness.

Figure 14: Hyper-V Cluster

Calculations for sizing a VDI Hyper-V Cluster in our environment:
1. A single Hyper-V server's maximum load, after testing with Login VSI and the workload as we configured it, was found to be ~90 VDA VMs.
2. A single server at 80% capacity for a production environment was found to be 72 VMs, which equals 80% of 90.
3. A single cluster has eight hosts.
4. To allow for a single host failure within a cluster and maintain the 80% capacity limit per server, we reduced the server count by one (n-1, where n is the number of hosts in a cluster). This resulted in a desired single-cluster VM density of [72 * (8-1) = 504].
5. Based on these measurements, a single cluster will start with 505 VMs, which results in 63 VMs per host when all hosts are up and running. In the event of a host failure, each host will end up with 72 VMs, representing the maximum 80% target load on each server.

Sizing Considerations:
- The VDA Hyper-V Cluster VM count was chosen as 505 VMs to allow for a single hypervisor host failure and a maximum of 80% load per host.
- Storage for the Hyper-V Cluster was sized to provide storage for the number of virtual desktops to be deployed.
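The cluster-density arithmetic above can be captured in a short, hedged sketch. The defaults below mirror this document's measurements (90 VMs per host, an 80% utilization ceiling, 8-node clusters, one host reserved for failover) and are not general recommendations; note that the environment rounded the 504-VM result up to 505 VMs per cluster so that two clusters total 1,010 desktops.

    import math

    def cluster_vm_density(max_vms_per_host: int = 90,    # measured single-server maximum
                           target_utilization: float = 0.80,
                           hosts_per_cluster: int = 8,
                           reserve_hosts: int = 1) -> dict:
        """VM density for an N-node Hyper-V cluster that tolerates host failure."""
        vms_per_host_at_target = math.floor(max_vms_per_host * target_utilization)       # 72
        cluster_vms = vms_per_host_at_target * (hosts_per_cluster - reserve_hosts)        # 504
        return {
            "cluster_vms": cluster_vms,
            "vms_per_host_normal": cluster_vms / hosts_per_cluster,                       # 63.0
            "vms_per_host_after_failure": cluster_vms / (hosts_per_cluster - reserve_hosts),  # 72.0
        }

    print(cluster_vm_density())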

Design Considerations:
- Hyper-V Cluster VM density sizing was determined by a process that started with single-server sizing and then full cluster sizing. These estimates are highly dependent on the hardware specifications of the hypervisor hosts.
- The cluster should be validated with exactly the same network and storage configurations that will be deployed when scaling the VDI environment.
- The first node in each cluster was configured with File Services and a Windows share created for C:\ClusterStorage. This is the share for the Cluster Shared Volumes (CSVs), and is required for Citrix XenDesktop 5.6.

SCVMM for VDI
SCVMM in the XenDesktop and Hyper-V VDI architecture provides virtualization management functionality for the Hyper-V clusters and is the single point of connection between the XenDesktop controllers and the virtualization systems. It is also utilized for VM deployment, as the connection point for the PVS XenDesktop Setup Wizard.

SCVMM Host

Figure 15: Hyper-V, SCVMM, and XenDesktop Topology

Each SCVMM host was deployed on an HP BL460c G6 blade running Windows 2008 R2 SP1. Each SCVMM host was configured with two network interfaces: one for management of the Hyper-V hosts and another for the cluster application heartbeat.

The SCVMM cluster host network configuration is shown in the following figure:

Figure 16: SCVMM Host Network Configuration

Design Considerations:
- The SCVMM Cluster Quorum file share should be a dedicated Windows file share per cluster on a file server in the environment.
- The SCVMM host hardware configuration needs to be based on performance during both deployment and large-scale power operations on the virtual desktops and the virtualization hosts, as the numbers involved can differ.

SCVMM Cluster
SCVMM 2012 is used by XenDesktop to manage all of the Hyper-V Clusters and virtual desktop VMs. For each modular VDI Block, SCVMM was configured as a 2-node clustered application with the Cluster Quorum configured as Node + File Share Witness.

Figure 17: SCVMM for VDI in Modular Block

Each SCVMM cluster configuration contained the following items:
- A MAC Address Pool per Hyper-V cluster
- A Host Group per Hyper-V cluster, for MAC Address Pool and load-balancing policy assignments

Design Considerations:
- In order for the 2-node clusters to function properly, the Domain Computers Active Directory group should be added to the security settings of the file share used for the witness.
- In order to ensure there are no MAC address conflicts between the clusters, a Host Group with specific MAC address assignments should be used.
- Consider creating dedicated RunAs accounts for PVS and XenDesktop so that SCVMM job originators can be easily identified in the job log.

SCVMM Library File Server
All VDI SCVMM servers were connected to a dedicated SCVMM Library file server. Following Microsoft best practices for a highly available SCVMM configuration, the SCVMM Library was configured as a 2-node Failover Cluster running the Windows File Server application, and the storage was set up as a cluster-shared NetApp iSCSI LUN. The hosts used were HP BL460c G6 blades with Windows 2008 R2 SP1, configured with the Failover Clustering feature and the File Services role. The virtual desktop template used for deployment by the PVS XenDesktop Setup Wizard resided in the SCVMM Library. Additional free space for critical VM backups was also allocated on the SCVMM Library LUN.

The following shows the configuration of the virtual desktop VM template used for the deployment:
- RAM: 1GB of static vRAM
- CPU: 1 vCPU
- Legacy NIC: used for PXE traffic
- Synthetic NIC: used for VDA communication
- Hard Disk: assigned a static (fixed) 40GB VHD
- Boot Mode: PXE
- Availability: highly available

SCVMM SQL
A cluster utilizing shared storage, with a quorum configuration of Node and Disk Majority, supported the Microsoft SQL database.

Design Considerations:
- The SQL database should be designed to allow for host redundancy.

Provisioning Services (PVS) for VDI
Citrix Provisioning Services 6.1 servers provided virtual desktop deployment and provisioning in the test environment.

The decision to implement either physical servers or virtual servers for these components should be based on your specific requirements.

PVS Server Networking
Each PVS server was deployed with two NICs, one for management and another for streaming:
- A 1Gbps NIC assigned to the Management VLAN was used for server-to-server communication.
- A 9Gbps NIC assigned to the VDA PXE VLAN was used for streaming network traffic. The TFTP service was configured to run on this NIC on all PVS servers.

Figure 18: PVS Host Network Configuration

Each PVS server was configured for optimum performance at the scale deployed. To support 2,000+ desktops per farm, the following calculations and PVS server network and advanced settings were implemented:

VMs = Ports * Threads per Port
- Ports configured: 50 ports
- Threads per port: 16
- Total threads: 50 * 16 = 800

Other PVS server settings configured for improved performance:
- Buffers per thread: 32 buffers
- Devices booting: 1,000 devices
- Remote Concurrent I/O Limit: 24 transactions
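The streaming-capacity rule above (VMs = ports x threads per port) can be sanity-checked with a short sketch. Given a target number of concurrently streamed devices per server and a threads-per-port value, it returns the UDP port count required; the example figures mirror this environment's settings and should be validated against your own PVS testing.

    import math

    def pvs_ports_required(target_streams: int, threads_per_port: int = 16) -> int:
        """UDP ports a PVS server needs so that ports * threads >= target streams."""
        return math.ceil(target_streams / threads_per_port)

    # Roughly 800 concurrently streamed devices per PVS server, as sized here
    ports = pvs_ports_required(800, threads_per_port=16)
    print(ports, ports * 16)   # 50 ports -> 800 concurrent streams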

PVS Farm
Each PVS farm comprised three servers supporting two Modular Blocks (2,020 virtual desktop VMs) per farm, or roughly 700-800 streams per PVS server. Each PVS farm was configured with a shared vDisk store on a dedicated PVS File Server for the vDisk storage. The diagram below shows the PVS farm structure.

Figure 19: PVS for VDI Farm Structure

The configuration of the PVS farm for each Modular Block, as shown in the diagram:
- Site: <Environment Name>
  o Servers (each server was configured identically)
    Log events to the server's Windows Event Log: Enabled
    Stores: <shared storage path>
    Options: Active Directory password updates (3 days): Enabled
    Logging: logging level was set to Info
  o vDisk Pool
  o vDisk Update Management (not configured)
  o Device Collections
    <Modular Block 1>
    <Modular Block 1> Remote Users
    <Modular Block 2>
    <Modular Block 2> Remote Users
  o Views (not configured)
- Storage
  o Shared Storage
    Site: <Environment Name>
    Servers: Host1, Host2, and Host3: Enabled
    Path: <path to shared storage>
- Bootstrap settings were configured on each PVS server to load balance the boot process across all servers in the PVS farm.

In this configuration, if one of the PVS farm hosts failed, the remaining PVS servers would still be able to support the two Blocks.

There were two vDisks in each PVS farm: one for the Datacenter virtual desktops and one for the Remote Access virtual desktops, which carried additional configuration. The configuration of the vDisk for the virtual desktops is as follows:

vDisk Details
  o Name: <image name>
  o Size: 40GB
  o Mode: Cache on Device Hard Drive

vDisk Settings
- General
  o Access mode: Standard Image (multi-device, read-only access)
  o Cache type: Cache on device hard drive
  o Enable Active Directory machine account password management: Enabled
  o Enable streaming of this vDisk: Enabled
- Identification: (default values)
- Auto update: not configured

The PVS XenDesktop Setup Wizard was used to create the virtual desktop VMs, using a template from SCVMM. Special registry settings were applied to optimize the VM creation process. When using the PVS XenDesktop Setup Wizard to deploy the virtual desktops, the following setting was used to reduce the time required:

[HKEY_CURRENT_USER\Software\Citrix\ProvisioningServices\VdiWizard\Max_VM_CREATE_THREADS_PER_HYPERVISOR] set to 2 (this specifies two VM creations per Hyper-V host, so that only two write operations occur per Cluster Shared Volume LUN per host)

Sizing Considerations:

- Two PVS servers are required to support two Modular Blocks. The third PVS server should be added to the PVS farm for resiliency.
- A centralized Windows file share vDisk store should be placed on a shared NetApp iSCSI LUN to allow the best vDisk read performance, both during deployment and during the boot process.

Design Considerations:
- The number of servers in the PVS farm is based on the number of VMs that a specific PVS server configuration can support and the number of VMs the total PVS farm needs to stream, plus one server for resiliency in case of a host failure.
- Having a DNS entry for the Windows file server or storage system hosting the PVS store is a requirement, because an IP address cannot be used to point to a storage location.
- Consider using a Windows file share for the shared vDisk store to get the benefit of Windows share caching technology. This will help improve storage performance.
- Consider using the KMS setting with PVS vDisks if the image contains Microsoft Windows 7 or later and Office 2010 or later, which are activated using the Microsoft KMS volume licensing method.

PVS File Servers
Each PVS farm was configured with a shared vDisk store that connected to a dedicated Windows File Server share, which was backed by a shared NetApp iSCSI LUN. Dedicated PVS File Servers with dedicated iSCSI LUNs were configured for each PVS farm to provide the required performance. This reduces storage contention and file locking of the vDisks among the PVS farms.

PVS SQL
The SQL database should be designed to allow for host redundancy.

User Profile Management
The user accounts were created and organized based on a per-Modular Block design. Profile management was then configured based on that design. This allowed for better overall management of the user accounts and the policies associated with those accounts. Citrix User Profile Manager 4.1 was leveraged for user profiles. Each block was configured with a dedicated network share for profile storage, placed on a CIFS share located on a NetApp FAS3240. The user settings for profile management were assigned by Group Policy Objects (GPOs), which aligned configurations specific to each modular block.

Profiles for this project were configured as follows:
- Desktops without Personal vDisk: streamed; delete cache on logoff enabled
- Desktops with Personal vDisk: streamed; delete cache on logoff disabled

Design Considerations:
- Ensure that the GPO with the U.adm file has the appropriate settings.

Multi-Site Infrastructure
Figure 20: Multi-Site Infrastructure
The multi-site design with remote access was implemented to replicate the production environment of an enterprise-level organization: a back-end Datacenter plus geographically remote business locations that require access to resources in the Datacenter. Access for telecommuters was also taken into account. This was accomplished by including a Datacenter, two Branch Offices, and a Remote Access entry point for telecommuters.
Branch Offices
Branch offices were designed with segregated LAN and WAN networks. The WAN was created with routers and firewalls on both sides to emulate a production environment. The branch locations leveraged either a Branch Repeater VPX running as a VM on XenServer or a physical Branch Repeater 8820 appliance. Both branch office types connect to Repeater VPX instances hosted on a NetScaler SDX appliance in the central Datacenter.
Remote Access Users
With the increase of mobile work styles and devices used in organizations, remote access users are becoming more commonplace. Using a NetScaler Access Gateway Enterprise Edition, a separate network over a simulated WAN connection linked these users to the XenDesktop environment. These connections leveraged an ICA proxy configuration rather

than a full VPN tunnel. The ICA proxy configuration tunnels only ICA traffic, to XenApp and XenDesktop, over SSL, without establishing a full VPN tunnel on the client.
Design Considerations:
A major consideration was monitoring and reporting device state. Citrix Command Center was leveraged to report on all of the Citrix network devices, providing a single unified console to manage, monitor, and troubleshoot the entire global application delivery infrastructure. Command Center also incorporates up-to-date, proprietary SNMP counters that were used for reporting purposes.
Test Methodology
Test Milestones
Many tests were performed during the different phases of the project. After completion of each test run, performance and test reports were generated and analyzed. The purpose of each phase was to find possible bottlenecks caused by one or more components; these components were adjusted prior to the next higher-scale phase, ultimately leading to the full 5K-scale multi-site test.
Test phases:
Single Server Scale: A single server was tested to confirm the maximum load that a single server could support and to validate the environment.
Single Cluster Scale: A single cluster consisting of eight nodes was tested to obtain single-cluster performance data.
Single Modular Block Scale: A single chassis consisting of two clusters (16 nodes) was tested to obtain single-chassis performance data.
Dual Modular Block Scale: All modular blocks hosted by a single storage system were tested to determine single-storage-system performance data.
Appliance Full Scale Tests: Multiple tests were performed to validate that the network appliances assigned to branch and remote sites could handle the number of user sessions assigned to those sites. These tests were performed for each type or model of network appliance deployed, such as Branch Repeaters or Access Gateway.
Full 5K-Scale Multisite: This was the final phase of testing and consisted of all chassis, clusters, and servers. This test utilized multiple storage units and spanned multiple sites. It represented the primary goal of the project.
Test Tools
Testing consisted of multiple-user ICA sessions launched from Citrix Receiver clients to XenDesktop virtual desktop VMs virtualized on Hyper-V, with user activity driven by the Login VSI 3.6 medium workload. Performance was monitored and recorded for all environment systems and appliances, utilizing Windows Performance Monitor counters as well as Citrix Command Center for the Branch Repeater and NetScaler appliances.

Login VSI analyzer was used to record user experience data for all user sessions.
Session Launching
Citrix Receiver was utilized to launch ICA sessions to the virtual desktops. In this test environment, virtualized client launchers were used, each configured to launch multiple ICA sessions.
Orchestration tasks:
Launching an ICA client session to a XenDesktop VDA
Starting the in-session workload
Logging off the user at the end of the test
Launcher Details:
Datacenter Site: Client Launchers: 170; Number of VDAs: 3,775; Sessions per Client: 22
BR 8820 Site: Client Launchers: 30; Number of VDAs: 525; Sessions per Client: 17
BR VPX Site: Client Launchers: 9; Number of VDAs: 150; Sessions per Client: 16
AG Site: Client Launchers: 35; Number of VDAs: 600; Sessions per Client: 17
Figure 21: Client Launcher Details for Sites
Performance Capturing
Windows Performance Monitor was used to collect data from all major systems during each test run. This allowed for near real-time performance monitoring during the test and in-depth historical data analysis after the test completed. All pertinent data, including general system metrics, was collected and stored centrally. Citrix internal tools were leveraged for capturing specific data, such as XML brokering time and user logon time.
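As a concrete illustration of the capture approach described above, a small PowerShell sketch (not the project's actual collection scripts) can pull a handful of Performance Monitor counters from several hosts into a single .blg file for later analysis. The host names and counter list here are illustrative assumptions.

# Sketch: centrally collect a few Performance Monitor counters from selected hosts.
# Host names and counters are examples only.
$servers  = 'R1E4C3B1', 'R1E4C3B2'                      # example Hyper-V hosts from the results tables
$counters = '\Processor(_Total)\% Processor Time',
            '\Memory\Available MBytes',
            '\Network Interface(*)\Bytes Total/sec'

# 15-second samples for one hour (240 samples), exported in native Performance Monitor format
Get-Counter -ComputerName $servers -Counter $counters -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path 'C:\PerfLogs\TestRun.blg' -FileFormat BLG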

In-Session Workload Simulation
Login VSI is a publicly available tool from Login VSI B.V. that provides an in-session workload representative of a typical user. The predefined Medium workload was used for this test. In this case, the VSI workload is used to generate the traffic load through the infrastructure. Login VSI simulates user activity over the time period of the test, and this data is used to find the maximum number of active sessions that can be connected through the environment. In addition, the workload script ran on the server-side desktop, and the generated traffic stream was almost completely downstream. The test was configured to have each user execute two Login VSI loops of 12 minutes each. The following defines the Login VSI Medium workload loop:
The workload emulates a medium knowledge worker using Office, Internet Explorer, and PDF applications.
Once a session is started, the medium workload repeats every 12 minutes.
During each loop, the response time is measured every two minutes.
The medium workload opens up to five applications simultaneously.
The type rate is 160ms per character.
Approximately two minutes of idle time is included to simulate real-world users.
For additional detail, see the Login Consultants VSI Admin Guide.
Results
Performance Results
During the execution of the test, performance was monitored, and data was gathered after each test to gauge the load on each component in the environment. Performance was gathered during boot storms and test runs.
Performance Results Boot Storm
Boot Storm performance data was captured while spinning up the 5,000 virtual desktops in preparation for the Test Run. The Boot Storm starts when XenDesktop powers on the first virtual desktop and ends when all desktops are registered.
PVS for VDA Performance (Boot Storm)
During the boot storm, we captured the following results:
Maximum CPU load: 50%-55%
The average PVS sent bandwidth per VM was 629KBps.
The results validated our design assumption of using a 3-server PVS farm to support two Modular Blocks of 2,200 streamed desktops.
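A rough aggregate-bandwidth check (not part of the original results) helps put the per-VM figure above into context. It assumes, as a worst case, that all 2,200 desktops in a farm stream at the reported average simultaneously; the split across the farm's servers is only an even-distribution estimate.

# Sketch: worst-case aggregate PVS streaming bandwidth for one farm, from the figures above.
$desktopsPerFarm = 2200      # two Modular Blocks streamed by one PVS farm
$perVmKBps       = 629       # average PVS sent bandwidth per VM during the boot storm
$pvsServers      = 3         # PVS servers in the farm (one of them added for resiliency)

$totalMBps     = ($desktopsPerFarm * $perVmKBps) / 1KB          # ~1,351 MBps aggregate
$perServerAll  = $totalMBps / $pvsServers                        # ~450 MBps with all three servers
$perServerLess = $totalMBps / ($pvsServers - 1)                  # ~676 MBps if one server fails

"{0:N0} MBps total; {1:N0} MBps per server; {2:N0} MBps per server after one failure" -f `
    $totalMBps, $perServerAll, $perServerLess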

Figure 22: PVS for VDA Performance Boot Storm (per-server CPU peaks for the nine PVS servers serving Blocks 1&2, 3&4, and 5)
Note: Modular Block 6 was not deployed in the environment; thus, the load on the third PVS farm is only 1,100 VMs.
SCVMM for VDA Performance (Boot Storm)
During the boot storm, we captured the following results:
CPU Utilization: 17%
Bandwidth Utilization: 1MBps
The results validated our design assumption that a single SCVMM server can handle a Modular Block of 1,100 desktops.
Figure 23: SCVMM for VDA Performance Boot Storm (per-server CPU peak and RAM utilization for the ten SCVMM servers, two per Modular Block)

Performance Results Test Run
This section covers the Test Run performance data captured on the systems while running the full-scale virtual desktop test.
Hyper-V for VDA Performance (Test Run)
The test run validated that the 8-server cluster was sized adequately, with CPU utilization of 70-80% captured. The test data also shows that the cluster is able to keep all VMs running if one host fails: during the test, one of the hosts failed, and its VMs were distributed to the other seven nodes within the cluster. Hyper-V RAM utilization was as expected. Figure 24 shows the data for a single Modular Block. The overall average CPU utilization for all Blocks was 73.48%, with a maximum of 87.35%.
Figure 24: Hyper-V for VDA Performance (per-host CPU peak and RAM utilization for the 16 Hyper-V hosts of Modular Block 1, two clusters of eight hosts)
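A quick headroom check (not part of the original report) shows why the single-node failure observed during the test could be absorbed: spreading the failed node's share of the load across the remaining seven nodes stays below full CPU saturation at the measured average utilization.

# Sketch: estimated per-node CPU after one node of an 8-node cluster fails,
# using the average utilization reported above.
$nodes         = 8
$avgCpuPercent = 73.48                                   # average CPU across all blocks during the test run

$cpuAfterFailover = $avgCpuPercent * ($nodes / ($nodes - 1))
"Estimated per-node CPU after one node failure: {0:N1}%" -f $cpuAfterFailover   # about 84%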

PVS for VDA Performance (Test Run)
The PVS servers showed very low CPU utilization and consumed few memory resources during the test run. This is expected, as the maximum load for the PVS servers occurs during the boot storm.
System       Role                      CPU Peak (%)
R1E3C1B3     PVS for VDA - Block 1&2   9.9
R1E3C2B3     PVS for VDA - Block 1&2   12.2
R3E7C3B3     PVS for VDA - Block 1&2   11.4
R1E3C1B11    PVS for VDA - Block 3&4   12.8
R1E3C2B11    PVS for VDA - Block 3&4   12.4
R3E2C2B11    PVS for VDA - Block 3&4   11.4
R1E3C2B6     PVS for VDA - Block 5     6.2
R3E2C2B6     PVS for VDA - Block 5     7.5
R3E7C3B6     PVS for VDA - Block 5     5.1
Figure 25: PVS for VDA Performance
SCVMM for VDA Performance (Test Run)
The test results show low CPU and RAM utilization on the SCVMM servers, as there are only limited requests to SCVMM during the test run.
System       Role              CPU Peak (%)
R1E3C1B13    SCVMM - Block 1   1.2
R3E7C3B13    SCVMM - Block 1   6.8
R1E3C2B13    SCVMM - Block 2   1.4
R3E2C2B13    SCVMM - Block 2   2.0
R1E3C1B5     SCVMM - Block 3   1.9
R3E7C3B5     SCVMM - Block 3   8.4
R1E3C2B5     SCVMM - Block 4   15.0
R3E2C2B5     SCVMM - Block 4   2.0
R3E2C2B3     SCVMM - Block 5   1.2
R3E7C3B11    SCVMM - Block 5   1.8
Figure 26: SCVMM for VDA Performance

SCVMM for VDA Library Server File Server Performance (Test Run)
The test results show minimal utilization on the SCVMM Library servers (CPU peaks of 0.9% and 1.3% on R1E3C1B1 and R3E2C2B1, the two SCVMM Infrastructure servers).
Figure 27: SCVMM for VDA Library Server File Server Performance
Multi-Site Performance (Test Run)
The results for the Branch Repeater VPX showed a 3.81 to 1 compression ratio, or about 73.75% traffic reduction, over the life of the test run. The graphs below show the send and receive comparisons for both compressed and non-compressed traffic. Notice that the receive traffic is more prevalent, since the virtual desktop traffic is being received by the endpoints at the branch office.
Figure 28: BR VPX Receive Stats
Figure 29: BR VPX Send Stats
With the SDX appliance on the Datacenter side accepting the accelerated connections, note that the SDX has higher send traffic, which correlates to the receive traffic on the branch side.
Figure 30: BR SDX Receive Stats
Figure 31: BR SDX Send Stats
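The compression ratio and the percentage above are two expressions of the same measurement; a quick check (not part of the original report) confirms they agree, and the same arithmetic applies to the BR 8820 figures reported below.

# Sketch: a compression ratio of R:1 corresponds to a traffic reduction of (1 - 1/R).
$ratio     = 3.81
$reduction = (1 - 1 / $ratio) * 100
"{0}:1 compression = {1:N2}% reduction" -f $ratio, $reduction    # 73.75%, matching the figure above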

The next branch office type leveraged a physical Branch Repeater (BR) 8820. This appliance showed results similar to the VPX but can handle more connections because it is a more powerful appliance. The test run showed a 3.48 to 1 compression ratio, or about 71.3% traffic reduction. The graphs below show the send and receive comparisons for both compressed and non-compressed traffic. Similar to the VPX statistics above, the BR 8820 showed the following:
Figure 32: BR 8820 Receive Stats
Figure 33: BR 8820 Send Stats
Figure 34: BR SDX Receive Stats
Figure 35: BR SDX Send Stats
Additional Testing: PVS with Personal vdisk (PvD)
The PvD configuration was deployed in a separate, parallel environment in order to perform PvD-related tests; the virtual desktop was streamed from PVS, and its Personal vdisk utilized GB on Cluster Shared Volume storage. Please note that the following PvD data cannot be compared directly with our non-PvD environment, as it was deployed with a newer version of PvD containing performance enhancements. The diagram below shows the storage structure for the PvD virtual desktop deployed in this separate, parallel test environment.

Figure 36: Streamed VDA Storage Structure with PvD Configuration
Objective
A separate environment, originally used for the initial sizing work for this project, was also used to do additional testing of PVS with Personal vdisk. The goal of this additional test was to determine the maximum number of desktops that could successfully pass a test using PvD in the same environment as non-PvD.
Results
A successful test was run with 550 virtual desktops. Below is some of the data that was analyzed.

Figure: VDA Storage NetApp 3170A (IOPS) - test size, total IOPS, and per-desktop total/read/write IOPS
Figure: PvD Storage NetApp 3170B (IOPS) - test size, total IOPS, run read/write IOPS, and per-desktop read/write IOPS
Figure: VDA Storage NetApp 3170A Storage Processors (Processor Busy: processors 1-3 at 36.85%, 43.9%, and 54.2%)
Figure 37: PvD Storage NetApp 3170B Storage Processors (Processor Busy: processors 1-3 at 6.1%, 8.71%, and 46.44%)
Lessons Learned
The project provided a wealth of valuable lessons learned, from design through deployment and throughout the testing process. They included:
Strong PowerShell scripting capabilities are recommended for large-scale XenDesktop deployments utilizing Hyper-V and SCVMM 2012.
NetApp Storage:
o Plan the VDA storage for either your boot-storm requirements or your steady-state requirements. Boot-storm requirements are typically higher than steady-state requirements. If you plan for your steady-state requirements, procedures need to be in place to prevent all of your desktops from starting simultaneously, as this will result in a decrease in performance.
Hyper-V / Failover Cluster:
o VM failover worked as expected during the failure of a single Hyper-V host within a cluster, as was found during testing with successful VDA operations and VSI execution.

o The Hyper-V host storage networks were load-balanced via MPIO. During the Boot Storm, the IO numbers on the two channels were roughly equal, showing that MPIO was working as expected and balancing the load correctly between them.
SCVMM and XenDesktop:
o During this test, shutdown commands for the non-persistent PVS VMs executed faster when they were issued directly against SCVMM using asynchronous PowerShell jobs, as sketched below.
Network Appliances:
o The Branch Repeater appliances were capable of handling the assigned load, although for some of them the load was very close to the absolute maximum (100% CPU load on the BR VPX at 162 sessions and on the NS SDX VPX at 530 sessions).
Performance Design Considerations (settings that were tuned to affect performance or scalability):
o NetApp Storage Systems: Various settings, such as Flow Control, the Data ONTAP version, and Fast Path, allow the storage system to be tuned to the optimum configuration. Different storage protocols, such as iSCSI or CIFS, for the various shares have a direct effect on storage utilization.
o PVS: The location and type of storage used for the vdisk affects the management time of the PVS farm.
o Windows Clustering: Options such as NIC teaming, cluster quorum configuration, and Cluster Shared Volumes affect the performance of the VDI environment.
o XenDesktop: Advanced host-connection settings governing power actions allowed us to configure the optimum load and speed of the boot storm.
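The asynchronous shutdown approach mentioned above can be illustrated with a short VMM PowerShell sketch. This is not the project's actual script; the SCVMM server name, the desktop naming filter, and the use of the -Shutdown switch are illustrative assumptions.

# Sketch: bulk shutdown of non-persistent PVS desktops issued directly against SCVMM 2012,
# queuing each operation as an asynchronous VMM job.
Import-Module virtualmachinemanager

$vmm = Get-SCVMMServer -ComputerName 'scvmm01.contoso.local'     # hypothetical SCVMM server name

# Select the pooled PVS desktops (hypothetical naming convention) that are currently running
$desktops = Get-SCVirtualMachine -VMMServer $vmm |
    Where-Object { $_.Name -like 'XDVDA-*' -and $_.Status -eq 'Running' }

# Queue a graceful shutdown for each VM without waiting for the previous job to complete
foreach ($vm in $desktops) {
    Stop-SCVirtualMachine -VM $vm -Shutdown -RunAsynchronously | Out-Null
}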

Conclusions
This document focused on deploying a Citrix desktop virtualization solution for interoperability at scale with Microsoft, HP, NetApp, and Cisco. We validated the following:
- We designed and deployed a highly reliable, multi-site, clustered XenDesktop solution with Hyper-V and SCVMM scaling to 5,000 desktops.
- The Citrix modular-block architecture scaled linearly, proving that future growth of an environment can be achieved by replicating modular blocks.
- High availability and resiliency results showed that the Hyper-V failover cluster correctly handled the failure of a single Hyper-V host.
- Testing Citrix PvD provided valuable additional personal data and application persistence without performance degradation.
The performance numbers of this hardware and software configuration were based on a real-world environment; however, the numbers may vary based on workload, user type, and configuration settings. Although enterprise environments vary in scale and features, the best practices described in this document will be helpful to any Citrix desktop virtualization solution.

Appendix
Documents References
XenDesktop 5.6 and Upgrading to Microsoft System Center 2012 Virtual Machine Manager
Configuring XenDesktop 5.6 with Provisioning Services 6.1 and System Center Virtual Machine Manager 2012 RC
How to Configure XenDesktop 5 with Microsoft Hyper-V and System Center Virtual Machine Manager
How to Configure XenDesktop 5 with Microsoft Hyper-V and SCVMM
SCVMM 2012 Creating a Hyper-V Cluster
How to Change Personal vdisk Drive Letters in XenDesktop
Provisioning Services 6.1
XenDesktop and XenApp Best Practices
Infrastructure Planning and Design
Hardware Configuration
Active Directory Physical Domain Controller Configuration:
System Function: Active Directory Physical Domain Controller
Hardware Model: HP BL460c G6
Processor Configuration
CPU Type: Intel Xeon 2.27GHz
Number of CPUs: 2
Number of Cores: 4

Memory Configuration: 96GB
Network Configuration
Types of Network Cards: HP NC532i Dual Port 10GbE Multifunction BL-c Adapter
Hyper-V Cluster Pool Specifications
Hypervisor Cluster/Pool Specifications
Function: Common Infrastructure Virtualization
Hardware Model: HP BL460c G6
Member Host Count: 6
Pool / Cluster Configuration: Failover Cluster, Node + Disk Quorum
Network Configuration
Types of Cluster Networks: Management, Storage x2, DC INF
Number of Network Cards: Total 8, Utilized 4
VLAN Configuration: Management, Storage, DC INF
Shared Storage Configuration
Shared Storage Protocol: iSCSI to NetApp with MPIO (2 LUNs)
Shared Storage Type: NetApp 3240
Shared Storage Size: 16GB
Software Configuration
Operating System: Windows 2008 R2 SP1
Installed Software: Failover Cluster, Hyper-V, RDS, NetApp DSM 3.5
Misc Special Settings
HA / Resiliency Configuration: HA configuration to allow for single host failure
Hyper-V Host for Virtual Desktops Configuration (Host #1):
System Function: Hyper-V Host for VDAs (Host #1 in Cluster)
Hardware Model: HP BL460c G7
Processor Configuration
CPU Type: Intel Xeon 2.67GHz
Number of CPUs: 2

Number of Cores: 6
Memory Configuration: 192GB
Network Configuration
Types of Network Cards: NC553i Dual Port FlexFabric 10Gb
Number of Network Cards: Total 8, Utilized 5
Network Card Assignments: Management, 2x Storage, VDA, PXE
VLAN Configuration: Management, Storage, VDA, PXE
IP Configuration: IP addresses on Management and Storage interfaces
Shared Storage Configuration: iSCSI to NetApp with MPIO (2 LUNs)
Local Storage Configuration
Storage Controller: Smart Array P410i
Storage Setup: RAID, SAS 15K (2x 75GB SAS 15K HDDs)
Software Configuration
Operating System: Windows 2008 R2 SP1 Enterprise
Installed Software: Failover Cluster, Hyper-V, RDS, NetApp DSM 3.5, File Services
Special Settings: 6GB Swap File, Shared C:\ClusterStorage
Hypervisor Host Settings: NetApp DSM 3.5 for MPIO storage connectivity
Hypervisor Network Configuration: 2 Virtual Networks, PXE and VDA, with no Management System Access
Cluster Host Settings: Failover Cluster Member Host, 8 Host Cluster

Hyper-V Host for Virtual Desktops Configuration:
System Function: Hyper-V Host for VDAs
Hardware Model: HP BL460c G7
Processor Configuration
CPU Type: Intel Xeon 2.67GHz
Number of CPUs: 2
Number of Cores: 6
Memory Configuration: 192GB
Network Configuration
Types of Network Cards: NC553i Dual Port FlexFabric 10Gb
Number of Network Cards: Total 8, Utilized 5
Network Card Assignments: Management, 2x Storage, VDA, PXE
VLAN Configuration: Management, Storage, VDA, PXE
IP Configuration: IP addresses on Management and Storage interfaces
Shared Storage Configuration: iSCSI to NetApp with MPIO (2 LUNs)
Local Storage Configuration
Storage Controller: Smart Array P410i
Storage Setup: RAID, SAS 15K (2x 75GB SAS 15K HDDs)
Software Configuration
Operating System: Windows 2008 R2 SP1 Enterprise
Installed Software: Failover Cluster, Hyper-V, RDS, NetApp DSM 3.5
Special Settings: 6GB Swap File
Hypervisor Host Settings: NetApp DSM 3.5 for MPIO storage connectivity
Hypervisor Network Configuration: 2 Virtual Networks, PXE and VDA, with no Management System Access
Cluster Host Settings: Failover Cluster Member Host, 8 Host Cluster

PVS Servers Configuration for each Modular Block:
System Function: PVS Server for VDAs
Hardware Model: HP BL460c G6
Processor Configuration
CPU Type: Intel Xeon 2.67GHz
Number of CPUs: 2
Number of Cores: 4
Memory Configuration: 96GB
Network Configuration
Types of Network Cards: NC553i Dual Port FlexFabric 10Gb
Number of Network Cards: Total 8, Utilized 2
Network Card Assignments: Management, PXE
VLAN Configuration: Management, PXE
IP Configuration: IP addresses on Management and Storage interfaces
Shared Storage Configuration: CIFS share on NetApp for PVS Store
Local Storage Configuration
Storage Controller: Smart Array P410i
Storage Setup: RAID, SAS 15K (2x 75GB SAS 15K HDDs)
Software Configuration
Operating System: Windows 2008 R2 SP1 Enterprise
Installed Software: RDS, PVS 6.1 Build 188
Special Settings: 6GB Swap File
HA / Resiliency Configuration: 3-server PVS farm, shared store
SCVMM Host for Virtual Desktops Configuration:
System Function: SCVMM Host for VDAs
Hardware Model: HP BL460c G6
Processor Configuration
CPU Type: Intel Xeon 2.67GHz
Number of CPUs: 2
Number of Cores: 4

Memory Configuration: 96GB
Network Configuration
Types of Network Cards: NC553i Dual Port FlexFabric 10Gb
Number of Network Cards: Total 8, Utilized 2
Network Card Assignments: Management, Heartbeat
VLAN Configuration: Management, Heartbeat
IP Configuration: IP addresses on Management and Heartbeat interfaces
Shared Storage Configuration: iSCSI to NetApp with MPIO (2 LUNs)
Local Storage Configuration
Storage Controller: Smart Array P410i
Storage Setup: RAID, SAS 15K (2x 75GB SAS 15K HDDs)
Software Configuration
Operating System: Windows 2008 R2 SP1 Enterprise
Installed Software: Failover Cluster, RDS
Special Settings: 6GB Swap File
Cluster Host Settings: Failover Cluster Member Host, 2 Host Cluster
Hardware Specifications
Servers
Manufacturer: HP; Model: BL460c G6; Role: XD SQL, SCVMM SQL, SCVMM, PVS SQL, PVS; CPU: Dual Quad-Core Intel Xeon L GHz; Memory: 96GB; Network: HP Dual Port 10G NC532i
Manufacturer: HP; Model: BL460c G7; Role: Hyper-V; CPU: Dual Hexa-Core Intel Xeon X GHz; Memory: 192GB; Network: HP Dual Port 10G NC553i

Storage Systems
Device: R4E1NA327B; CPU: Xeon(4) 3.0GHz; Interface: vif1g; Shelf/Disk: 2/48 450GB; Purpose: VDA for Block3 and Block4
Device: R4E2NA327B; CPU: Xeon(4) 3.0GHz; Memory: 248 MB; Interface: vif1g; Shelf/Disk: 4/98 300GB; Purpose: VDA for Block1 and Block2
Device: R4E6NA327A; CPU: Xeon(4) 3.0GHz; Memory: 248 MB; Interface: vif1g; Shelf/Disk: 3/72 450GB; Purpose: VDA for Block5
Device: R4E6NA327B; CPU: Xeon(4) 3.0GHz; Memory: 248 MB; Interface: vif1g; Shelf/Disk: 3/72 450GB; Purpose: VDI Infrastructure
Device: R4E5NA324A; CPU: Xeon(4) 2.33GHz; Memory: 8192 MB; Interface: vif1g; Shelf/Disk: 2/24 450GB; Purpose: Infrastructure / User Profiles
Device: R4E5NA324B; CPU: Xeon(4) 2.33GHz; Interface: vif1g; Shelf/Disk: 2/24 450GB; Purpose: User Profiles
Note: All of the devices in these storage systems ran the same Data ONTAP version and mode.
Network Switches
Device: Nexus; Manufacturer: Cisco; Model: N71kX; Role: Core L2/L3
Device: H3C-582; Manufacturer: HP/H3C; Model: 582; Role: Distribution L2/L3
Device: H3C-581; Manufacturer: HP/H3C; Model: 581; Role: 1G Access
Junipers Network Appliances
Device: R3E12U38-JUN1; Model: SRX240; Role: Branch L3 Router; Version: 1.R3.1
Device: R3E12U3-JUN5; Model: SRX240; Role: Branch L3 Router; Version: 1.R3.1
Device: R3E12U28-JUN6; Model: SRX240; Role: Datacenter L3 Router; Version: 1.R3.1
AGEE Network Appliances
Device: R3E2U4-AG-1; Model: MPX-15; Role: SSL VPN; Version: NS 9.4 Build 5.4nc
SDX Network Appliances
Device: R2E3U29-NS-MSDX-1-XS1; Model: SDX-11500; Role: Hypervisor
Device: R2E3U29-NS-MSDX-1-MSVC1; Model: SDX-11500; Role: Management Service; Version: NS9.3

Device: R2E3U29-NS-MSDX-1-VM-NS1; Model: SDX-11500; Role: Load Balancer; Version: NS9.3 Build e.nc
Device: R2E3U29-NS-MSDX-1-VM-BR1; Model: Repeater for NetScaler SDX; Role: ICA Acceleration
Device: R2E3U29-NS-MSDX-1-VM-BR2; Model: Repeater for NetScaler SDX; Role: ICA Acceleration
BRVPX Network Appliances
Device: R2E4U21-BRVPX-; Model: DL360 G6; Role: Hypervisor; Version: XS
Device: R2E4U21-BRVPX; Model: Branch Repeater V45; Role: ICA Acceleration
Repeater Network Device
Device: R3E2U34-BR1; Model: Repeater 8820, SM88 Series 3; Role: ICA Acceleration
Multi-Site Performance (Test Run)
BR 8820 Performance / Utilization
Appliance Hardware: SM88 Series 3

The BR 8820 performance indicates that 530 ICA sessions were successfully accelerated, and this is not the limit of the appliance, which stayed under 80% CPU load.
SDX VPX Instance for BR1 - Performance / Utilization
Appliance Hardware: Citrix Repeater for NetScaler SDX

SDX VPX for BR1 Compression Ratio (chart)
SDX VPX for BR1 CPU Utilization (chart)
During the 530 concurrent accelerated ICA sessions, the VPX instance on the NS SDX fully utilized its CPU, yet all of those sessions completed successfully.

BR VPX - Performance / Utilization
BR VPX CPU Utilization (chart)
As expected, the BR VPX running 162 sessions reached 100% CPU utilization; this is more than the 150 sessions at which 100% usage was anticipated.
BR VPX Accelerated ICA Connections (chart)

BR VPX Compression Ratio (chart)
Relative BR VPX CPU to Session Count (chart)
The chart above identifies the number of sessions that caused BR VPX CPU utilization to reach 100%.

SDX VPX Instance for BR VPX
SDX VPX for BR VPX ICA Compression Ratio (chart)
SDX VPX for BR VPX ICA Connections (chart)

SDX VPX for BR VPX CPU Utilization (chart)
The SDX VPX instance for BR VPX performance data shows that 162 ICA sessions drove the VPX instance to only 67% CPU utilization.


System Requirements. Version 8.2 November 23, 2015. For the most recent version of this document, visit our documentation website. System Requirements Version 8.2 November 23, 2015 For the most recent version of this document, visit our documentation website. Table of Contents 1 System requirements 3 2 Scalable infrastructure example

More information

Private Cloud Migration

Private Cloud Migration W H I T E P A P E R Infrastructure Performance Analytics Private Cloud Migration Infrastructure Performance Validation Use Case October 2012 Table of Contents Introduction 3 Model of the Private Cloud

More information

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V Installation Guide for Microsoft Hyper-V Egnyte Inc. 1890 N. Shoreline Blvd. Mountain View, CA 94043, USA Phone: 877-7EGNYTE (877-734-6983) www.egnyte.com 2013 by Egnyte Inc. All rights reserved. Revised

More information

High Availability for Citrix XenApp

High Availability for Citrix XenApp WHITE PAPER Citrix XenApp High Availability for Citrix XenApp Enhancing XenApp Availability with NetScaler Reference Architecture www.citrix.com Contents Contents... 2 Introduction... 3 Desktop Availability...

More information

Desktop Virtualization Made Easy Execution Plan

Desktop Virtualization Made Easy Execution Plan Consulting Solutions WHITE PAPER Citrix XenDesktop Desktop Virtualization Made Easy Execution Plan A desktop virtualization architecture guide for small to medium environments www.citrix.com Trying to

More information

VMware Virtual SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014

VMware Virtual SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Table of Contents Introduction.... 3 vsphere Architectural Overview... 4 SAN Backup

More information

Vblock Specialized System for Extreme Applications with Citrix XenDesktop 7.1

Vblock Specialized System for Extreme Applications with Citrix XenDesktop 7.1 www.vce.com Vblock Specialized System for Extreme Applications with Citrix XenDesktop 7.1 Version 1.1 August 2014 THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." VCE MAKES NO REPRESENTATIONS OR

More information