Reference Architecture Version 1.0 June 2015 RA-2022 Running Login VSI on Nutanix
Copyright 2015 Nutanix, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
Contents

1 Executive Summary
2 Introduction
2.1 Audience
2.2 Purpose
3 Nutanix Overview
3.1 What Is the Nutanix Architecture?
4 Application Overview
4.1 What is Citrix XenDesktop?
4.1.1 Deployment Scenario: Machine Creation Services (MCS)
4.1.2 Deployment Scenario: Provisioning Services (PVS)
4.2 Citrix XenDesktop the Nutanix Way
5 Solution Design
5.1 Desktop Optimizations
5.2 XenDesktop Machine Creation Services (MCS)
5.2.1 MCS Pod Design
5.2.2 Hosted Virtual Desktop I/O Path with MCS
5.3 XenDesktop Provisioning Services (PVS)
5.3.1 PVS Pod Design
5.3.2 PVS Store and Network Mapping
5.3.3 Streamed Desktop I/O Path with PVS
5.4 Nutanix: Compute and Storage
5.5 Network
6 Solution Application
6.1 Scenario: 400 Desktops
6.2 Scenario: 800 Desktops
6.3 Scenario: 1,600 Desktops
6.4 Scenario: 3,200 Desktops
6.5 Scenario: 6,400 Desktops
6.6 Scenario: 12,800 Desktops
6.7 Scenario: 25,600 Desktops
7 Validation and Benchmarking
7.1 Environment Overview
7.2 Login VSI Benchmark
7.3 How to Interpret the Results
8 Results
8.1 MCS: 360 Office Worker Desktops
8.2 MCS: 300 Knowledge Worker Desktops
8.3 PVS: 360 Office Worker Desktops
8.4 PVS: 300 Knowledge Worker Desktops
9 Further Research
10 Conclusion
11 Appendix: Configuration
12 References
12.1 Table of Figures
12.2 Table of Tables
13 About the Author
1 Executive Summary

This document makes recommendations for the design, optimization, and scaling of Citrix XenDesktop deployments on Nutanix. It shows the scalability of the Nutanix virtual computing platform and provides detailed performance and configuration information about the cluster's ability to scale when used for XenDesktop deployments.

We used Login VSI to simulate real-world workloads and the conditions of a XenDesktop environment using MCS and PVS on Nutanix. The sizing data and recommendations in this document derive from multiple testing iterations and technical validation. We completed the solution and testing with Citrix XenDesktop deployed on VMware vSphere, both running on the Nutanix virtual computing platform.

In a Citrix XenDesktop deployment on Nutanix, desktop user density is driven primarily by the available host CPU resources, not by I/O or other resource bottlenecks, for both MCS and PVS deployments. Login VSI Office Worker test results showed that densities of over 120 Office Worker desktops per node (with four nodes per 2U appliance) are possible. However, most VDI deployments fall into the Knowledge Worker category; the Login VSI Knowledge Worker test demonstrated that the platform can accommodate more than 100 desktops of this category per node, again with four nodes per 2U appliance. We determined sizing for the pods after carefully considering performance and after accounting for additional resources for N+1 failover capabilities.
2 Introduction

2.1 Audience

This reference architecture document is part of the Nutanix Solutions Library and is intended for those architecting, designing, managing, and supporting Nutanix infrastructures. Readers of this document should be familiar with VMware vSphere, Citrix XenDesktop, and Nutanix. We have organized this document to address the key items for each role that enable a successful design, implementation, and transition to operation.

2.2 Purpose

This document covers the following subject areas:

- Overview of the Nutanix solution.
- Overview of Citrix XenDesktop and its use cases.
- The benefits of Citrix XenDesktop on Nutanix.
- Architecting a complete Citrix XenDesktop solution on the Nutanix platform.
- Design and configuration considerations when architecting a Citrix XenDesktop solution on Nutanix.
- Benchmarking Citrix XenDesktop performance on Nutanix.
3 Nutanix Overview

3.1 What Is the Nutanix Architecture?

The Nutanix Virtual Computing Platform is a scale-out cluster of high-performance nodes, or servers, each running a standard hypervisor and containing processors, memory, and local storage consisting of SSD flash and high-capacity SATA disk drives. Each node runs virtual machines just like a standard virtual machine host.

Figure 1: Nutanix Node Architecture

In addition, the Nutanix Distributed File System (NDFS) virtualizes local storage from all nodes into a unified pool. In effect, NDFS acts like an advanced NAS that uses local SSDs and disks from all nodes to store virtual machine data. Virtual machines running on the cluster write data to NDFS as if they were writing to shared storage.

Figure 2: Nutanix Architecture

NDFS is VM-centric and provides advanced data management features. It brings data closer to virtual machines by storing the data locally on the system, resulting in higher performance at a lower cost. Nutanix web-scale converged infrastructure can horizontally scale from as few as three nodes to a large number of nodes, enabling organizations to scale their infrastructure as their needs grow.

The Nutanix Controller VM and NDFS deliver a unified pool of storage from all nodes across the cluster, using techniques including striping, replication, auto-tiering, error detection, failover, and automatic recovery. This pool is then presented as shared storage resources to Nutanix nodes or
hosts for seamless support of features including vMotion, HA, and DRS, along with industry-leading data management features. Additional nodes can be added in a plug-and-play manner in this high-performance scale-out architecture to build a cluster that adapts to meet the needs of the business.

The Nutanix Elastic Deduplication Engine is a software-driven, highly scalable, and intelligent data reduction technology. It increases the effective capacity in the disk tier, as well as the RAM and flash cache tiers of the system, by eliminating duplicate data. This substantially increases storage efficiency while improving performance due to larger effective cache capacity in RAM and flash. Deduplication is performed individually by each node in the cluster, allowing for efficient and uniform deduplication at scale: ingest data is fingerprinted at 16 KB granularity, only a single instance of duplicate VM data is stored on the cluster (maintaining the replication factor), and deduplicated data can be cached locally in the content cache. This technology is increasingly effective with full/persistent clones or P2V migrations. A conceptual sketch of this fingerprint-based approach appears at the end of this section.

Figure 3: Elastic Deduplication Engine

The NDFS Shadow Clone feature allows for distributed caching of vDisks, or VM data, in multi-reader scenarios. This allows VMs on each node to read the Base VM's vDisk locally instead of forwarding read requests to a master Base VM. In the case of VDI, this means the base disk can be cached by each node and all read requests for the base are served locally. Once NDFS determines that the Base VM's target vDisk is multi-reader, the vDisk is marked immutable and a shadow vDisk can then be cached on each local Controller VM (CVM). If the Base VM is modified, the Shadow Clones are dropped and the process starts over.

Figure 4: NDFS Shadow Clones
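The fingerprint-based deduplication described above can be illustrated with a minimal Python sketch: data is split into 16 KB chunks, each chunk is fingerprinted, and only the first instance of any fingerprint is stored. This is a conceptual model only, not Nutanix code; the SHA-1 fingerprint function and the in-memory dictionaries are illustrative assumptions.

```python
import hashlib

CHUNK_SIZE = 16 * 1024  # ingest data is fingerprinted at 16 KB granularity

def fingerprint(chunk: bytes) -> str:
    """Content fingerprint of one chunk (SHA-1 chosen here for illustration)."""
    return hashlib.sha1(chunk).hexdigest()

def dedupe(stream: bytes):
    """Store each unique chunk once; the logical layout keeps only references."""
    store = {}   # fingerprint -> stored chunk (single physical instance)
    refs = []    # logical chunk sequence, expressed as fingerprints
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        fp = fingerprint(chunk)
        store.setdefault(fp, chunk)  # only the first instance is stored
        refs.append(fp)
    return store, refs

# Three identical chunks plus one unique chunk:
data = b"A" * (CHUNK_SIZE * 3) + b"B" * CHUNK_SIZE
store, refs = dedupe(data)
print(len(refs), "logical chunks,", len(store), "stored")  # 4 logical chunks, 2 stored
```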
4 Application Overview

4.1 What is Citrix XenDesktop?

Citrix XenDesktop is a desktop virtualization solution that transforms desktops and applications into a secure, on-demand service available to any user, anywhere, on any device. With XenDesktop, you can deliver individual Windows, web, and SaaS applications, or full virtual desktops, to PCs, Macs, tablets, smartphones, laptops, and thin clients with a high-definition user experience.

Citrix XenDesktop provides a complete virtual desktop delivery system by integrating several distributed components with advanced configuration tools that simplify the creation and real-time management of the virtual desktop infrastructure. The core components of XenDesktop are:

- Desktop Delivery Controller: Installed on servers in the datacenter, the controller authenticates users, manages the assembly of users' virtual desktop environments, and brokers connections between users and their virtual desktops. It controls the state of the desktops, starting and stopping them based on demand and administrative configuration. In some editions, the Citrix license needed to run XenDesktop also includes Profile Management to manage user personalization settings in virtualized or physical Windows environments.
- Studio: Citrix Studio is the management console for configuring and managing your Citrix XenDesktop environment. It provides wizard-based deployment and configuration scenarios to publish resources as desktops or applications.
- Virtual Desktop Provisioning powered by Citrix Machine Creation Services: Machine Creation Services (MCS) is the building mechanism of the Citrix Desktop Delivery Controller that automates and orchestrates the deployment of desktops using a single image. MCS communicates with the orchestration layer of your hypervisor, providing a robust and flexible method of image management.
- Virtual Desktop Provisioning powered by Citrix Provisioning Services: Provisioning Services (PVS) creates and provisions virtual desktops from a single desktop image on demand, optimizing storage utilization and providing a pristine virtual desktop to each user every time they log on. Desktop provisioning also simplifies desktop images, provides optimal flexibility, and offers fewer points of desktop management for both applications and desktops.
- Virtual Desktop Agent: Installed on virtual desktops, the agent enables direct FMA (FlexCast Management Architecture) connections between the virtual desktop and user devices.
- Citrix Receiver: Installed on user devices, the Citrix Desktop Receiver enables direct ICA connections from user devices to virtual desktops.
- Citrix FlexCast: Citrix XenDesktop with FlexCast delivery technology lets you deliver virtual desktops and applications tailored to meet the diverse performance, security, and flexibility requirements of every worker in your organization through a single solution. Centralized, single-instance management helps you deploy, manage, and secure user desktops more easily and efficiently.

4.1.1 Deployment Scenario: Machine Creation Services (MCS)

Machine Creation Services provides images only to desktops virtualized on a hypervisor. The images are contained within the hypervisor pool and then thin-provisioned as needed. The thin-provisioned
virtual desktops use identity management functionality to overcome the new security identifier (SID) requirements typical with cloning. Machine Creation Services is integrated with and managed by the XenDesktop Controllers and uses the capabilities of the underlying hypervisor.

Figure 5: Machine Creation Services

MCS does not require additional servers; it uses integrated functionality built into Citrix XenServer, Microsoft Hyper-V, and VMware vSphere. Because MCS uses hypervisor functionality, it is only a viable option for desktops virtualized on a hypervisor. A master desktop image is created and maintained within the hypervisor pool. The XenDesktop Controller instructs the hypervisor to create a snapshot of the base image and thin-provision new virtual machines through the built-in hypervisor functions. However, thin-provisioning images often results in cloning issues, as each provisioned desktop has the same identity as the master. MCS uses special functionality within the XenDesktop Controller and the XenDesktop Virtual Desktop Agent (installed within the virtual desktop image) to build unique identities for each virtual machine; these identities are stored within the virtual desktop's identity disk. This functionality allows each virtual desktop to be unique even though it uses the same base image.

Figure 6: Machine Creation Services: vDisks
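As a conceptual illustration of the linked-clone model just described, the sketch below models a shared read-only base image with per-clone delta and identity data. All class names and fields are hypothetical, chosen only to show the relationships; this is not how MCS is implemented internally.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BaseImage:
    """Read-only master snapshot shared by every clone."""
    name: str
    size_gb: int

@dataclass
class LinkedClone:
    """A clone references the shared base; only its delta and identity are unique."""
    base: BaseImage
    identity: dict                             # machine name, domain identity, SID seed
    delta: dict = field(default_factory=dict)  # copy-on-write blocks

    def write(self, block: int, data: bytes) -> None:
        self.delta[block] = data               # writes land in the delta, never the base

master = BaseImage("win7-gold-snapshot", 35)
clones = [LinkedClone(master, {"name": f"VDI-{i:03d}"}) for i in range(3)]
clones[0].write(42, b"profile data")
print(all(c.base is master for c in clones))   # True: one base image, many clones
```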
4.1.2 Deployment Scenario: Provisioning Services (PVS)

Provisioning Services streaming technology allows computers to be provisioned and re-provisioned in real time from a single shared-disk image. Administrators manage all images on the master image instead of managing and patching individual systems. The local hard-disk drive of each system may be used for runtime data caching or, in some scenarios, removed from the system entirely, which reduces power usage, system failure rates, and security risks. Provisioning Services can stream these images to both virtual and physical devices.

Figure 7: Provisioning Services

The Provisioning Services solution's infrastructure is based on software-streaming technology. After installing and configuring Provisioning Services components, a vDisk is created from a device's hard drive by taking a snapshot of the OS and application image and then storing that image as a vDisk file on the network. The device used during this process is called a master target device, and the devices that use those vDisks are called target devices. vDisks can exist on a Provisioning Server, on a file share, or, in larger deployments, on a storage system with which the Provisioning Server can communicate (iSCSI, SAN, NAS, and CIFS). vDisks can be assigned to a single target device (Private Image Mode) or to multiple target devices (Standard Image Mode).

When a target device is turned on, it is set to start up from the network and to communicate with a Provisioning Server. Unlike thin-client technology, processing takes place on the target device (refer to Step 1 in Figure 8).

Figure 8: Provisioning Services: vDisks
4.2 Citrix XenDesktop the Nutanix Way

The Nutanix platform operates and scales Citrix XenDesktop MCS and PVS. Figure 9 shows the XenDesktop on Nutanix solution:

Figure 9: XenDesktop on Nutanix Conceptual Architecture

The Nutanix approach of modular scale-out enables customers to select any initial deployment size and grow in more granular data and desktop increments. Customers can realize a faster time-to-value for their XenDesktop implementation because this approach removes the hurdle of a large initial infrastructure purchase. The Nutanix solution is fully integrated with the VMware APIs for Array Integration (VAAI) and provides high-performance SSD flash, enabling you to deliver the best possible experience to the end user with the flexibility of a single modular platform.

Running Citrix XenDesktop on Nutanix enables you to run multiple workloads, all on the same scalable converged infrastructure, while achieving these benefits:

- Modular incremental scale: With the Nutanix solution you can start small and scale up. A single Nutanix block provides up to 20 TB of storage and 400 desktops in a compact 2U footprint. Given the modularity of the solution, you can granularly scale by node (up to approximately
5 TB/100 desktops); by block (up to approximately 20 TB/400 desktops); or with multiple blocks, giving you the ability to accurately match supply with demand and minimize the upfront capex.
- Integrated: The Nutanix platform provides full support for VAAI, allowing you to take advantage of all the latest advancements from VMware and optimize your VDI solution.
- High performance: By using memory caching for read I/O and flash storage for write I/O, you can deliver high-performance throughput in a compact 2U, 4-node cluster.
- Change management: Maintain environmental control and separation between development, test, staging, and production environments. Snapshots and fast clones can help in sharing production data with non-production jobs, without requiring full copies and unnecessary data duplication.
- Business continuity and data protection: User data and desktops are mission critical and need enterprise-grade data management features, including backup and DR. Nutanix provides data management features that can be used the same way they would be for virtual environments.
- Data efficiency: The Nutanix solution is truly VM-centric for all compression policies. Unlike traditional solutions that perform compression mainly at the LUN level, the Nutanix solution provides all of these capabilities at the VM and file level, greatly increasing efficiency and simplicity. These capabilities ensure the highest possible compression and decompression performance on a sub-block level. By allowing for both inline and post-process compression, the Nutanix solution breaks the bounds set by traditional compression solutions.
- Enterprise-grade cluster management: A simplified and intuitive, Apple-like approach to managing large clusters, including a converged GUI that serves as a central point for servers and storage, alert notifications, and the Bonjour mechanism to auto-detect new nodes in the cluster. As a result, you can spend more time enhancing your environment rather than maintaining it.
- High-density architecture: Nutanix uses an advanced server architecture in which eight Intel CPUs (up to 96 cores) and up to 3 TB of memory are integrated into a single 2U appliance. Coupled with data archiving and compression, Nutanix can reduce desktop hardware footprints by up to 5x.
- Time-sliced clusters: Like public cloud EC2 environments, Nutanix can provide a truly converged cloud infrastructure, allowing you to run your server and desktop virtualization on a single converged cloud. Get the efficiency and savings you require with a converged cloud on a truly converged architecture.
5 Solution Design

With the Citrix XenDesktop on Nutanix solution, you gain the flexibility to start small with a single block and scale up incrementally a node, a block, or multiple blocks at a time. This provides the best of both worlds: the ability to start small and grow to a larger scale without any impact on performance. In the following section we cover the design decisions and rationale for the XenDesktop deployments on the Nutanix Complete Cluster.

Table 1: Solution Design Decisions

| Item | Detail | Rationale |
|---|---|---|
| General | | |
| Minimum Size | 1 x Nutanix block (4 hosts) | Minimum size requirement |
| Scale Approach | Incremental modular scale | Allows for growth from POC (hundreds of desktops) to massive scale (thousands of desktops) |
| Scale Unit | Node(s), block(s), or pod(s) | Granular scale to precisely meet capacity demands; scale in n x node increments |
| VMware vSphere | | |
| Cluster Size | Up to 12-32 vSphere hosts (minimum of 3 hosts) | Isolated fault domains; VMware best practice |
| Clusters per vCenter | Up to 2 x 24 or 4 x 12 host clusters | Task parallelization |
| Datastore(s) | 1 x Nutanix DFS datastore per pod (XenDesktop Server VMs, Provisioning Services Store, VM clones, VAAI clones, etc.); max 2,000 machines per container | Nutanix handles I/O distribution/localization (n-controller model) |
| Infrastructure Services | Small deployments: shared cluster; large deployments: dedicated cluster | Dedicated infrastructure cluster for larger deployments (best practice) |
| Nutanix | | |
| Cluster Size | Up to 16 nodes | Isolated fault domains |
| Storage Pool(s) | 1 x storage pool (PCIe SSD, SATA SSD, SATA HDD) | Standard practice; ILM handles tiering |
| Container(s) | 1 x container for VMs; 1 x container for data (not used here) | Standard practice |
| Features/Enhancements | Increase CVM memory to 32 GB; turn on MapReduce dedupe | MapReduce dedupe requires 32 GB of CVM RAM to be enabled |
| Citrix XenDesktop | | |
| XenDesktop Controllers | Min: 2 (n+1); scale: 1 per additional pod | HA for XenDesktop Controllers |
| Users per Controller | Up to 5,000 users | XenDesktop best practice |
| Load Balancing | Citrix NetScaler | Ensures availability of controllers; balances load between controllers and pods |
| Citrix Provisioning Services | | |
| PVS Servers | Min: 2 (n+1); scale: 1 per additional pod | HA for PVS servers |
| Users per PVS Server | Up to 1,500 streams | PVS best practice |
| Load Balancing | Provisioning Services farm | Ensures availability of PVS servers; balances load between PVS servers and pods |
| vDisk Store | Dedicated disk on Nutanix | Standard practice |
| Write Cache | On local hard drive | Best practice if the storage can provide enough I/O |
| Citrix StoreFront | | |
| StoreFront Servers | Min: 2 (n+1) | HA for StoreFront servers |
| Load Balancing | Citrix NetScaler | Ensures availability of StoreFront servers; balances load between StoreFront servers |
| Citrix NetScaler (if used) | | |
| NetScaler Servers | Min: 2 | HA for NetScaler (active/passive) |
| Users per NetScaler | See product data sheet | Varies per model |
| Server Load Balancing | NetScaler HA | Ensures availability of NetScaler servers; balances load between NetScaler servers and pods |
Highlights from a high-level snapshot of the Citrix XenDesktop on Nutanix pod are shown in Table 2.

Table 2: Pod Highlights

| Item | Qty |
|---|---|
| Control Pod | |
| # of vCenter Server(s) | 1 |
| # of XenDesktop Controller(s) | 2 |
| # of XenDesktop StoreFront Server(s) | 2 |
| Services Pod | |
| # of Nutanix Blocks | Up to 4 |
| # of Hosts | Up to 32 |
| # of Nutanix Cluster(s) | 1 |
| # of Datastore(s) | 1 |

Figure 10: XenDesktop Pod Overview

The section below describes the desktop sizing and considerations for hosted virtual and streamed desktops. The following are examples of typical scenarios for desktop deployment and use, based on the Login VSI definitions.
Table 3: Desktop Scenario Definitions

| Scenario | Definition |
|---|---|
| Task Workers | Task workers and administrative workers perform repetitive tasks within a small set of applications, usually at a stationary computer. The applications are usually not as CPU- and memory-intensive as the applications used by knowledge workers. Task workers who work specific shifts might all log in to their virtual desktops at the same time. Task workers include call center analysts, retail employees, and warehouse workers. |
| Knowledge Workers | Knowledge workers' daily tasks include accessing the Internet, using email, and creating complex documents, presentations, and spreadsheets. Knowledge workers include accountants, sales managers, and marketing research analysts. |
| Power Users | Power users include application developers and people who use graphics-intensive applications. |

Table 4 proposes some initial recommendations for sizing a Windows 7 desktop. Note: these are sizing recommendations and should be modified after a current state analysis. A sketch showing how these per-desktop figures roll up into aggregate resource demand follows Section 5.1.

Table 4: Desktop Scenario Sizing

| Scenario | vCPU | Memory | Disks |
|---|---|---|---|
| Task Workers | 1 | 1.5 GB | 35 GB (OS) |
| Knowledge Workers | 2 | 2 GB | 35 GB (OS) |
| Power Users | 2 | 4 GB | 35 GB+ (OS) |

5.1 Desktop Optimizations

We used the following high-level desktop optimizations for this design:

- Size desktops appropriately for each particular use case.
- Use a mix of applications installed in gold images and application virtualization, depending on the scenario.
- Disable unnecessary OS services and applications.
- Redirect home directories or use a profile management tool for user profiles and documents.

For more detail on desktop optimizations, refer to the Citrix XenDesktop Windows 7 Optimization Guide document on http://support.citrix.com/
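To show how the per-desktop figures in Table 4 roll up, here is a minimal Python sketch that aggregates raw resource demand for a desktop mix. It is a back-of-envelope aid using only the table values above; it does not model CPU overcommit, hypervisor or CVM overhead, or N+1 failover headroom.

```python
# Per-desktop sizing from Table 4 (vCPU, memory in GB, OS disk in GB).
PROFILES = {
    "task":      {"vcpu": 1, "mem_gb": 1.5, "disk_gb": 35},
    "knowledge": {"vcpu": 2, "mem_gb": 2.0, "disk_gb": 35},
    "power":     {"vcpu": 2, "mem_gb": 4.0, "disk_gb": 35},
}

def aggregate(demand: dict) -> dict:
    """Sum raw resource demand for a mix like {'knowledge': 300, 'task': 100}."""
    totals = {"vcpu": 0, "mem_gb": 0.0, "disk_gb": 0}
    for profile, count in demand.items():
        for key in totals:
            totals[key] += PROFILES[profile][key] * count
    return totals

print(aggregate({"knowledge": 300, "task": 100}))
# {'vcpu': 700, 'mem_gb': 750.0, 'disk_gb': 14000}
```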
5.2 XenDesktop Machine Creation Services (MCS)

Citrix Machine Creation Services uses a standardized model for hosted virtual desktop creation. Starting from a base, or master, VM, MCS creates clone VMs, each consisting of a delta disk and an identity disk that link back to the base VM's disks. Figure 11 shows the main architectural components of an MCS deployment on Nutanix and the communication paths between services.

Figure 11: MCS Communication

5.2.1 MCS Pod Design

Table 5 shows highlights from a high-level snapshot of the Citrix XenDesktop on Nutanix hosted virtual desktop pod.

Table 5: MCS Pod Detail

| Item | Qty |
|---|---|
| Control Pod | |
| # of vCenter Server(s) | 1 |
| # of XenDesktop Controller(s) | 2 |
| # of XenDesktop StoreFront Server(s) | 2 |
| Services Pod | |
| # of Nutanix Blocks | Up to 4 |
| # of Hosts | Up to 16 |
| # of Nutanix Cluster(s) | 1 |
| # of Datastore(s) | 1 |
| # of Desktops | Up to 1,200 |

Figure 12: MCS Pod Detail

5.2.2 Hosted Virtual Desktop I/O Path with MCS

Figure 13 describes the high-level I/O path for an MCS-based desktop on Nutanix. As shown, all I/O operations are handled by NDFS and occur on the local node to provide the highest possible I/O performance. Read requests for the master VM occur locally for desktops hosted on the same node and over 10 GbE for desktops hosted on another node.
Figure 13: MCS I/O Overview

Figure 14 describes the detailed I/O path for an MCS-based desktop on Nutanix. All write I/O occurs on the local node's SSD tier to provide the highest possible performance. Read requests for the master VM occur locally for desktops hosted on the same node and over 10 GbE for desktops hosted on another node. These reads are served from the high-performance read cache (if cached) or from the SSD tier. Each node also caches frequently accessed local data (delta disks and personal vDisks, if used) in the read cache. Nutanix ILM continually monitors data and I/O patterns to choose the appropriate tier placement.

Figure 14: MCS I/O Detail
5.3 XenDesktop Provisioning Services (PVS)

Citrix Provisioning Services streams desktops over the network from a centralized store of master vDisks (OS images). These vDisks are stored by the PVS server and are delivered by the Citrix Stream Service. During startup, the streamed desktop pulls its configuration using PXE/TFTP and then initiates communication with the PVS server to continue starting from the vDisk. Figure 15 shows the main architectural components of a PVS deployment on Nutanix and the communication paths between services.

Figure 15: PVS Communication

5.3.1 PVS Pod Design

Table 6 highlights a high-level snapshot of the Citrix XenDesktop on Nutanix streamed desktop pod.

Table 6: PVS Pod Detail

| Item | Qty |
|---|---|
| Control Pod | |
| # of vCenter Server(s) | 1 |
| # of XenDesktop Controller(s) | 2 |
| # of XenDesktop StoreFront Server(s) | 2 |
| # of PVS Server(s) | 2 |
| Services Pod | |
| # of Nutanix Blocks | Up to 4 |
| # of Hosts | Up to 16 |
| # of Nutanix Cluster(s) | 1 |
| # of Datastore(s) | 1 |
| # of Desktops | Up to 1,200 |

Figure 16: PVS Pod Detail

5.3.2 PVS Store and Network Mapping

Figure 17 shows the mapping for the PVS server's storage and network. In this case we used dedicated interfaces for both PVS server management and Stream Services.
Figure 17: PVS Component Mapping

5.3.3 Streamed Desktop I/O Path with PVS

Figure 18 describes the high-level I/O path for a streamed desktop on Nutanix. All write I/O operations are handled by NDFS and occur on the local node to provide the highest possible I/O performance. Streamed desktops hosted on the same server as the PVS host are handled by the host's local vSwitch and do not use the external network.

Figure 18: PVS I/O Overview

Figure 19 describes the detailed I/O path for a streamed desktop on Nutanix. All write I/O (write cache or personal vDisks, if used) occurs on the local node's SSD tier to provide the highest possible performance. The PVS server's vDisk store is hosted on the local node's SSD tier and is also cached in memory. All read requests from the streamed desktop are then streamed either from the PVS server's memory or from its vDisk store, which is hosted on NDFS. Each node caches frequently accessed local data (write cache and personal vDisks) in the read cache. Nutanix ILM continually monitors data and I/O patterns to choose the appropriate tier placement.
Figure 19: PVS I/O Detail

5.4 Nutanix: Compute and Storage

The Nutanix virtual computing platform provides an ideal combination of high-performance compute and localized storage to meet any demand. True to this capability, this reference architecture contains zero reconfiguration of or customization to the Nutanix product to optimize for this use case. Figure 20 shows a high-level example of the relationship between a Nutanix block, node, storage pool, and container.

Figure 20: Nutanix Component Architecture
Table 7 shows the Nutanix storage pool and container configuration.

Table 7: Nutanix Storage Configuration

| Name | Role | Details |
|---|---|---|
| SP01 | Main storage pool for all data | PCIe SSD, SATA SSD, SATA HDD |
| CTR-RF2-VM-01 | Container for all VMs | Datastore |
| CTR-RF2-DATA-01 | Container for all data (not used here) | Datastore |
5.5 Network

Designed for true linear scaling, this solution uses a leaf-spine network architecture. A leaf-spine architecture consists of two network tiers: an L2 leaf tier and an L3 spine tier built on 40 GbE, non-blocking switches. This architecture maintains consistent performance without any throughput reduction, thanks to a static maximum of three hops from any node in the network. Figure 21 shows the design of a scaled-out leaf-spine network architecture, which provides 20 Gb of active throughput from each node to its L2 leaf and scalable 80 Gb of active throughput from each leaf to each spine switch, providing scale from one Nutanix block to thousands without any impact on available bandwidth.

Figure 21: Leaf-Spine Network Architecture
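A quick way to sanity-check such a design is to compare southbound node bandwidth against leaf uplink capacity. The sketch below is a minimal illustration using the 20 Gb per node and 80 Gb per leaf figures above; the nodes-per-leaf counts are hypothetical.

```python
# Back-of-envelope leaf-spine oversubscription check.
NODE_TO_LEAF_GBPS = 20   # active throughput from each node to its L2 leaf
LEAF_TO_SPINE_GBPS = 80  # active throughput from each leaf to the spine

def oversubscription(nodes_per_leaf: int) -> float:
    """Ratio of worst-case southbound demand to northbound uplink capacity."""
    southbound = nodes_per_leaf * NODE_TO_LEAF_GBPS
    return southbound / LEAF_TO_SPINE_GBPS

for n in (4, 8, 16):
    print(f"{n} nodes per leaf -> {oversubscription(n):.1f}:1 oversubscription")
# 4 -> 1.0:1 (non-blocking), 8 -> 2.0:1, 16 -> 4.0:1
```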
6 Solution Application

This section applies the Nutanix pod-based reference architecture to real-world scenarios and outlines the sizing metrics and components. Note: detailed hardware configurations and product models can be found in the appendix. A sketch generalizing the component counts across all of the scenarios below follows Section 6.7.

6.1 Scenario: 400 Desktops

Table 8: Detailed Component Breakdown: 400 Desktops

| Components | Value |
|---|---|
| # of Nutanix Desktop Pods | 1 (partial) |
| # of Nutanix Blocks | 1 |
| # of RU (Nutanix) | 2 |
| # of 10 GbE Ports | 8 |
| # of 100/1000 Ports (IPMI) | 4 |
| # of L2 Leaf Switches | 2 |
| # of L3 Spine Switches | 1 |

| Infrastructure | Value |
|---|---|
| # of vCenter Servers | 1 |
| # of Hosts | 4 |
| # of vSphere Clusters | 1 |
| # of Datastore(s) | 1 |

Figure 22: Rack Layout: 400 Desktops
6.2 Scenario: 800 Desktops

Table 9: Detailed Component Breakdown: 800 Desktops

| Components | Value |
|---|---|
| # of Nutanix Desktop Pods | 1 (partial) |
| # of Nutanix Blocks | 2 |
| # of RU (Nutanix) | 4 |
| # of 10 GbE Ports | 16 |
| # of 100/1000 Ports (IPMI) | 8 |
| # of L2 Leaf Switches | 2 |
| # of L3 Spine Switches | 1 |

| Infrastructure | Value |
|---|---|
| # of vCenter Servers | 1 |
| # of Hosts | 8 |
| # of vSphere Clusters | 1 |
| # of Datastore(s) | 1 |

Figure 23: Rack Layout: 800 Desktops
6.3 Scenario: 1,600 Desktops

Table 10: Detailed Component Breakdown: 1,600 Desktops

| Components | Value |
|---|---|
| # of Nutanix Desktop Pods | 1 |
| # of Nutanix Blocks | 4 |
| # of RU (Nutanix) | 8 |
| # of 10 GbE Ports | 32 |
| # of 100/1000 Ports (IPMI) | 16 |
| # of L2 Leaf Switches | 2 |
| # of L3 Spine Switches | 2 |

| Infrastructure | Value |
|---|---|
| # of vCenter Servers | 1 |
| # of Hosts | 16 |
| # of vSphere Clusters | 2 |
| # of Datastore(s) | 1 |

Figure 24: Rack Layout: 1,600 Desktops
6.4 Scenario: 3,200 Desktops

Table 11: Detailed Component Breakdown: 3,200 Desktops

| Components | Value |
|---|---|
| # of Nutanix Desktop Pods | 2 |
| # of Nutanix Blocks | 8 |
| # of RU (Nutanix) | 16 |
| # of 10 GbE Ports | 64 |
| # of 100/1000 Ports (IPMI) | 32 |
| # of L2 Leaf Switches | 2 |
| # of L3 Spine Switches | 2 |

| Infrastructure | Value |
|---|---|
| # of vCenter Servers | 1 |
| # of Hosts | 32 |
| # of vSphere Clusters | 1 |
| # of Datastore(s) | 2 |

Figure 25: Rack Layout: 3,200 Desktops
6.5 Scenario: 6,400 Desktops

Table 12: Detailed Component Breakdown: 6,400 Desktops

| Components | Value |
|---|---|
| # of Nutanix Desktop Pods | 4 |
| # of Nutanix Blocks | 16 |
| # of RU (Nutanix) | 32 |
| # of 10 GbE Ports | 128 |
| # of 100/1000 Ports (IPMI) | 64 |
| # of L2 Leaf Switches | 4 |
| # of L3 Spine Switches | 2 |

| Infrastructure | Value |
|---|---|
| # of vCenter Servers | 1 |
| # of Hosts | 64 |
| # of vSphere Clusters | 2 |
| # of Datastore(s) | 4 |

Figure 26: Rack Layout: 6,400 Desktops
6.6 Scenario: 12,800 Desktops

Table 13: Detailed Component Breakdown: 12,800 Desktops

| Components | Value |
|---|---|
| # of Nutanix Desktop Pods | 8 |
| # of Nutanix Blocks | 32 |
| # of RU (Nutanix) | 64 |
| # of 10 GbE Ports | 256 |
| # of 100/1000 Ports (IPMI) | 128 |
| # of L2 Leaf Switches | 8 |
| # of L3 Spine Switches | 2 |

| Infrastructure | Value |
|---|---|
| # of vCenter Servers | 2 |
| # of Hosts | 128 |
| # of vSphere Clusters | 8 |
| # of Datastore(s) | 8 |

Figure 27: Rack Layout: 12,800 Desktops
6.7 Scenario: 25,600 Desktops

Table 14: Detailed Component Breakdown: 25,600 Desktops

| Components | Value |
|---|---|
| # of Nutanix Desktop Pods | 16 |
| # of Nutanix Blocks | 64 |
| # of RU (Nutanix) | 128 |
| # of 10 GbE Ports | 512 |
| # of 100/1000 Ports (IPMI) | 256 |
| # of L2 Leaf Switches | 14 |
| # of L3 Spine Switches | 2 |

| Infrastructure | Value |
|---|---|
| # of vCenter Servers | 2 |
| # of Hosts | 256 |
| # of vSphere Clusters | 16 |
| # of Datastore(s) | 16 |

Figure 28: Rack Layout: 25,600 Desktops
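The component counts in Tables 8-14 scale linearly with desktop count. The following Python sketch generalizes them; the per-node density and pod ratios are read off the tables above, while switch and vCenter counts are omitted because they step irregularly across the scenarios.

```python
import math

def components(desktops: int) -> dict:
    """Reproduce the core counts from the scenario tables above."""
    hosts = math.ceil(desktops / 100)        # ~100 desktops per node
    blocks = math.ceil(hosts / 4)            # 4 nodes per block
    return {
        "pods": math.ceil(desktops / 1600),  # 1 pod = up to 4 blocks / 16 hosts
        "blocks": blocks,
        "hosts": hosts,
        "rack_units": blocks * 2,            # each block is 2U
        "ports_10gbe": hosts * 2,            # 2 x 10 GbE per node
        "ports_ipmi": hosts,                 # 1 IPMI port per node
    }

print(components(1600))
# {'pods': 1, 'blocks': 4, 'hosts': 16, 'rack_units': 8,
#  'ports_10gbe': 32, 'ports_ipmi': 16}  -- matches Table 10
```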
7 Validation and Benchmarking

The solution and testing described in this document were completed with Citrix XenDesktop 7.6 deployed on VMware vSphere 5.5 on the Nutanix virtual computing platform. We used the Login VSI Office Worker and Knowledge Worker benchmarks to characterize desktop performance for these user types on the Nutanix appliance.

7.1 Environment Overview

One node of an existing Nutanix NX-3400 was used to host all infrastructure and XenDesktop services, as well as the Login VSI test harness. The three remaining nodes in the Nutanix NX-3400 were used as the target environment and provided all desktop hosting. The Nutanix block was connected to an Arista 7050S top-of-rack switch using 10 GbE.

Figure 29: Test Environment Overview
Test Environment Configuration

Assumptions:

- Knowledge Worker use case
- Per-desktop IOPS (Office Worker): 5 sustained / 70 peak (startup)
- Per-desktop IOPS (Knowledge Worker): 10 sustained / 70 peak (startup)
- Using both MCS and PVS

A sketch rolling these per-desktop IOPS figures up to aggregate numbers follows this configuration list.

Hardware:

- Storage and compute: 1 x Nutanix NX-3400
- Network: Arista 7050Q (L3 spine) / 7050S (L2 leaf) series switches

Desktop configuration:

- OS: Windows 7 SP1 x86
- 2 vCPU and 2 GB memory
- 1 x 35 GB OS disk
- Applications: Microsoft Office 2013, Adobe Acrobat Reader XI, Internet Explorer, Flash video

Login VSI:

- Login VSI 4.1 Professional
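The sketch below rolls the per-desktop IOPS assumptions above up to aggregate figures. The number of concurrently booting desktops is a hypothetical parameter, not a measured value.

```python
# Aggregate-IOPS estimate from the per-desktop assumptions above.
SUSTAINED_IOPS = {"office": 5, "knowledge": 10}
PEAK_STARTUP_IOPS = 70

def aggregate_iops(profile: str, desktops: int, concurrent_boots: int) -> dict:
    return {
        "sustained": SUSTAINED_IOPS[profile] * desktops,
        "boot_storm": PEAK_STARTUP_IOPS * concurrent_boots,
    }

# Example: 360 Office Worker desktops with 50 booting at once (assumed).
print(aggregate_iops("office", 360, 50))
# {'sustained': 1800, 'boot_storm': 3500}
```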
XenDesktop configuration: Table 15 shows the XenDesktop configuration used in the test environment.

Table 15: XenDesktop Configuration

| VM | Qty | vCPU | Memory | Disks |
|---|---|---|---|---|
| XenDesktop Controller(s) | 2 | 4 | 8 GB | 1 x 40 GB (OS) |
| PVS Server(s) | 2 | 4 | 16 GB | 1 x 40 GB (OS), 1 x 250 GB (Store) |
| StoreFront Server(s) | 2 | 4 | 4 GB | 1 x 40 GB (OS) |
Test image preparation: MCS

1. Create the base VM.
2. Install Windows 7.
3. Install standard software.
4. Optimize Windows 7.
5. Add the machine to the domain.
6. Install the Citrix VDA.
7. Install the Login VSI components.
8. Create a snapshot.
9. Create clones using the XenDesktop Create Machine Catalog wizard.

Test image preparation: PVS

1. Create the base VM.
2. Install Windows 7.
3. Install standard software.
4. Optimize Windows 7.
5. Install the PVS target device software.
6. Create the vDisk.
7. Set the BIOS to start up from PXE.
8. Remove the VMDK.
9. Boot the VM from the vDisk (Private Mode).
10. Add the machine to the domain.
11. Install the Citrix VDA.
12. Install the Login VSI components.
13. Create a disk for the write cache.
14. Convert to a template.
15. Convert the vDisk (Standard Mode).
16. Set the cache to local disk.
17. Create clones using the XenDesktop Setup wizard.

Test execution

- Restart/turn on the desktops.
- Restart/start the Login VSI launcher(s).
- Log in to the VSI management console.
- Set the test parameters and number of sessions.
- Start the test.
- Wait for the test execution to finish.
- Analyze the results (Login VSI).
7.2 Login VSI Benchmark

Login Virtual Session Indexer (Login VSI) is the de facto industry-standard benchmarking tool for testing the performance and scalability of centralized Windows desktop environments such as server-based computing (SBC) and virtual desktop infrastructure (VDI). Login VSI is 100 percent vendor independent and is used to test virtual desktop environments like Citrix XenDesktop and XenApp, Microsoft VDI and Remote Desktop Services, VMware View, or any other Windows-based SBC or VDI solution. Login VSI is used for testing and benchmarking by all major hardware and software vendors and is recommended by both leading IT analysts and the technical community. Because Login VSI is vendor independent and works with standardized user workloads, conclusions based on Login VSI test data are objective, verifiable, and replicable. For more information about Login VSI, visit http://www.loginvsi.com/

Table 16 includes all four workloads available in Login VSI 4.1.

Table 16: Login VSI 4.1 Workloads

| Task Worker | Office Worker | Knowledge Worker | Power User |
|---|---|---|---|
| Light | Medium | Medium | Heavy |
| 1 vCPU | 1 vCPU | 2 vCPUs | 2-4 vCPUs |
| 2-3 apps | 4-6 apps | 4-7 apps | 5-9 apps |
| No video | 240p video | 360p video | 720p video |

Login VSI Workflows

The Login VSI workflow base layout is captured in the Login VSI 4.1 Workloads document, which also documents the changes from previous versions of Login VSI to version 4.1 in great detail.

Table 17: Login VSI Workload Definitions

| Workload Name | Light | Medium | Heavy | Task Worker | Office Worker | Knowledge Worker | Power User |
|---|---|---|---|---|---|---|---|
| VSI Version | 4 | 4 | 4 | 4.1 | 4.1 | 4.1 | 4.1 |
| Apps Open | 2 | 5-7 | 8-10 | 2-7 | 5-8 | 5-9 | 8-12 |
| CPU Usage | 66% | 99% | 124% | 70% | 82% | 100% | 119% |
| Disk Reads | 52% | 93% | 89% | 79% | 90% | 100% | 133% |
| Disk Writes | 65% | 97% | 94% | 77% | 101% | 100% | 123% |
| IOPS | 5.2 | 7.4 | 7 | 6 | 8.1 | 8.5 | 10.8 |
| Memory | 1 GB | 1 GB | 1 GB | 1 GB | 1.5 GB | 1.5 GB | 2 GB |
| vCPU | 1 | 2 | 2 | 1 | 1 | 2 | 2+ |
7.3 How to Interpret the Results

Login VSI

Login VSI is a benchmark used to simulate a real-world user workload on a desktop. Its values represent the time it takes for an application or task to complete (for example, launching Outlook). They do not refer to the network round-trip time (RTT), but rather to the total time needed to perform an action on the desktop. During the test, all VMs are turned on and the workload is started on a new desktop every 30 seconds until all sessions and workloads are active.

Evaluation is quantified using the following metrics:

- Minimum Response: The minimum application response time.
- Average Response: The average application response time.
- Maximum Response: The maximum application response time.
- VSI Baseline: The average application response time of the first 15 sessions.
- VSI Index Average: The average response time after dropping the highest and lowest 2 percent.
- VSImax: If reached, the maximum number of sessions launched before the VSI Index Average rises above the VSI Baseline x 125 percent + 3,000 ms (see the sketch after Table 18).

Based on user experience and industry standards, we recommend keeping these values below the thresholds stated in Table 18.

Table 18: Login VSI Metric Values

| Metric | Value (ms) | Rationale |
|---|---|---|
| Minimum Response | <1,000 | Acceptable ideal response time |
| Average Response | <2,000 | Acceptable average response time |
| Maximum Response | <3,000 | Acceptable peak response time |
| VSI Baseline | <1,000 | Acceptable ideal response time |
| VSI Index Average | <2,000 | Acceptable average response time |
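Because the VSImax trigger derives directly from the baseline, the threshold for any run is easy to compute. A minimal sketch, using the baselines reported in Section 8:

```python
# VSImax threshold as defined above: VSI Baseline x 125 percent + 3,000 ms.
def vsimax_threshold(baseline_ms: float) -> float:
    return baseline_ms * 1.25 + 3000

# Baselines reported for the four runs in Section 8:
for run, baseline in [("MCS Office Worker", 1096), ("MCS Knowledge Worker", 979),
                      ("PVS Office Worker", 1127), ("PVS Knowledge Worker", 721)]:
    print(f"{run}: VSImax would trigger above {vsimax_threshold(baseline):.0f} ms")
# e.g. MCS Office Worker: 1096 * 1.25 + 3000 = 4370 ms
```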
Login VSI Graphs

The Login VSI graphs show the values defined in Table 18 during the launch of each desktop session. Figure 30 shows an example graph of the test data. The y-axis is the response time in ms and the x-axis is the number of active sessions.

Figure 30: Example Graph of a Login VSI Test
8 Results

8.1 MCS: 360 Office Worker Desktops

Login VSI Office Worker Results

During the testing with 360 desktops, VSImax was not reached, with a baseline of 1,096 ms and an average VSI index of 2,006 ms.

Figure 31: Login VSI 360 Office Worker Desktops

Cluster Metrics

Figure 32 shows user sessions over time, measured with Splunk and UberAgent.

Figure 32: User Sessions over Time

Average logon duration over time during the test was measured with Splunk and UberAgent. The scale is from 0-20 seconds:
Figure 33: Average Logon Duration

At the peak of the test execution, CPU utilization for the hosts peaked at 99.96 percent and memory utilization peaked at approximately 71.28 percent:

Figure 34: Peak CPU Utilization for Hosts

Nutanix Datastore Metrics

IOPS peaked at approximately 3,458 during the high-volume startup period to refresh the desktops; the peak during the test itself was a little above 5,059:

Figure 35: Peak Cluster IOPS

Command latency peaked at approximately 9.91 ms during the tests:

Figure 36: Peak Command Latency
8.2 MCS: 300 Knowledge Worker Desktops

Login VSI Knowledge Worker Results

During the testing with 300 desktops, VSImax was not reached, with a baseline of 979 ms and an average VSI index of 1,169 ms.

Figure 37: Login VSI Knowledge Worker Results

Cluster Metrics

User sessions over time, measured with Splunk and UberAgent:

Figure 38: User Sessions over Time

Average logon duration over time during the test, measured with Splunk and UberAgent; the scale is from 0-20 seconds:

Figure 39: Average Logon Duration over Time

At the peak of the test execution, CPU utilization for the hosts peaked at 98.2 percent and memory utilization peaked at approximately 82.53 percent.
Figure 40: Peak CPU Utilization

Nutanix Datastore Metrics

IOPS peaked at approximately 4,788 during the high-volume startup period to refresh the desktops; the peak during the test itself was 4,837.

Figure 41: Peak IOPS

Command latency peaked at approximately 4.32 ms during the tests:

Figure 42: Command Latency Peak
8.3 PVS: 360 Office Worker Desktops

Login VSI Office Worker Results

During the testing with 360 desktops, VSImax was not reached, with a baseline of 1,127 ms and an average VSI index of 1,791 ms:

Figure 43: Login VSI Office Worker Results

Cluster Metrics

User sessions over time, measured with Splunk and UberAgent:

Figure 44: User Sessions over Time

Average logon duration over time during the test, measured with Splunk and UberAgent. The scale is from 0-20 seconds:

Figure 45: Average Logon Duration
At the peak of the test execution, CPU utilization for the hosts peaked at 99.36 percent and memory utilization peaked at approximately 71.67 percent:

Figure 46: Peak CPU Utilization

Nutanix Datastore Metrics

IOPS peaked at 3,743 during the high-volume startup period to refresh the desktops; the peak during the test itself was a little above 4,807:

Figure 47: IOPS Volume

Command latency peaked at approximately 10.31 ms during the tests:

Figure 48: Command Latency Peak
8.4 PVS: 300 Knowledge Worker Desktops

Login VSI Knowledge Worker Results

During the testing with 300 desktops, VSImax was not reached, with a baseline of 721 ms and an average VSI index of 1,479 ms:

Figure 49: Knowledge Worker Results

Cluster Metrics

User sessions over time, measured with Splunk and UberAgent:

Figure 50: User Sessions over Time

Average logon duration over time during the test, measured with Splunk and UberAgent; the scale is from 0-20 seconds:

Figure 51: Average Logon Duration
At the peak of the test execution, CPU utilization for the hosts peaked at 99.42 percent and memory utilization peaked at approximately 83.93 percent:

Figure 52: Peak CPU Utilization

Nutanix Datastore Metrics

IOPS peaked at approximately 5,975 during the high-volume startup period to refresh the desktops; the peak during the test itself was 4,542:

Figure 53: IOPS Volume

Command latency peaked at approximately 4.18 ms during the tests:

Figure 54: Peak Command Latency
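The per-node densities quoted in the Executive Summary follow directly from these runs: desktops ran on three of the NX-3400's four nodes, with one node reserved for infrastructure and the test harness. A small sketch of that arithmetic:

```python
# Desktops ran on 3 of the NX-3400's 4 nodes; blocks hold 4 nodes in 2U.
RUNS = {"MCS Office": 360, "MCS Knowledge": 300,
        "PVS Office": 360, "PVS Knowledge": 300}
DESKTOP_NODES = 3
NODES_PER_BLOCK = 4

for run, desktops in RUNS.items():
    per_node = desktops / DESKTOP_NODES
    print(f"{run}: {per_node:.0f}/node -> {per_node * NODES_PER_BLOCK:.0f} per 2U block")
# 360 desktops -> 120/node (480 per block); 300 -> 100/node (400 per block)
```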
9 Further Research

As part of its ongoing commitment to delivering the best possible solutions, Nutanix will continue conducting research in the following areas:

- Performance optimizations.
- Scale testing.
- Detailed use-case application.
- XenApp configuration and testing.
- Personal vDisk configuration and testing.
- GPU offload and peripheral testing.
- Joint solutions with partners.
10 Conclusion

Our extensive testing of MCS and PVS deployments on Nutanix demonstrates that desktop user density is determined primarily by the available host CPU resources, not by any I/O or other resource constraints. Login VSI Office Worker test results showed densities of more than 480 Office Worker desktops for each 2U Nutanix appliance. However, most VDI deployments fit within the Knowledge Worker category, which was validated at over 400 desktops for each 2U appliance. When determining the sizing for the pods, we considered both performance and the additional resources needed for N+1 failover capabilities.

The MCS tests showed light I/O footprints on the Nutanix platform, with a peak of approximately 5,000 aggregate IOPS during the high-volume startup periods. Sustained IOPS were light, ranging from 4,000-5,000. I/O latencies averaged less than 2 ms for reads and less than 5 ms for writes during peak load.

The PVS tests showed light I/O footprints on the Nutanix platform as well, with a peak of approximately 2,600 aggregate IOPS during the high-volume startup periods. Sustained IOPS were light, ranging from 500-2,600. I/O latencies averaged less than 1 ms for reads and less than 8 ms for writes during peak load. PVS server CPU utilization peaked at approximately 40 percent during the high-volume startup period, with an average steady state of approximately 10 percent.

The Citrix XenDesktop on Nutanix solution provides a single, high-density platform for desktop and application delivery. This modular, pod-based approach also enables deployments to scale simply and efficiently with zero downtime.
11 Appendix: Configuration

Hardware:

- Storage and compute: Nutanix NX-3400; per-node specs (4 nodes per 2U block): CPU: 2 x Intel Xeon E5-2680; memory: 256 GB
- Network: Arista 7050Q (L3 spine); Arista 7050S (L2 leaf)

Software:

- Nutanix NOS 4.1.1.3
- XenDesktop 7.6
- Provisioning Services 7.6
- Desktop OS: Windows 7 SP1 x86
- Infrastructure: vSphere 5.5.2; vCenter 5.5.2

Desktop VM:

- CPU: 2 vCPU
- Memory: 1.5 GB
- Storage: 1 x 35 GB OS disk on CTR-RF2-VM-01 (NDFS-backed NFS datastore)
12 References

12.1 Table of Figures

Figure 1: Nutanix Node Architecture
Figure 2: Nutanix Architecture
Figure 3: Elastic Deduplication Engine
Figure 4: NDFS Shadow Clones
Figure 5: Machine Creation Services
Figure 6: Machine Creation Services: vDisks
Figure 7: Provisioning Services
Figure 8: Provisioning Services: vDisks
Figure 9: XenDesktop on Nutanix Conceptual Architecture
Figure 10: XenDesktop Pod Overview
Figure 11: MCS Communication
Figure 12: MCS Pod Detail
Figure 13: MCS I/O Overview
Figure 14: MCS I/O Detail
Figure 15: PVS Communication
Figure 16: PVS Pod Detail
Figure 17: PVS Component Mapping
Figure 18: PVS I/O Overview
Figure 19: PVS I/O Detail
Figure 20: Nutanix Component Architecture
Figure 21: Leaf-Spine Network Architecture
Figure 22: Rack Layout: 400 Desktops
Figure 23: Rack Layout: 800 Desktops
Figure 24: Rack Layout: 1,600 Desktops
Figure 25: Rack Layout: 3,200 Desktops
Figure 26: Rack Layout: 6,400 Desktops
Figure 27: Rack Layout: 12,800 Desktops
Figure 28: Rack Layout: 25,600 Desktops
Figure 29: Test Environment Overview
Figure 30: Example Graph of a Login VSI Test
Figure 31: Login VSI 360 Office Worker Desktops
Figure 32: User Sessions over Time
Figure 33: Average Logon Duration
Figure 34: Peak CPU Utilization for Hosts
Figure 35: Peak Cluster IOPS
Figure 36: Peak Command Latency
Figure 37: Login VSI Knowledge Worker Results
Figure 38: User Sessions over Time
Figure 39: Average Logon Duration over Time
Figure 40: Peak CPU Utilization
Figure 41: Peak IOPS
Figure 42: Command Latency Peak
Figure 43: Login VSI Office Worker Results
Figure 44: User Sessions over Time
Figure 45: Average Logon Duration
Figure 46: Peak CPU Utilization
Figure 47: IOPS Volume
Figure 48: Command Latency Peak
Figure 49: Knowledge Worker Results
Figure 50: User Sessions over Time
Figure 51: Average Logon Duration
Figure 52: Peak CPU Utilization
Figure 53: IOPS Volume
Figure 54: Peak Command Latency

12.2 Table of Tables

Table 1: Solution Design Decisions
Table 2: Pod Highlights
Table 3: Desktop Scenario Definitions
Table 4: Desktop Scenario Sizing
Table 5: MCS Pod Detail
Table 6: PVS Pod Detail
Table 7: Nutanix Storage Configuration
Table 8: Detailed Component Breakdown: 400 Desktops
Table 9: Detailed Component Breakdown: 800 Desktops
Table 10: Detailed Component Breakdown: 1,600 Desktops
Table 11: Detailed Component Breakdown: 3,200 Desktops
Table 12: Detailed Component Breakdown: 6,400 Desktops
Table 13: Detailed Component Breakdown: 12,800 Desktops
Table 14: Detailed Component Breakdown: 25,600 Desktops
Table 15: XenDesktop Configuration
Table 16: Login VSI 4.1 Workloads
Table 17: Login VSI Workload Definitions
Table 18: Login VSI Metric Values
13 About the Author

Kees Baggerman is a senior solutions and performance consultant at Nutanix, Inc. In his role, Kees develops methods for successfully implementing applications on the Nutanix platform. In addition, he delivers customer projects, including defining architectural, business, and technical requirements, creating designs, and implementing the Nutanix solution.

Before working at Nutanix, Kees's main areas of work were migrations and implementations of Microsoft and Citrix infrastructures, and writing functional/technical designs for Microsoft infrastructures and Microsoft Terminal Server or Citrix (Presentation Server/XenApp, XenDesktop, and NetScaler) in combination with RES Workspace Manager and/or RES Automation Manager.

Kees is a Citrix Certified Integration Architect, Microsoft Certified IT Professional, RES Certified Professional, and RES Certified Trainer. RES Software named him an RES RSVP for six consecutive years, and he was honored as the RES Software Most Valuable Professional of 2011. As a demonstration of his passion for virtualization technology, Kees earned the title of VMware vExpert in 2013, 2014, and 2015. Citrix also named him a Citrix Technology Professional in 2015.

Follow Kees on Twitter at @kbaggerman

About Nutanix

Nutanix is the recognized leader in the emerging Virtual Computing Platform market. The Nutanix solution converges compute and storage resources into a single appliance, delivering a powerful, modular building block for virtual datacenters. It incorporates the same advanced, distributed software architecture that powers leading IT innovators such as Google, Facebook, and Amazon, but is tailored for mainstream enterprises and government agencies. The Nutanix solution enables easy deployment of any virtual workload, including large-scale virtual desktop initiatives (VDI), development/test apps, big data (Hadoop) projects, and more. Nutanix customers can radically simplify and scale out their datacenter infrastructures with cost-effective appliances that can be deployed in under 30 minutes for rapid time to value.

Follow the Nutanix blogs at http://www.nutanix.com/blog/

Follow Nutanix on Twitter at @Nutanix