NexentaConnect View Edition Hardware Reference Guide 2.3 5000-nex_con-v2.3-000012-A
Copyright © 2014 Nexenta Systems, Inc. ALL RIGHTS RESERVED.

Notice: No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose, without the express written permission of Nexenta Systems, Inc. (hereinafter referred to as "Nexenta").

Nexenta reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. Nexenta products and services can only be ordered under the terms and conditions of Nexenta's applicable agreements. All of the features described in this document may not be available currently. Refer to the latest product announcement or contact your local Nexenta sales office for information on feature and product availability. This document includes the latest information available at the time of publication.

Nexenta is a registered trademark of Nexenta in the United States and other countries. All other trademarks, service marks, and company names in this document are properties of their respective owners.

This document applies to the following product versions:

  Product                                       Versions supported
  NexentaConnect View Edition                   2.3
  VMware vCenter Server                         5.x
  VMware vSphere Client                         5.x
  VMware Horizon View                           5.x
  Microsoft Windows Active Directory Server     2008 R2
  VMware ESXi                                   5.x
Contents

Preface
1 Introduction
    About NexentaConnect View Edition
    NexentaConnect View Edition Components
    About NexentaConnect Management Appliance
    Advantages of Using NexentaStor VSA
2 Deployment Scenarios
    About Deployment Scenarios
    Floating Desktops
    Dedicated Desktops
3 System Requirements
    VMware VDI Prerequisites
    Server Agent, Desktop Agent, and Management Appliance Requirements
    NexentaConnect View Edition ESXi Host Requirements
        NexentaStor VSA Requirements
        DVM Requirements
4 DVM Deployment Recommendations
    General DVM Deployment Recommendations
    Recommendations for Allotting Physical Resources for Floating Desktops
    Worksheet for Estimating ESXi Host Resources for Normal Floating Users
    Example of Estimating ESXi Host Resources for 100 Normal Floating Users
5 Example Configurations and Performance
    Sizing the ESXi Host for 100 Normal Floating Desktops
    Example of Physical Server with 100 Floating Normal Desktops
    Performance Results for Example Configuration
Glossary
Preface

This documentation presents information specific to Nexenta products. The information is for reference purposes and is subject to change.

Intended Audience

This documentation is intended for VDI administrators and assumes that you have experience with data storage concepts, such as NAS, SAN, NFS, and ZFS; VMware vSphere and VMware Horizon View; SQL database administration; and Microsoft Windows Active Directory 2008 R2.

Documentation History

The following table lists the released revisions of this documentation.

Table 1: Documentation Revision History

  Revision                      Date         Description
  5000-nex_con-v2.3-000012-A    April 2014   GA

Contacting Support

Choose a method for contacting support:

• Visit the Nexenta customer portal or partner portal. Log in and browse the customer knowledge base.
• Use the NexentaConnect Management Appliance: click Help > Support Request, complete the request form, and then click Send Request.
• Use the NexentaConnect View Edition Management Appliance: click Help > Support Request > Download as file. Save the files on your system and attach them to a support request e-mail. You may want to use this option if you cannot use the NexentaConnect Management Appliance built-in mail notification system.

Comments

Your comments and suggestions to improve this documentation are greatly appreciated. Send any feedback to doc.comments@nexenta.com and include the documentation title, number, and revision. Refer to specific pages, sections, and paragraphs whenever possible.
1 Introduction

This section includes the following topics:

• About NexentaConnect View Edition
• NexentaConnect View Edition Components
• About NexentaConnect Management Appliance
• Advantages of Using NexentaStor VSA

About NexentaConnect View Edition

NexentaConnect View Edition is a new approach to simplifying and automating virtual desktop infrastructure (VDI) deployment, management, and calibration using virtual storage and VMware Horizon View 5.x. NexentaConnect View Edition integrates with your standard network, VMware vSphere infrastructure, and VMware Horizon View to deploy and manage your desktop virtual machines (DVMs). It also provides performance analytics that can be used to improve your VDI.

NexentaConnect View Edition Components

NexentaConnect View Edition is a client/server environment. A Server Agent is installed on the VMware Horizon View Connection Server, with a Desktop Agent in each desktop template on each ESXi server that is dedicated to NexentaConnect View Edition. A Management Appliance provides a GUI and communication with the VMware VDI environment.

NexentaConnect View Edition consists of the following components:

• NexentaConnect Management Appliance provides the NexentaConnect View Edition management functions. The Management Appliance is installed from an included template and can be located on any ESXi host in the network.
• NexentaStor VSA is a virtual storage appliance (VSA) that provides storage management for the NexentaConnect View Edition DVMs through a NexentaConnect View Edition vSphere plug-in, which communicates with VMware Horizon View and VMware vCenter to perform the actual DVM provisioning and management. Administrators interact with NexentaStor VSA using wizards. NexentaStor VSA is installed from an included template on each dedicated NexentaConnect View Edition ESXi host.
• NexentaConnect View Edition Server Agent handles all communication between NexentaConnect View Edition and the VMware components. The Server Agent is installed on the VMware Horizon View Connection Server.
• NexentaConnect View Edition Desktop Agent provides communication between NexentaConnect View Edition and the DVMs. The Desktop Agent is installed on the desktop template, which is installed on each NexentaConnect View Edition ESXi host.

Note: The bundled NexentaStor VSA package can only be installed as internal storage on the same ESXi host that deploys the DVMs. It cannot be installed as external storage.

The following diagram illustrates the NexentaConnect View Edition components in a typical VDI.

Figure 1-1: NexentaConnect View Edition Components

About NexentaConnect Management Appliance

The NexentaConnect Management Appliance has a web interface with management wizards that allow administrators to simplify DVM deployments and optimize VDI workloads. It uses standard inter-process communication (IPC) mechanisms to communicate with the VMware VDI environment.
The following diagram illustrates the components of the NexentaConnect Management Appliance.

Figure 1-2: NexentaConnect Management Appliance Components

• The Deployment Wizard reduces approximately 150 configuration steps down to four. Create NFS and/or ZFS storage from local ESXi storage, including all clustered ESXi servers, and then create DVMs based on this automatically configured storage.
• The Configuration Wizard helps administrators tune VDI deployments with rapid reconfiguration, during which NexentaConnect View Edition automatically rebalances DVM and associated storage resources. Based on performance data, calibration settings define the expected ranges for DVMs. The manager identifies the number of DVMs created and added to the pool with each successful iteration. Collected data iteratively improves the threshold knowledge.
• The Benchmark tools provide performance testing through NexentaStor VSA, allowing administrators to continually monitor the performance of the deployed pool.
• The Calibration capabilities allow administrators to use the benchmark results to fine-tune the resource allocations in order to continually meet performance goals.
Advantages of Using NexentaStor VSA

There are significant advantages to deploying NexentaConnect View Edition, including:

• Improved scalability. NexentaConnect View Edition scales well. As more hypervisors are added to the infrastructure to support more DVMs, more NexentaConnect View Edition ESXi hosts can be added, each with its own NexentaStor VSA.
• Reduced network load. NexentaStor VSA delivers I/O directly to the hypervisor from within the hypervisor, decreasing network traffic and making I/O performance more consistent. Because no additional network ports are needed for the hypervisor to communicate with storage, fewer slots are needed on the server.
• Storage hardware independence. Since NexentaConnect View Edition runs on any supported VMware-compatible hardware, customers can deploy a cost-effective NexentaStor VSA solution without concerns about hardware support. Integrating SSD components, either locally or in an attached NFS appliance, enables caching, which optimizes VDI performance.
• Use of ZFS technology. Under the direction of NexentaConnect View Edition, NexentaStor VSA abstracts and pools VMFS volumes as a VSA using ZFS technology. Since VMFS understands which devices are HDDs and SSDs, ZFS can construct a hybrid storage pool, applying fast SSDs as caching devices. The ZFS pool is then exported over NFS to the ESXi hypervisor, allowing the DVMs to access the underlying VSA storage.
• Automatic reconfiguration and resource balancing. NexentaConnect View Edition optimizes the compute, memory, and storage parameters and directs VMware vCenter to create DVMs based on these parameters, characterizing the end-to-end performance of each configuration and reconfiguring and automatically rebalancing resources as necessary.
• Benchmarking and tuning. NexentaConnect View Edition offers significant advantages in its ability to benchmark and tune VDI deployments and optimize performance for VDI workloads.
2 Deployment Scenarios

This section includes the following topics:

• About Deployment Scenarios
• Floating Desktops
• Dedicated Desktops

About Deployment Scenarios

NexentaConnect View Edition is extremely flexible and can be implemented in small to large VMware deployments using floating DVMs, dedicated DVMs, or a combination of both. Multiple storage volumes can be exported to multiple VMware hypervisors. In this way, storage can be shared across hypervisors to enhance availability.

The resources required to deploy DVMs vary depending upon the type (floating or dedicated) and number of DVMs being deployed. Each NexentaConnect View Edition cluster in the deployment can have a floating user pool or a dedicated user pool. If a NexentaConnect View Edition cluster contains multiple NexentaConnect View Edition ESXi hosts, the pool can be shared across the hosts in that cluster.

Note: NexentaConnect View Edition supports one DVM pool per cluster. Each pool must be for floating or for dedicated users; the two user types cannot be combined in the same pool.

Floating Desktops

Floating desktops, also known as stateless desktops, are used when the capacity requirement is small and the DVM state is not maintained after a user logs out. A floating desktop environment is usually deployed from a single ESXi host and often uses linked clones. The DVMs are cookie-cutter images that contain no personal settings or user-specific data.
Floating desktops are built on an as-needed basis, based on the attributes of the user group. Examples are kiosks, classrooms, and office DVMs. This scenario is ideal for companies interested in quickly implementing a VDI initiative or populating satellite offices.

Dedicated Desktops

Dedicated desktops are persistent, providing each user with his or her own personalized DVM. Any changes made by a user are stored on a network file share or VMware Horizon View persistent disk. When users log in again, they are presented with their own unique DVMs.

Dedicated desktops are typically assigned to users who need to make changes to their DVM images, such as installing additional applications, customizing settings, and saving data within the desktop image itself rather than to a persistent NFS share or VMware Horizon View disk.
3 System Requirements

This section includes the following topics:

• VMware VDI Prerequisites
• Server Agent, Desktop Agent, and Management Appliance Requirements
• NexentaConnect View Edition ESXi Host Requirements

VMware VDI Prerequisites

Before NexentaConnect View Edition can be installed, the customer must have a fully installed, configured, and functioning VMware VDI environment that includes the following VMware components, along with all required supporting hardware, software, and networking elements:

• VMware vSphere 5 with VMware vCenter Server 5.x
• VMware Horizon View 5.x with Composer

The VMware documentation contains a full description of the required hardware, software, and networking elements for the VDI environment.

Server Agent, Desktop Agent, and Management Appliance Requirements

The NexentaConnect View Edition Server Agent, Desktop Agent, and Management Appliance are installed as shown below.

Table 3-1: Server Agent, Desktop Agent, and Management Appliance Requirements

  NexentaConnect Server Agent
    Installation location: VMware Horizon View Connection Server
    Additional requirements: VMware Horizon View PowerCLI on the VMware Horizon View Connection Server

  NexentaConnect Desktop Agent
    Installation location: Desktop template
    Additional requirements: Microsoft .NET Framework 3.5 or later in the desktop template
  NexentaConnect Management Appliance
    Installation location: Any ESXi host in the network
    Additional requirements: None

NexentaConnect View Edition ESXi Host Requirements

NexentaConnect View Edition requires at least one dedicated physical machine hosting an ESXi server in the VMware VDI environment, called the NexentaConnect View Edition ESXi host. This physical machine cannot be used to host additional ESXi servers or any other software or network components required by the VMware VDI environment. NexentaConnect View Edition ESXi hosts cannot be in a cluster with non-NexentaConnect View Edition ESXi servers. If there is only one NexentaConnect View Edition ESXi host, it must be in a cluster by itself.

All desktops managed by NexentaConnect View Edition are deployed on a NexentaConnect View Edition ESXi host. Each NexentaConnect View Edition ESXi host contains NexentaStor VSA and the desktop template, and must meet the NexentaStor VSA Requirements and the DVM Requirements.

NexentaStor VSA Requirements

The following table lists the physical machine resources necessary to run NexentaStor VSA on the ESXi server.

Table 3-2: Physical Machine Requirements to Run NexentaStor VSA

  HBA (host bus adapters)
    Minimum: two 1 Gb controllers; recommended: one 10 Gb Ethernet controller
    Note: The HBAs must support virtualization.

  CPU
    4 physical cores minimum per 100 normal floating desktops: 64-bit x86 CPUs, 2.13 GHz or faster, Intel Xeon or AMD Barcelona families
    Note: In addition to VMware ESXi host and DVM deployment requirements.

  Memory
    4 GB minimum
    Note: In addition to VMware ESXi host and DVM deployment requirements.

DVM Requirements

Each deployed DVM requires a certain amount of memory and other resources.
The total amount of DVM resources required for a deployment depends on the performance requirements, number of DVMs, number of NexentaConnect View Edition ESXi hosts, type of deployment (floating or dedicated), and other factors.
The following table lists the minimum resources required for a single DVM on the NexentaConnect View Edition ESXi host.

Table 3-3: Virtual Machine Requirements for Each Deployed Desktop

  Operating system: Microsoft Windows 7 license
  NIC: One virtual NIC (for example, a VMXNET3 network adapter)
  HBA: LSI Logic SAS
  CPU: One vCPU (can be accommodated using hyperthreading)
  Memory: 800 MB to 2 GB (allocated, not physical)
  Storage (HDD or SSD): 16 GB minimum

If you already have a list of the resources required for the NexentaConnect View Edition ESXi host, and the resources meet the minimum requirements shown above for each DVM, you can use your existing list. If you do not know the resources you need, see the DVM Deployment Recommendations section for guidelines for sizing the DVM resources, along with worksheets and examples.
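The per-desktop minimums in Table 3-3 can be expressed as a small validation helper. The following is an illustrative sketch only; the class, function, and field names are ours, not a NexentaConnect API.

```python
# Sketch: check a planned DVM spec against the Table 3-3 minimums.
# The dataclass and all names here are illustrative, not a Nexenta API.
from dataclasses import dataclass

DVM_MIN_MEMORY_MB = 800    # allocated DVM memory range is 800 MB to 2 GB
DVM_MAX_MEMORY_MB = 2048
DVM_MIN_STORAGE_GB = 16    # 16 GB minimum, HDD or SSD

@dataclass
class DvmSpec:
    vcpus: int
    memory_mb: int
    storage_gb: int

def meets_minimums(spec: DvmSpec) -> list[str]:
    """Return a list of violations; an empty list means the spec qualifies."""
    problems = []
    if spec.vcpus < 1:
        problems.append("at least one vCPU is required")
    if not DVM_MIN_MEMORY_MB <= spec.memory_mb <= DVM_MAX_MEMORY_MB:
        problems.append("allocated memory must be between 800 MB and 2 GB")
    if spec.storage_gb < DVM_MIN_STORAGE_GB:
        problems.append("storage must be at least 16 GB")
    return problems
```

For example, `meets_minimums(DvmSpec(vcpus=1, memory_mb=1024, storage_gb=16))` returns an empty list, while an undersized spec returns one message per violated minimum.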
4 DVM Deployment Recommendations

This section includes the following topics:

• General DVM Deployment Recommendations
• Recommendations for Allotting Physical Resources for Floating Desktops
• Worksheet for Estimating ESXi Host Resources for Normal Floating Users
• Example of Estimating ESXi Host Resources for 100 Normal Floating Users

General DVM Deployment Recommendations

Each NexentaConnect View Edition ESXi host must have enough resources to deploy the required number of desktops on that host. The actual resource requirements depend on the type of DVM (floating or dedicated) and the expected level of use (normal user or power user). In addition, there are several default NexentaConnect View Edition settings and recommendations that can affect how you deploy NexentaConnect View Edition in a VDI environment.

The following recommendations are used in the formulas and examples in this section.

• The default NexentaStor VSA setting for floating desktops uses RAID10 (mirrored stripes). All formulas for estimating disk size in this document include the extra space required for RAID10.
• Use 15K RPM disks. A normal floating desktop requires 7-20 IOPS with an average read and write latency under 20 ms.
• Install NexentaStor VSA on SSD. It is recommended to mirror the cache.
• For physical resources, always round up the result to the next available commercial size.
• In general, increasing memory, disk, and cache size improves performance.
• When estimating physical resources, you can generally overbook the allocation based on the typical number of concurrent users. If you overbook, you can use fewer resources than calculated.
• When estimating core requirements for DVM usage, you can lower the calculated estimate by up to 50% if hyperthreading is enabled on the ESXi host.
• If you are using shared pools, divide the number of DVMs by the number of ESXi hosts in the pool to obtain the number of DVMs per host used in the calculations. For example, if two ESXi hosts share a pool of 100 DVMs, use 50 DVMs per ESXi host in the calculations.

Recommendations for Allotting Physical Resources for Floating Desktops

The following table provides recommendations for allotting physical resources to normal floating desktops on an ESXi host, based on the DVM requirements shown in Virtual Machine Requirements for Each Deployed Desktop. These resources are in addition to the requirements for an ESXi host and for NexentaStor VSA.

Table 4-1: Recommendations for Physical Resources for Floating Normal Desktops

  Physical CPUs on ESXi host
    Recommended: 8 DVMs per core; maximum: 14 DVMs per core.
    Note: A normal floating DVM utilizes 15% to 25% of a physical CPU.

  Physical memory on ESXi host
    DVM requirement + NexentaConnect View Edition requirement + ESXi requirement
      = total RAM to run NexentaConnect View Edition and deploy DVMs on this ESXi host
    where:
      DVM requirement = #DVMs x (RAM per DVM) x (ESXi RAM multiplier)
      NexentaConnect View Edition requirement = (125 MB x #DVMs) + 4 GB
      ESXi requirement = 0.8 GB
    Do not round up this result.
    Notes: 70%-80% of DVM memory is in physical memory. For the DVM requirement, use 1 (1 GB per DVM) for DVMs requiring less than 1 GB RAM and 2 (2 GB per DVM) for DVMs requiring more than 1 GB RAM. The typical ESXi RAM multiplier is 1.1. The NexentaConnect View Edition requirement is 4 GB, with an additional 125 MB per DVM; the 4 GB is for user information. The ESXi requirement is 800 MB (0.8 GB).

  Preliminary physical HDD requirement on ESXi host
    #DVMs x (template size + 4 GB) x (performance multiplier) x 2
      = preliminary HDD requirement to deploy DVMs on this ESXi host
    Do not round up this result.
    Notes: The performance multiplier leaves room on the disk for the VMware overhead. The minimum recommended multiplier is 1.25, which gives 20% free space. You can increase the multiplier to leave more free space for higher performance. Multiply by 2 because of RAID10.
  Number of physical HDDs
    #DVMs / (number of DVMs per HDD) = number of HDDs on this ESXi host
    Round up the result to the next whole number.
    Notes: Allocate 10 (recommended) to 13 (maximum) DVMs per HDD. This provides a good level of IOPS for normal users on floating desktops. For example, if you have 100 DVMs and are allocating 13 DVMs per HDD, the result is 7.69; round this up to 8 HDDs on this ESXi host.

  Size of each physical HDD
    (preliminary HDD requirement) / (number of HDDs) = preliminary size of each HDD on this ESXi host
    Round up the result to the next commercially available HDD size.
    Note: For example, if the preliminary HDD requirement is 3.8 TB and you need 8 HDDs, the preliminary HDD size is 486.4 GB; round this up to 600 GB.

  Total physical HDD requirement
    (rounded-up HDD size) x (rounded-up number of HDDs on this host)
      = total physical HDD requirement to deploy DVMs on this ESXi host
    Note: Using the examples above, the full HDD requirement to deploy DVMs on this host would be 8 HDDs at 600 GB each.

  Physical SSD requirement on ESXi host
    4 GB + (template size) + cache = recommended SSD size
    where cache = 4 GB + (#DVMs x 125 MB), or 2 x (4 GB + (#DVMs x 125 MB)) if the cache is mirrored
    Round up the result to the next commercially available SSD size.
    Notes: The 4 GB is the installed size of NexentaStor VSA. The formula (#DVMs x 125 MB) is the size of the NexentaStor VSA memory store. If the cache is mirrored, multiply the NexentaStor VSA memory store by 2 before adding it to the installed size of NexentaStor VSA and the template size.

  SSD-only deployment
    Physical size requirements are the same as in the HDD deployment. SSD-only deployment has the following limitations:
    • No HDD must be available on the ESXi server that you plan to use for the desktop pool deployment.
    • Cache and log devices are disabled by default.
    • Remote replication is disabled by default.

Worksheet for Estimating ESXi Host Resources for Normal Floating Users

The following worksheet assists you in estimating the physical resources needed to deploy a given number of floating desktops for normal users on a NexentaConnect View Edition ESXi host. The worksheet is based on the recommendations listed in Recommendations for Physical Resources for Floating Normal Desktops. The resources calculated here must be installed on the NexentaConnect View Edition ESXi host in addition to any other resources.

You need the following information for this worksheet:

• Number of DVMs on this host (if there are two or more ESXi hosts in the cluster, this is the number of DVMs divided by the number of ESXi hosts): ____
• Memory (RAM) per DVM (recommended 1 GB or 2 GB): ____ GB
• ESXi RAM multiplier (recommended 1.1): ____
• Desktop template size: ____ GB
• Performance multiplier (recommended 1.25): ____
• Number of DVMs per HDD (recommended 10, maximum 13): ____
• Does the cache on the SSD use mirroring? Yes or No
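The worksheet arithmetic for cores, memory, and HDDs can be sketched as a small Python function. This is an illustrative aid, not a Nexenta tool; the function and parameter names are ours, and the optional RAID10 doubling from Table 4-1 is exposed as a flag because the worksheet itself computes the preliminary HDD figure without it.

```python
# Sketch of the sizing worksheet for normal floating desktops.
# Illustrative only; round RAM and disk figures up to commercial
# sizes yourself, as the guide instructs.
import math

def estimate_host_resources(n_dvms, ram_per_dvm_gb=1.0, esxi_ram_multiplier=1.1,
                            template_gb=30.0, perf_multiplier=1.25,
                            dvms_per_hdd=10, raid10_double=False):
    # Cores: 14 DVMs per core maximum, 8 per core recommended.
    cores_min = math.ceil(n_dvms / 14)
    cores_rec = math.ceil(n_dvms / 8)
    # RAM = DVM requirement + NexentaConnect requirement + 0.8 GB for ESXi.
    dvm_ram_gb = n_dvms * ram_per_dvm_gb * esxi_ram_multiplier
    nexentaconnect_ram_gb = (n_dvms * 125) / 1024 + 4   # 125 MB per DVM + 4 GB
    total_ram_gb = dvm_ram_gb + nexentaconnect_ram_gb + 0.8
    # Preliminary HDD requirement; Table 4-1 additionally doubles this
    # for RAID10 (set raid10_double=True to include that factor).
    prelim_hdd_gb = n_dvms * (template_gb + 4) * perf_multiplier
    if raid10_double:
        prelim_hdd_gb *= 2
    n_hdds = math.ceil(n_dvms / dvms_per_hdd)
    hdd_size_gb = prelim_hdd_gb / n_hdds   # round up to a commercial size
    return {"cores_min": cores_min, "cores_rec": cores_rec,
            "total_ram_gb": total_ram_gb,
            "n_hdds": n_hdds, "hdd_size_gb": hdd_size_gb}
```

With the 100-desktop inputs used later in this chapter (1 GB per DVM, 30 GB template, 10 DVMs per HDD), this yields 8 minimum and 13 recommended cores, about 127 GB of RAM before any overbooking, and 10 HDDs of at least 425 GB each.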
Table 4-2: Worksheet for Estimating ESXi Host Resources for Normal Floating Users

Physical cores on this ESXi host:
  #DVMs ____ / 14 = ____ cores, rounded up to ____ cores minimum for DVMs
  #DVMs ____ / 8 = ____ cores, rounded up to ____ cores recommended for DVMs

Physical memory on this ESXi host:
  #DVMs ____ x RAM per DVM ____ GB x ESXi RAM multiplier ____ = (A) ____ GB RAM for DVMs
  125 MB x #DVMs ____ = ____ MB / 1024 = ____ GB + 4 GB = (B) ____ GB RAM for NexentaConnect View Edition
  (A) ____ GB + (B) ____ GB + 0.8 GB = ____ GB total RAM required on this ESXi host

Physical HDDs on this ESXi host:
  Preliminary HDD requirement: template size ____ GB + 4 GB = ____ GB x #DVMs ____ = ____ GB x performance multiplier ____ = (C) ____ GB preliminary HDD size for DVMs
  Number of physical HDDs: #DVMs ____ / DVMs per HDD ____ = ____, rounded up to (D) ____ HDDs needed for DVMs
  Size of each physical HDD: (C) ____ GB / (D) ____ = ____ GB, rounded up to (E) ____ GB per HDD for DVMs
  Quantity and size of HDDs: (D) ____ HDDs at (E) ____ GB each

Physical SSD requirements on this ESXi host:
  #DVMs ____ x 125 MB = ____ MB / 1024 = ____ GB + 4 GB = (F) ____ GB cache
  If the cache uses mirroring, multiply: (F) ____ GB x 2 = (G) ____ GB
  Size of SSDs: template size ____ GB + 4 GB + (F or G) ____ GB = ____ GB, rounded up to ____ GB SSD size for DVMs
Example of Estimating ESXi Host Resources for 100 Normal Floating Users

The following worksheet calculations determine the resources required for 100 normal floating desktops in a single desktop pool on one NexentaConnect View Edition ESXi host. The resources calculated here would be installed on the host in addition to any other resources.

The following values were used for this worksheet:

• Number of DVMs on this host: 100
• Memory (RAM) per DVM (recommended 1 GB or 2 GB): 1 GB
• ESXi RAM multiplier (recommended 1.1): 1.1
• Desktop template size: 30 GB
• Performance multiplier (recommended 1.25): 1.25
• Number of DVMs per HDD (recommended 10, maximum 13): 10
• Does the cache on the SSD use mirroring? Yes
Table 4-3: Example of Estimating ESXi Host Resources for 100 Normal Floating Users

Physical cores on this ESXi host:
  100 / 14 = 7.14 cores, rounded up to 8 cores minimum for DVMs
  100 / 8 = 12.5 cores, rounded up to 13 cores recommended for DVMs

Physical memory on this ESXi host:
  100 x 1 GB x 1.1 = (A) 110 GB RAM for DVMs
  125 MB x 100 = 12500 MB / 1024 = 12.2 GB + 4 GB = (B) 16.2 GB RAM for NexentaConnect View Edition
  (A) 110 GB + (B) 16.2 GB + 0.8 GB = 127 GB total RAM required on this ESXi host

Physical HDDs on this ESXi host:
  Preliminary HDD requirement: 30 GB + 4 GB = 34 GB x 100 = 3400 GB x 1.25 = (C) 4250 GB preliminary HDD size for DVMs
  Number of physical HDDs: 100 / 10 = 10, rounded up to (D) 10 HDDs needed for DVMs
  Size of each physical HDD: (C) 4250 GB / (D) 10 = 425 GB, rounded up to (E) 600 GB per HDD for DVMs
  Quantity and size of HDDs: (D) 10 HDDs at (E) 600 GB each

Physical SSD requirements on this ESXi host:
  100 x 125 MB = 12500 MB / 1024 = 122.1 GB + 4 GB = (F) 126.1 GB cache
  The cache uses mirroring, so multiply: (F) 126.1 GB x 2 = (G) 252.2 GB
  Size of SSDs: 30 GB + 4 GB + (G) 252.2 GB = 286.2 GB, rounded up to 300 GB SSD size for DVMs
5 Example Configurations and Performance

This section includes the following topics:

• Sizing the ESXi Host for 100 Normal Floating Desktops
• Example of Physical Server with 100 Floating Normal Desktops
• Performance Results for Example Configuration

Sizing the ESXi Host for 100 Normal Floating Desktops

To obtain the total resource requirements for a NexentaConnect View Edition ESXi host, add the requirements for NexentaStor VSA from Physical Machine Requirements to Run NexentaStor VSA and the estimated requirements for deploying DVMs from Example of Estimating ESXi Host Resources for 100 Normal Floating Users. These requirements are in addition to the VMware requirements for an ESXi host.

Table 5-1: Example of Total Requirements for an ESXi Host with 100 Normal Floating Desktops

  HBA
    NexentaStor VSA requirement: four 1 Gb controllers (minimum) or one 10 Gb Ethernet controller (recommended)
    Total requirement: four 1 Gb controllers to one 10 Gb Ethernet controller

  Physical cores
    NexentaStor VSA requirement: 4
    ESXi host requirement for DVMs: 8 (minimum) to 13 (recommended)
    Total requirement: 12 to 17 physical cores

  Memory
    NexentaStor VSA requirement: 4 GB minimum
    ESXi host requirement for DVMs: 100 GB
    Total requirement: 104 GB minimum

  HDDs
    ESXi host requirement for DVMs: 10 HDDs at 600 GB each
    Total requirement: 10 HDDs at 600 GB each

  SSDs
    ESXi host requirement for DVMs: 300 GB
    Total requirement: 300 GB
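The totals in Table 5-1 are simple sums of the NexentaStor VSA baseline and the per-host DVM estimate. The following minimal sketch makes that explicit; the function and key names are ours, not a Nexenta API.

```python
# Sketch: per-host totals as in Table 5-1 -- the NexentaStor VSA baseline
# (4 cores, 4 GB, from Table 3-2) plus the DVM sizing estimate.
VSA_CORES = 4
VSA_MEMORY_GB = 4

def host_totals(dvm_cores: int, dvm_memory_gb: float) -> dict:
    return {
        "cores": VSA_CORES + dvm_cores,
        "memory_gb": VSA_MEMORY_GB + dvm_memory_gb,
    }
```

For example, `host_totals(8, 100)` gives the 12-core, 104 GB minimums shown in the table; with the recommended 13 DVM cores, the core total rises to 17.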
Example of Physical Server with 100 Floating Normal Desktops

This section contains an example of the requirements for a NexentaConnect View Edition ESXi host. The example in Example of Physical Server: 100 Normal Floating Desktops below is based on the requirements shown in the following tables:

• Physical Machine Requirements to Run NexentaStor VSA
• Virtual Machine Requirements for Each Deployed Desktop
• Example of Estimating ESXi Host Resources for 100 Normal Floating Users
• Example of Total Requirements for an ESXi Host with 100 Normal Floating Desktops

The example below assumes the following configuration:

• One physical ESXi server with one desktop pool containing 100 normal floating desktops
• NexentaStor VSA is installed on SSD
• Desktop template size is 30 GB
• Each DVM requires 1 GB RAM
• Hyperthreading is enabled, allowing the number of physical cores required for DVMs to be reduced (in this example, reduced by 33% to 8 physical cores)
• DVMs are overbooked, allowing a reduction in the total amount of memory needed

Table 5-2: Example of Physical Server: 100 Normal Floating Desktops

Basic server hardware:
  Chassis: 2U rackmount chassis with 720W (1+1) power supply, black (qty 1)
  Motherboard: Intel Platform E-ATX Dual LGA1366 Socket motherboard (qty 1)
  HBA: LSI SAS 9211-8I 8-port internal SAS/SATA 6.0 Gb/s PCI-E host bus adapter card (qty 1)
  NIC: STD dual-port 10G Ethernet w/ SFP+ & CDR (qty 1)

Physical resources:
  Operating system: Windows 7 Enterprise Edition 32-bit
  CPU: 6-core Intel Xeon E5645 2.4 GHz 12M processor (qty 2; 12 cores total)
  Memory: 16GB 1333MHz DDR3 ECC Reg CL9 kit (qty 6; 96 GB total)
  Disks (HDDs): ST3300657SS 600GB 15K SAS (qty 10; 6 TB total)
  SSDs: Intel 320 Series 160 GB SSD, reseller box (qty 2; 320 GB total)
Performance Results for Example Configuration

VMware Horizon View Planner Results for Example System lists the performance results obtained by VMware Horizon View Planner for the system described above, with 100 concurrent DVMs.

Table 5-3: VMware Horizon View Planner Results for Example System

Measurement                                                Result
IO Meter Benchmark: 100% nonsequential write operations    45 IOPS
IO Meter Benchmark: 100% nonsequential read operations     135 IOPS
QoS (Quality of Service) goal: less than 1.5 seconds       0.862929 seconds
Glossary

Desktop pool
A group of identically configured virtual machines. When you deploy a desktop pool, you create multiple copies that are based on a specified desktop template.

Desktop pool backup
A snapshot of a desktop pool that is stored on the external NexentaStor appliance.

Desktop pool snapshot
A read-only copy of a desktop pool at a particular point in time.

Full Backup
A full, complete replica of all of the datasets in the specified source. Provides a more secure method for backing up data: in case of disk failure, the files are easily restored from a single backup set.

Full Clone
A full clone is an independent virtual machine with no need to access the parent. Full clones do not require an ongoing connection to the parent virtual machine. Because a full clone does not share virtual disks with the parent virtual machine, full clones generally perform better than linked clones. However, full clones take longer to create than linked clones. Creating a full clone can take several minutes if the files involved are large.

Golden Image
A Microsoft Windows desktop template that you can use for desktop pool deployment. Before you create a golden image, you need to install all required software on the virtual machine, as well as the NexentaConnect components. NexentaConnect supports the following golden images:

Windows 7 Enterprise and Professional Editions (x86 and x64)
Windows Vista Business and Enterprise Editions SP1 and SP2 (x86)
Windows XP Professional SP3 (x86)
Windows 8 Consumer Preview (x86 and x64)

Incremental Backup
Backs up only the changes since the last backup operation. This is less secure than a full backup: in order to restore a file, all of the incremental backups must be present.
IO Meter
Iometer is an industry-standard I/O subsystem storage performance measurement and characterization tool that is integrated into NexentaConnect. It is used to measure the storage performance of individual virtual desktops.

Linked Clone
A linked clone is a snapshot of a replica disk that is accessed by users. This snapshot consumes storage resources only as it is used. A linked clone is made from a snapshot of the parent. All files available on the parent at the moment of the snapshot remain available to the linked clone. Ongoing changes to the virtual disk of the parent do not affect the linked clone, and changes to the disk of the linked clone do not affect the parent. A linked clone must have access to the parent; without access to the parent, a linked clone is disabled.

NexentaStor
A fully featured NAS/SAN open storage appliance from Nexenta Systems that leverages the advantages of ZFS.

NexentaStor VSA
A virtual storage appliance (VSA) that provides storage for the NexentaConnect environment.

Nexenta NAS VAAI plugin
The Nexenta NAS VAAI plug-in implements VMware API primitives, such as full file clone, lazy file clone, reserve space, and extended file statistics, for NAS device offloads. It speeds up certain operations on virtual host images located on an NFS share that is mounted on the ESXi host. The plug-in effectively offloads network-intensive NFS operations to clone files.

NexentaConnect View Edition (NexentaConnect)
The name of the Nexenta product for VDI deployments.

NexentaConnect Desktop Agent
Provides benchmarking and calibration capabilities, as well as communication between the NexentaConnect Management Appliance and the desktops.

NexentaConnect Management Appliance
A virtual appliance that provides the NexentaConnect management functions and user interface.
NexentaConnect Server Agent
Provides communication between NexentaConnect and the VMware Horizon View Connection Server.

NMC
The Nexenta Management Console (NMC) is a command-line interface that enables you to execute all NexentaStor functions.

NMS
The Nexenta Management Server (NMS) is a service that controls all NexentaStor services and runners. It receives and processes requests from NMC and NMV and returns the output.
NMV
The Nexenta Management View (NMV) is a web-based graphical user interface that enables you to perform most NexentaStor functions.

Replicated High Availability (RHA)
A NexentaConnect service that provides the functionality to create backups of a desktop pool on a standalone external NexentaStor appliance and to perform a failover of the ESXi datastore to a designated backup storage in case of an emergency.

Stateful Desktop Pool
Stateful (persistent) virtual desktops preserve user settings, customization, and data. When users log in, they retrieve their designated desktop.

Stateless Desktop Pool
Stateless virtual desktops do not contain any personal settings or data. When users log in, they are assigned a desktop randomly. User data can be created and stored on a network file share or on a VMware Horizon View desktop persistent disk.

SQLIO
SQLIO is an I/O capacity determination tool provided by Microsoft. It is integrated into NexentaConnect and used to measure the storage performance of individual virtual desktops.

VAAI
vStorage API for Array Integration (VAAI) is an application program interface (API) framework from VMware that enables certain storage tasks, such as thin provisioning, to be offloaded from the VMware server virtualization hardware to the storage array.

VBackstore
Nexenta VM Datastore (VBackstore) is the storage allocated to a given VM. A VBackstore may be a shared zvol or a folder. VBackstore is NexentaStor-provided storage.

vCenter
The VMware virtualization management platform.

VDI
Virtual desktop infrastructure (VDI) is the practice of hosting a desktop operating system within a virtual machine (VM) running on a centralized server.

VHost
Nexenta Virtual Factory (VHost) is a generic term for the hypervisor platform (for example, VMware, Citrix Xen, or Microsoft Hyper-V). VHosts may be an ESX cluster, a Xen pool, or a Hyper-V cluster.

Virtual Desktop
A standard desktop operating system that runs on a virtual machine.
vMotion
Nexenta VM Motion is a generic term for migration of virtual machines, regardless of the virtualization environment. vMotion is the VMware term; the Hyper-V/Xen equivalent is Live Migration.

VMware ESXi
An enterprise-class hypervisor that provides a software virtualization environment.

VMware ESXi Cluster
A collection of two or more ESXi hosts. In the NexentaConnect environment, you use the ESXi cluster to load balance and better utilize resources.

VMware vCenter Server
The centralized management tool for the vSphere suite. VMware vCenter Server enables you to manage multiple ESXi servers and virtual machines (VMs) from different ESXi servers through a single console application.

VMware vSphere Client
A Microsoft Windows desktop application that enables you to access VMware ESXi and VMware vCenter.

VMware Horizon View Administrator
View Administrator is the web interface through which you configure VMware Horizon View Connection Server and manage View desktops.

VMware Horizon View Composer
VMware Horizon View Composer is a key component of VMware vSphere. It is tightly integrated with VMware Horizon View to provide advanced image management and storage optimization. View Composer is required to use linked clones and the refresh, recompose, and rebalance capabilities.

VMXNET3
A type of supported network driver for virtual machines.

VStorage
Virtualized storage. VStorage refers to the storage managed by NexentaStor: virtual disk storage as seen from the perspective of a VHost. A VHost and its VMs see VDisks stored on VStorage. In ESX terminology VStorage is a "datastore"; in Xen it is an "SR" (storage repository).

ZFS
Zettabyte File System (ZFS) is a 128-bit file system that provides features such as data integrity verification, disk management, snapshots, transactional operations, and so on.

ZIL
The ZFS Intent Log (ZIL) is a component of a hybrid storage pool that speeds up write operations. Usually, SSD drives are used as ZIL devices.
Zvol
A ZFS volume (zvol) is a ZFS dataset that is exported as a block device.
Global Headquarters
455 El Camino Real
Santa Clara, California 95050

Nexenta EMEA Headquarters
Camerastraat 8
1322 BC Almere
Netherlands

Nexenta Systems Italy
Via Vespucci 8B
26900 Lodi
Italy

Nexenta Systems China
Room 806, Hanhai Culture Building
Chaoyang District
Beijing, China 100020

Nexenta Systems Korea Chusik Hoesa
3001, 30F World Trade Center
511 YoungDongDa-Ro
GangNam-Gu, 135-729
Seoul, Korea

5000-nex_con-v2.3-000012-A