
VMware vCenter Server 6.0 Cluster Performance
Performance Study
TECHNICAL WHITE PAPER

Table of Contents

Introduction
Experimental Setup
    Layout
    Software Configuration
    Performance Benchmark
    Workload
vCenter Server 6.0 Cluster Performance: A Case Study
    vCenter 5.5 Cluster vs. vCenter 6.0 Cluster
    Workload Scaling
    vCenter 6.0 with ESXi 6.0 Hosts
    vCenter Server Appliance vs. vCenter Server on Windows
    vCenter Cluster Performance and Scalability
Performance Considerations
References

Introduction

Clusters are an important way to group ESXi hosts in VMware vCenter Server. When a host is added to a cluster, the host's resources become part of the cluster's resources. Within a cluster, resource management elements such as resource pools, reservations, and limits can be defined, and vCenter Server provides cluster-level features such as vSphere DRS and vSphere HA [1].

The performance of clustering in vCenter Server 6.0 has improved over 5.5 in two key areas:

- A higher supported single-cluster limit: 64 ESXi hosts and 8,000 virtual machines (up from 32 ESXi hosts and 3,000 virtual machines in 5.5)
- Linear throughput scaling with increases in cluster size and workload

Other performance highlights include:

- Up to 66% improvement in operational throughput over vCenter Server 5.5
- Improved performance under very heavy workloads
- Performance parity between vCenter Server on Windows and the vCenter Server Appliance (VCSA)
- Significantly faster virtual machine operations

Central to this paper is a case study that shows the performance of vCenter Server 6.0 clusters and illustrates the improvements over vCenter Server 5.5. The paper also compares the performance of vCenter Server on Windows with that of the vCenter Server Appliance (VCSA).
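The cluster objects discussed above (hosts, virtual machines, resource pools) are all exposed through the vSphere Web Services SDK, which is also what the benchmark described later in this paper uses. As a point of reference only, the following minimal sketch uses the open-source pyVmomi Python bindings for that SDK to connect to a vCenter Server and list each cluster with its host and virtual machine counts. The hostname and credentials are placeholders, and the sketch is not part of the benchmark tooling used in this study.

    # Illustrative sketch (pyVmomi assumed installed); hostname and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=ssl._create_unverified_context())  # lab-only: skips cert checks
    content = si.RetrieveContent()

    # Enumerate every cluster in the inventory with its host and VM counts.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        vm_count = sum(len(host.vm) for host in cluster.host)
        print(f"{cluster.name}: {len(cluster.host)} hosts, {vm_count} VMs")
    view.Destroy()
    Disconnect(si)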

Experimental Setup

Layout

The base configuration used vCenter Server 6.0 managing a cluster of ESXi 5.5 hosts. vCenter Server and the ESXi hosts were installed on 24-core Intel Haswell-based servers. The storage backend was an SSD-based array connected to all hosts over NFS on a 10 Gigabit Ethernet (GbE) link (Figure 1).

[Figure 1. Physical testbed layout: the SDK clients and the vCenter Server each run on a 24-core, 256 GB Haswell E5-2670 v3 server; 64 ESXi hosts (each 24 cores, 256 GB, Haswell E5-2670 v3) connect over NFS to an EMC VNX5800 SSD array, with VAAI and jumbo frames enabled]

The cluster was configured with vSphere DRS and vSphere HA, and it contained a single root resource pool with a set of resource pools under it, as depicted in Figure 2 (a minimal scripted equivalent is sketched after the figure).

[Figure 2. Cluster resource pool structure]
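A cluster and resource pool configuration of this kind can also be created programmatically through the SDK. The sketch below again uses pyVmomi and assumes the connection si from the previous sketch; the datacenter, cluster, and resource pool names are hypothetical. It enables DRS and HA on a new cluster and adds one child resource pool with default allocations, whereas the actual testbed used a deeper pool hierarchy under the root pool.

    # Hedged sketch: create a DRS/HA-enabled cluster and one child resource pool.
    # Assumes the pyVmomi connection `si` from the previous sketch; names are hypothetical.
    from pyVmomi import vim

    content = si.RetrieveContent()
    dc = content.searchIndex.FindByInventoryPath("perf-dc")     # hypothetical datacenter name

    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True)    # vSphere DRS
    spec.dasConfig = vim.cluster.DasConfigInfo(enabled=True)    # vSphere HA
    cluster = dc.hostFolder.CreateClusterEx(name="perf-cluster", spec=spec)

    # One child resource pool under the cluster's root pool, with default allocations.
    alloc = vim.ResourceAllocationInfo(
        reservation=0, limit=-1, expandableReservation=True,
        shares=vim.SharesInfo(level="normal", shares=0))        # shares value ignored unless level is "custom"
    pool_spec = vim.ResourceConfigSpec(cpuAllocation=alloc, memoryAllocation=alloc)
    cluster.resourcePool.CreateResourcePool(name="rp-01", spec=pool_spec)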

Software Configuration

Table 1. Software configuration for the testbed

  ESXi Hosts
    VMware software version: vSphere 5.5 U2 / vSphere 6.0
    VM configuration: --

  vCenter Server Appliance (Linux)
    vCenter Server 6.0: VM with 16 vCPUs, 24 GB vRAM (note 1)
    vCenter Server 5.5 U2: VM with 8 vCPUs, 24 GB vRAM (note 1)

  vCenter Server on Windows (vCenter Server 6.0 / vCenter Server 5.5 U2)
    VM for vCenter: 16 vCPUs, 32 GB vRAM (note 1), Microsoft Windows Server 2012 R2
    VM for the vCenter database: 8 vCPUs, 48 GB vRAM (note 1), Microsoft Windows Server 2012 R2, Microsoft SQL Server 2012

  Note 1: This configuration supports an inventory of up to 1,000 hosts and 10,000 virtual machines.

Performance Benchmark

A benchmarking tool was used to stress the performance and scalability of vCenter Server. The tool lets performance engineers specify a list of management operations to issue through the vSphere Web Services SDK and lets the testing team vary the number of concurrent operations issued to the server. The mix of management operations was chosen based on data collected from a wide range of customer installations of vCenter Server [2].

Workload

The workload uses the vSphere Web Services SDK to simulate a high-churn vCenter environment [2]. Each client repeatedly issues a series of management and provisioning operations to vCenter Server; Table 2 lists the operations performed in this workload (a minimal client sketch follows the table). These operations and their relative frequencies were chosen from a sampling of representative customer data. The intensity of the SDK workload (the performance benchmark) in a given cluster is determined by the number of concurrent clients run in that cluster. Workloads of varying intensity were run to study the scalability of the server.

Table 2. List of management operations issued by the performance benchmark

  Add Port Group: Create a new port group on a vSphere Standard Virtual Switch.
  Remove Port Group: Remove a port group from a vSphere Standard Virtual Switch.
  Clone VM: Create a clone of a virtual machine. (note 2)
  Create Folder: Create a new folder in the vCenter inventory hierarchy.
  Delete Folder: Delete a folder from the vCenter inventory hierarchy.
  Create Snapshot: Create a snapshot of a virtual machine.
  Delete Snapshot: Delete a snapshot of a virtual machine.
  Group Power-On VMs: Power on several virtual machines in a single operation.
  vMotion VM: Move a powered-on virtual machine to a different host.
  Power On VM: Power on a single virtual machine in a DRS cluster.
  Power Off VM: Power off a single virtual machine.
  Reconfigure VM: Edit a virtual machine's configuration settings. (note 3)
  Register VM: Add a .vmx file from a datastore to the vCenter inventory.
  Unregister VM: Remove a virtual machine from the vCenter inventory without deleting its files from its datastore.
  Relocate VM: Move a powered-off virtual machine to a different host.
  Remove VM: Delete a virtual machine from the vCenter inventory and delete its files from its datastore.
  Reset VM: Reset a single virtual machine.
  Suspend VM: Suspend a single virtual machine.
  Resume VM: Resume a single virtual machine.

  Note 2: The performance of a clone operation depends on the size of the VM being cloned.
  Note 3: The benchmark's Reconfigure operation uses VM Memory Shares as a representative configuration setting.
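To make the shape of this workload concrete, here is a minimal, hypothetical client sketch in the spirit of the benchmark; it is not VMware's actual tool. It assumes the pyVmomi connection si from the earlier sketches, the VM names are placeholders, and it issues only a small subset of the Table 2 operations (snapshot create/delete and power off/on) in a loop.

    # Hypothetical benchmark-style client: repeatedly issues a small mix of Table 2
    # operations against placeholder VMs. Not the actual benchmark used in this paper.
    import random
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def find_vm(content, name):
        """Return the first VM in the inventory with the given name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        try:
            return next(v for v in view.view if v.name == name)
        finally:
            view.Destroy()

    def one_iteration(content, vm_name):
        """Create and delete a snapshot, then power-cycle the VM."""
        vm = find_vm(content, vm_name)
        WaitForTask(vm.CreateSnapshot_Task(name="bench", description="",
                                           memory=False, quiesce=False))
        WaitForTask(vm.snapshot.currentSnapshot.RemoveSnapshot_Task(removeChildren=True))
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            WaitForTask(vm.PowerOffVM_Task())
        WaitForTask(vm.PowerOnVM_Task())   # in a DRS cluster, DRS picks the host

    content = si.RetrieveContent()
    for _ in range(10):                    # each client repeatedly issues operations
        one_iteration(content, random.choice(["bench-vm-001", "bench-vm-002"]))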

vCenter Server 6.0 Cluster Performance: A Case Study

To study the cluster scalability and performance of vCenter Server 6.0, a series of experiments was undertaken:

- Comparison of vCenter Server 5.5 with vCenter Server 6.0, using ESXi 5.5 hosts.
- Performance of vCenter Server 6.0 managing a cluster of all ESXi 6.0 hosts.
- Comparison of the performance of vCenter Server 6.0 on Windows with the vCenter Server Appliance (VCSA) 6.0.
- Scaling of vCenter Server 6.0 with inventory size and workload.

vCenter 5.5 Cluster vs. vCenter 6.0 Cluster

This test compared a cluster managed by the vCenter Server 5.5 Linux appliance with one managed by the vCenter Server Appliance 6.0. The high-churn workload described in the previous section was run against each vCenter Server cluster, using the maximum supported number of hosts and virtual machines with DRS enabled in each case:

- The vCenter 5.5 environment had 32 ESXi hosts and 3,000 virtual machines.
- The vCenter 6.0 environment had 64 ESXi hosts and 8,000 virtual machines.

[Figure 3. vCenter 5.5 vs. 6.0 performance comparison: benchmark throughput in operations/minute under heavy load, for vCenter 5.5 (32 hosts, 3,000 VMs) and vCenter 6.0 (64 hosts, 8,000 VMs); higher is better]

As Figure 3 illustrates, vCenter 6.0 supports a larger cluster size with higher operational throughput than vCenter 5.5. Moreover, despite the larger cluster size, individual operation latencies have also improved from 5.5 to 6.0. Figure 4 shows a sample of these improvements: vMotion, power-on, and clone latency have improved by 59%, 25%, and 18%, respectively.

[Figure 4. Improvement in operational latency in vCenter 6.0, shown as 6.0 latency as a percentage of 5.5 latency (lower is better): clone 82%, power-on 75%, vMotion 41%; a relative latency of 41% corresponds to the 59% reduction cited above]

The performance of many key operations in a DRS cluster, including power-on and vMotion, depends on the performance of DRS itself. Improvements to DRS data collection and to the calculation of virtual machine placement have resulted in lower latencies for these operations in vCenter 6.0.

Workload Scaling

To study the impact of workload intensity on cluster performance, different workload configurations were run against both vCenter 5.5 and 6.0. In both cases, the cluster size remained constant at the maximum supported for the given vCenter version. Each workload was defined as light, medium, or heavy based on the number of concurrent clients. Figure 5 illustrates the improvements in concurrency in vCenter 6.0: at a light load, and with a larger inventory than 5.5, vCenter 6.0 can drive 3 times as many concurrent operations; at a heavy load, it can drive more than 2 times as many concurrent operations as 5.5. A minimal sketch of scaling the number of concurrent benchmark clients follows.
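Because workload intensity in these tests is defined by the number of concurrent benchmark clients, scaling the load amounts to running more clients in parallel. The hypothetical sketch below builds on one_iteration() from the earlier client sketch and runs it from several threads; the client counts for light, medium, and heavy load are illustrative only and are not the values used in this study.

    # Hypothetical load scaling: run several concurrent clients per load level.
    # Client counts are illustrative; they are not the study's actual light/medium/heavy values.
    import random
    from concurrent.futures import ThreadPoolExecutor

    def run_client(iterations=10):
        content = si.RetrieveContent()     # reuses the shared pyVmomi connection `si`
        for _ in range(iterations):
            one_iteration(content, random.choice(["bench-vm-001", "bench-vm-002"]))

    for label, clients in [("light", 2), ("medium", 8), ("heavy", 16)]:
        with ThreadPoolExecutor(max_workers=clients) as pool:
            futures = [pool.submit(run_client) for _ in range(clients)]
            for f in futures:
                f.result()                 # propagate any client errors
        print(f"{label} load: {clients} concurrent clients completed")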

[Figure 5. Benchmark throughput (operations/minute) under light, medium, and heavy load at constant cluster size, for vCenter 5.5 (32 hosts, 3,000 VMs) and vCenter 6.0 (64 hosts, 8,000 VMs), illustrating better performance in vCenter 6.0 (higher is better)]

vCenter 6.0 with ESXi 6.0 Hosts

The previous sections used vCenter Server 6.0 to manage ESXi 5.5 hosts. This section demonstrates the performance improvements when ESXi 6.0 hosts are used in conjunction with vCenter Server 6.0. The testbed configuration was otherwise exactly the same as in the previous test.

[Figure 6. Benchmark throughput (operations/minute) for vCenter 6.0 with 64 hosts and 8,000 VMs, comparing ESXi 5.5 hosts with ESXi 6.0 hosts (higher is better)]

The throughput of vCenter Server increases by about 60% when ESXi 6.0 hosts are used instead of ESXi 5.5 hosts (Figure 6). vSphere 6.0 includes a number of significant improvements to the management agents on the ESXi host that reduce resource usage and lower latency across key operation code paths, resulting in reduced per-host operational latencies. Figure 7 shows the latency improvement in vCenter 6.0 with ESXi 6.0 over ESXi 5.5 for some of these operations.

[Figure 7. Improvement in operational latency for clone, power-on, and vMotion in vCenter 6.0 with ESXi 6.0 hosts, shown as a percentage of the latency with ESXi 5.5 hosts (lower is better)]

vCenter Server Appliance vs. vCenter Server on Windows

The test in this section compares the performance of vCenter Server running in a Windows virtual machine with an external Microsoft SQL Server database against the vCenter Server Appliance (VCSA), which runs in a Linux virtual machine with an embedded vPostgres database. vCenter Server on Windows was configured according to the best practice recommendations described in VMware vCenter Server 6.0 Performance and Best Practices [2]. A SQL Server database was used because it can support more than 20 ESXi hosts and 200 virtual machines. The VCSA was configured mostly with out-of-the-box settings, including the embedded vPostgres database.

[Figure 8. Benchmark throughput (operations/minute) under heavy load, comparing the VCSA with vCenter Server on Windows]

Figure 8 illustrates that, under the same heavy workload, the VCSA with its embedded database outperforms vCenter Server on Windows with an external SQL Server database. One reason is that the embedded database runs on the same node as vCenter Server itself, reducing the latency of database-related operations. For more best practices, see VMware vCenter Server 6.0 Performance and Best Practices [2].

vCenter Cluster Performance and Scalability

The previous tests show the performance of vCenter at its cluster scale limits. The tests in this section illustrate how vCenter scales with inventory size and applied load.

[Figure 9. vCenter 6.0 operation throughput (operations/minute) with inventory and workload scaling, for configurations ranging from 4 hosts, 500 VMs, and 1 client up to 64 hosts, 8,000 VMs, and 16 clients]

As shown in Figure 9, vCenter Server 6.0 displays a steady increase in throughput as it is scaled out with more inventory (ESXi hosts and virtual machines) and more workload-generator clients. It is also useful to examine vCenter CPU and memory usage as a function of workload and inventory size to demonstrate the scalability of vCenter Server.

[Figure 10. vCenter Server 6.0 CPU usage (CPU used %) as a function of inventory size and workload: 20% at 4 hosts/512 VMs with 1 client, 45% at 8 hosts/1K VMs with 2 clients, 98% at 16 hosts/2K VMs with 4 clients, 232% at 32 hosts/4K VMs with 8 clients, and 572% at 64 hosts/8K VMs with 16 clients]

Figure 10 shows vCenter CPU usage as a function of increasing workload and inventory size. As the graph shows, the CPU usage of vCenter Server 6.0 is strongly related to the number of management operations being performed: as the inventory size and workload are increased, the amount of CPU used also increases. Figure 11 shows that the memory usage of vCenter Server grows as the inventory is scaled up: the more objects vCenter Server manages, the larger its memory footprint. Figure 12 shows the memory usage growth of vCenter Server as the workload is increased while the inventory size remains constant.

[Figure 11. vCenter Server memory usage (MB) as a function of inventory size and workload: 442 MB at 4 hosts/512 VMs, 594 MB at 8 hosts/1K VMs, 1,198 MB at 16 hosts/2K VMs, 2,458 MB at 32 hosts/4K VMs, and 5,456 MB at 64 hosts/8K VMs]

[Figure 12. vCenter Server memory usage (MB) as a function of workload only, with a constant inventory size: 5,054 MB with 8 clients, 5,456 MB with 16 clients, and 6,036 MB with 32 clients]

The memory usage of vCenter Server 6.0 depends more on the inventory size than on the workload. This illustrates the importance of proper sizing when provisioning a vCenter Server. For specific guidelines on sizing vCenter Server, refer to the vCenter Server 6.0 deployment guide [3].

Performance Considerations

The tests in this paper reveal several performance considerations to keep in mind:

- In a DRS-enabled cluster, as the cluster size (number of ESXi hosts and virtual machines) increases, some operations are affected more than others. The latencies of power-on (vm.poweron, datacenter.poweron), vMotion (vm.migrate), and relocate (vm.relocate) increase as the cluster grows and the workload increases, mostly because DRS must do more work in a larger cluster to determine where to place a virtual machine that is being powered on. Tracking the latency of these operations as the cluster grows is therefore useful; a minimal sketch follows this list.
- Running vCenter Server 6.0 with ESXi 6.0 hosts results in better performance than with ESXi 5.5 hosts.
- Consider vCenter Server memory requirements when scaling up a vCenter cluster.
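One practical way to act on the first consideration is to time DRS-sensitive operations through the SDK as the cluster grows. The hypothetical sketch below times a single virtual machine power-on, assuming the pyVmomi connection si and the find_vm() helper from the earlier sketches; the VM name is a placeholder, and the same pattern applies to vMotion and relocate tasks.

    # Hypothetical latency probe: time a DRS-placed power-on through the SDK.
    # Assumes the pyVmomi connection `si` and find_vm() from the earlier sketches.
    import time
    from pyVim.task import WaitForTask

    def timed_power_on(content, vm_name):
        vm = find_vm(content, vm_name)
        start = time.time()
        WaitForTask(vm.PowerOnVM_Task())   # DRS chooses the host in a DRS cluster
        return time.time() - start

    content = si.RetrieveContent()
    print("vm.poweron latency: %.2f s" % timed_power_on(content, "bench-vm-001"))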

References

[1] VMware, Inc. (2013) vSphere 5.5 Resource Management. https://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-resource-management-guide.pdf

[2] Mike Stunes and Ravi Soundararajan. (2015, August) VMware vCenter 6.0 Performance and Best Practices. https://communities.vmware.com/docs/doc-29203

[3] Mike Brown. (2015, February) VMware vCenter Server 6.0 Deployment Guide. https://www.vmware.com/files/pdf/techpaper/vmware-vcenter-server6-deployment-guide.pdf

About the Authors

Sai Manohar Inabattini is a performance engineer with the vCenter Server performance group. He works on vCenter Server performance and scalability, with a special focus on the Distributed Resource Scheduling (DRS) algorithm. Sai holds a bachelor's degree in computer science from the National Institute of Technology at Warangal, India.

Adarsh Jagadeeshwaran works in the vCenter Server performance group at VMware, leading a team focused on vCenter Server resource management and scalability. Adarsh holds a master's degree in computer science from Syracuse University, Syracuse, USA.

Acknowledgements

The authors would like to express their sincere thanks to Shruthi Kodi for her efforts in running all the performance experiments and collecting data for this paper. The authors would also like to thank Satyen Abrol and Lei Chai for their help with experiments, and Ravi Soundararajan, Priya Sethuraman, Maarten Wiggers, Naveen Nagaraj, and Zhelong Pan for their invaluable reviews and support of this paper.

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com

Copyright 2015 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Date: 27-Aug-15

Comments on this document: https://communities.vmware.com/docs/doc-30167