White Paper

Virtualization Performance on SGI UV 2000 using Red Hat Enterprise Linux 6.3 KVM

September 2013

Author: Sanhita Sarkar, Director of Engineering, SGI

Abstract: This paper describes how to implement Red Hat Enterprise Linux 6.3's Kernel-based Virtual Machine (KVM) on an SGI UV system to consolidate many heterogeneous software environments. The resultant environment provides a complete virtual machine for each guest with integrated I/O capabilities and a large external storage pool. This paper presents benchmarks and explains best practices derived from case studies. These benchmarks show that, with minimal configuration, near-native performance is easily achieved on the SGI UV 2000.
TABLE OF CONTENTS

1.0 Introduction
2.0 Using Red Hat Enterprise Linux 6.3's KVM
2.1 Heterogeneous Software Environments with Virtualization
3.0 Performance Benchmarks on SGI UV
3.1 Benchmark Configuration
3.2 Benchmark Workloads
3.3 Performance Metrics
3.4 KVM Configuration Descriptions
3.5 Benchmark Results
    Memory Latency and Bandwidth
    I/O Throughput
    Running SPECjbb 2005
    KVM Configuration Comparisons Using SPECjbb 2005, 32 cores
    KVM Configuration Comparisons Using SPECjbb 2005, 160 cores
    KVM Scaling Comparisons Using SPECjbb 2005 Running 1 JVM/node
    KVM vs. Native SPECjbb 2005 Scaling Comparison
    Multiple JVM Mode Comparison with SPECjbb 2005
3.6 Performance Summary
4.0 Best Practices
5.0 Summary
1.0 Introduction

Despite the advantages of virtualized environments, careful planning is necessary for a successful implementation. Some considerations include:

- The consolidation of multiple VMs onto one physical machine can magnify the impact of hardware failures. Careful planning is required for disaster recovery.
- Incorrect VM implementation can degrade application performance. Following best practices is critical to avoid severe performance penalties. There is a trade-off between consolidation and performance that must be balanced according to the use case.
- Applying system management tools in a virtual environment is non-trivial.
- Unneeded or over-specified virtual machines in a data center waste the resources of their host servers.

The above considerations stress the importance of proper planning before implementing a virtualized solution. The best results are achieved only by following best practices and being aware of the specific needs of a virtualized environment.

2.0 Using Red Hat Enterprise Linux 6.3's KVM

Red Hat Enterprise Linux (RHEL) 6.3's Kernel-based Virtual Machine (KVM) is an excellent virtualization solution, as it solves several major problems:

- KVM, being a kernel-level virtualization hypervisor, allows Linux guest OSes to run with the same level of hardware support as on bare metal, with no additional guest drivers necessary.
- KVM is actively supported by the open-source community. Any modern Linux distribution should run on top of KVM as a guest OS without needing additional software or drivers.
- KVM allows many customization options for the VM.
- The GUI Virtualization Manager in RHEL 6.3 KVM allows easy creation and management of VMs. For example, one can create virtual machines using a wizard and manage multiple virtual machines within a GUI interface.
- KVM supports mature system administration tools, making it easier to administer many VMs in a datacenter.
For example, the command line tool virsh can be used to manage virtual machines. The virsh tool has an option called vcpupin that dynamically maps virtual cores (vcpus) in a running VM to physical cores on the system. For more control and tuning capability, one can use the virsh edit command to edit the XML files that define the VM configuration. For example, the cpuset and vcpupin directives can be used to assign a VM to a particular set of CPUs and pin virtual CPUs to physical cores on the system. As of the date of this paper (July 2013), RHEL 6.3 KVM supports 160 virtualized CPU cores (vcpus) per guest OS (VM), allowing massive scalability.

An SGI UV running RHEL is perfectly suited for consolidating many applications onto a single system. It provides the largest single-system image on the market, allowing unparalleled performance and scalability.

2.1 Heterogeneous Software Environments with Virtualization

Virtualization using RHEL 6.3 KVM technology provides a complete virtual machine for each guest. A virtual machine appears to be an actual hardware system to the operating system and applications running on it. Unlike Linux Containers, KVM virtual machines require the installation of a guest operating system. This guest operating system is completely independent from the host and other installed VMs. Each guest is
installed, managed, maintained, and updated independently. Completely different versions of software can be installed on each guest. The result is an ideal environment for server consolidation.

Figure 1 shows the following components of a virtualized environment:

- Host: The physical computer on which the virtual machine is loaded.
- Virtual Machine: The software environment which appears to a guest OS as hardware. It consists of computing power (CPU), memory, network interface(s), and storage.
- Virtualization Layer: The KVM software layer (hypervisor) that makes the physical hardware resources available to the virtual machines.
- Host OS: The operating system installed on the bare physical computer (host).
- Server Hardware: The hardware components of the physical computer, shown here also attached to some external storage.

Figure 1: Red Hat Enterprise Linux 6.3 KVM consolidating heterogeneous software environments

3.0 Performance Benchmarks on SGI UV

This section describes the hardware and software configurations used for running a few performance benchmarks on an Intel Xeon E processor-based SGI UV server platform, as well as the benchmark results.

3.1 Benchmark Configuration

The following benchmark configuration is used:

Server:
-- 1 x SGI UV system with:
   32 x Intel Xeon E GHz processors
   256 x 16 GB DIMMs (4 TB total)
External Storage:
-- 2 x SGI IS5500, each with 4 LUNs, each LUN with 7 x 600 GB 10K RPM SAS drives in RAID0
-- Storage is attached via 8 Gb Fibre Channel HBAs
OS and Virtualization Environment:
-- Host & Guest OS: Red Hat Enterprise Linux Server release 6.3, kernel el6.x86_64 SMP
-- KVM software: libvirt el6.x86_64; qemu-kvm el6.x86_64; virt-manager el6.x86_64
-- Java Virtual Machine (JVM): Oracle Java HotSpot 1.6.0_23-b05 (64-bit)

3.2 Benchmark Workloads

The following benchmark workloads were executed on the SGI UV in different modes, once on bare metal and then under different RHEL 6.3 KVM configurations, to show (a) the relative performance of a virtualized environment to the bare physical hardware (i.e., native performance) and (b) the benefits of using different RHEL 6.3 KVM configuration options:

- LMbench 3.0, measuring the memory latency and bandwidth on UV and KVM;
- IOzone v3.414, assessing the I/O capabilities of external storage on UV and KVM;
- SPECjbb 2005 v1.07, measuring the throughput and scalability of Java applications on the UV NUMA architecture and assessing the overall efficiency of the KVM environment with such applications.

3.3 Performance Metrics

The performance metrics for the LMbench, IOzone and SPECjbb 2005 benchmarks are, respectively:

- Memory latency (ns) and memory bandwidth (MB/s);
- I/O throughput (MB/s);
- BOPS (business operations per second).

3.4 KVM Configuration Descriptions

Four different VM configurations were used for the testing:

- Default VM: The default VM as created using virt-manager. In this case, each vcpu is free to use any core on any socket (Figure 2).
- VM with NUMA nodes: This VM is created by editing the VM definition file (an XML file in /etc/libvirt/qemu) and using the cell construct. When the VM boots, the cells appear as NUMA nodes within the VM (Figure 3).
- VM with vcpu pinning: This VM is created by editing the VM definition file and using the cpuset and vcpupin constructs. These constructs assign vcpus to physical cores on the host on a 1:1 basis and pin them there (Figure 4).
- Multiple VMs with 1 VM per node (socket)6: Each node is assigned only one VM, with the number of VMs created determined by the demands of the application. The number of nodes sets an upper limit for the number of VMs that can be created. Each VM is thus aligned to a specific socket (Figure 5).

For visual simplicity, Figures 2-5 show a hypothetical system with 8 sockets (nodes), with each node containing 8 cores and with hyperthreading turned off.

6 A NUMA node, or node, relates to a physical socket on the system. There are 8 cores per node; enabling hyperthreading doubles this to 16 logical cores per node.
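The cpuset, vcpupin, and cell constructs described above can be sketched as follows. This is a hedged illustration, not the paper's exact configuration: the VM name (vm1), memory size, and core numbers are hypothetical, assuming an 8-core node 0.

```shell
# Hypothetical sketch: VM name "vm1", core numbers, and memory size are
# examples, not taken from the paper.

# Dynamically pin each of a running VM's 8 vcpus 1:1 to the physical
# cores of NUMA node 0 (cores 0-7 on this hypothetical topology).
for v in 0 1 2 3 4 5 6 7; do
    virsh vcpupin vm1 "$v" "$v"
done
virsh vcpupin vm1        # verify the vcpu-to-core mapping

# The same pinning can be made persistent with "virsh edit vm1", using the
# cpuset and vcpupin constructs in the domain XML, e.g.:
#   <vcpu cpuset='0-7'>8</vcpu>
#   <cputune>
#     <vcpupin vcpu='0' cpuset='0'/>
#     ...
#     <vcpupin vcpu='7' cpuset='7'/>
#   </cputune>
#
# A "VM with NUMA nodes" additionally uses the cell construct, e.g.:
#   <cpu>
#     <numa>
#       <cell cpus='0-7' memory='33554432'/>   # 32 GB, expressed in KiB
#     </numa>
#   </cpu>
```

The dynamic form takes effect immediately on a running guest; the XML form survives guest restarts.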
Figure 2: Depiction of a Default VM with no vcpu Pinning
Figure 3: Depiction of a VM with NUMA Nodes with vcpu Pinning
Figure 4: Depiction of a VM with vcpu Pinning but no NUMA Nodes
Figure 5: Depiction of Multiple VMs with 1 VM per Node

3.5 Benchmark Results

This section presents benchmark results for a variety of different system configurations. These results guide the creation of best practices for UV configuration.

3.5.1 Memory Latency and Bandwidth

Results from the LMbench 3.0 latency tests show that the main memory latency under RHEL 6.3 KVM is nearly the same as Native, with random memory latency only 4% higher than Native. The KVM VM is running with 16 vcpus pinned to one CPU socket with HT enabled (Figure 6).7

Figure 6: LMbench 3.0 Memory Latency Tests on SGI UV 2000: RHEL 6.3 KVM vs. Native (latency in ns, lower is better; L1, L2, main memory, and random memory; lmbench run on 1 socket, HT=On)

7 For many of the test results in this paper, the difference in performance from setting hyperthreading on or off is negligible; thus, only one case is shown.
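A latency measurement of this kind can be reproduced with lmbench's lat_mem_rd; the array size and stride below are illustrative choices, not the paper's exact parameters.

```shell
# Illustrative sketch: the 512 MB array and 128-byte stride are assumed values.
# Bind CPU and memory to NUMA node 0 so Native and pinned-VM runs are comparable.
numactl --cpunodebind=0 --membind=0 lat_mem_rd 512 128

# lat_mem_rd prints (array size in MB, load latency in ns) pairs; once the
# array exceeds all cache levels, the reported latency is main-memory latency.
# The same command runs unchanged inside the VM.
```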
Similarly, the LMbench 3.0 bandwidth tests (Figure 7) show the memory bandwidth for KVM as just 5% less than Native.

Figure 7: LMbench 3.0 Memory Bandwidth Tests on SGI UV 2000: RHEL 6.3 KVM vs. Native

However, the file reread test gave anomalous results, with the KVM being more than 400% faster than Native. This discrepancy is due to the kernel setting memory_spread_page, which controls how the file buffer cache is allocated. If set to 1 (buffer spread), the file buffer cache is spread among all nodes. If set to 0 (buffer local), the file buffer cache is allocated local to the node where the request is made to create a file. Changing memory_spread_page from buffer spread to buffer local (1 to 0) improved Native's file reread result by 445%, without much impact to the other Native and KVM results. Changing this kernel setting does not impact KVM performance, since a VM pinned to a node automatically mimics the behavior of the buffer local setting. The memory_spread_slab setting behaves similarly for the allocation of kernel slab memory. After setting both parameters to 0, all the results show KVM performance less than 5% below Native.

3.5.2 I/O Throughput

This section describes the I/O performance tests executed on an SGI UV directly attached to an external SGI IS5500 storage subsystem. Following are some benchmark specifics:

- The IOzone benchmark was run with Direct I/O using 8 threads writing 50 GB files to xfs filesystems on 8 LUNs on two IS5500s.
- The virtual machine's virtual hard disk drive was created using the virtio device driver with an XFS filesystem datastore and a raw disk image. Within the VM, the virtual devices (/dev/vd*) were formatted with xfs filesystems.
- For Native tests, the SCSI (/dev/sd*) LUN devices were formatted with xfs filesystems.
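A command line matching these specifics might look like the following sketch; the mount points and record size are assumptions, not the paper's exact invocation.

```shell
# Hedged sketch: the mount points (/mnt/lun1 ... /mnt/lun8) and the 1 MB
# record size are assumptions. -I requests O_DIRECT, -t 8 runs 8 throughput
# threads, -s 50g writes a 50 GB file per thread, with one file per
# xfs-formatted LUN so each thread drives a separate filesystem.
iozone -I -t 8 -s 50g -r 1m \
    -F /mnt/lun1/f1 /mnt/lun2/f2 /mnt/lun3/f3 /mnt/lun4/f4 \
       /mnt/lun5/f5 /mnt/lun6/f6 /mnt/lun7/f7 /mnt/lun8/f8
```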
Note: This I/O test was done in a localized fashion because, even on bare metal, non-localized I/O and (especially) buffered I/O exhibit very random behavior, since the location of the buffer cache is not controllable.

Figure 8 shows bandwidth results for file reread operations that are similar to the results for memory bandwidth reread operations (see Figure 7). This is before adjusting the kernel settings for memory_spread_page and memory_spread_slab (i.e., buffer spread). That is, the KVM is outperforming Native.
Figure 8: IOzone Benchmark on SGI UV with Direct-attached External SGI IS5500 Storage: RHEL 6.3 KVM vs. Native

After adjusting the kernel settings to buffer local, Native throughput improved % while KVM throughput remained essentially the same. This led to Native results being approximately 0-18% better than KVM results, depending on the test.8

The KVM has slightly worse throughput with HT on (by about 1-17%) than with HT off (Figure 9). However, the throughput for KVM with HT off is still slightly below the throughput for Native with HT off or HT on for most tests. For the stride read test, the KVM is barely better than Native, but the difference (about 3-6%) is likely statistically insignificant.

Figure 9: IOzone Benchmark on SGI UV 2000 Comparing HT Off and On: RHEL 6.3 KVM vs. Native

8 The KVM drives were configured to use the virtio driver instead of passthrough. Using passthrough should improve performance.
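The memory_spread_page and memory_spread_slab adjustments discussed in these two sections can be sketched as below; the cpuset mount point (/cgroup/cpuset, the RHEL 6 default when the cgconfig service is running) is an assumption about the test system.

```shell
# Sketch, assuming the cpuset cgroup hierarchy is mounted at /cgroup/cpuset
# (the RHEL 6 default via the cgconfig service).
cat /cgroup/cpuset/cpuset.memory_spread_page   # inspect current setting (1 = spread)

# Switch to "buffer local": allocate the file buffer cache and kernel slab
# memory on the node where the requesting task runs.
echo 0 > /cgroup/cpuset/cpuset.memory_spread_page
echo 0 > /cgroup/cpuset/cpuset.memory_spread_slab
```

The settings apply to tasks in that cpuset; on a host where everything runs in the root cpuset, this changes buffer-cache placement system-wide.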
3.5.3 Running SPECjbb 2005

For the purposes of these tests, SPECjbb 2005 was used in a few different modes:

- Single JVM
- Multiple JVM
- Parallel Single JVM

The first two are the official methods for running SPECjbb 2005. The Single JVM method consists of using only one JVM for generating the workload. The Multiple JVM method runs multiple JVMs and sums the score for each to generate an overall score for the test. This method is often used for NUMA architectures. We assigned JVMs to specific vcpus (on VMs) or CPUs (on Native) using the numactl command. This allows the JVMs to be assigned to NUMA nodes, eliminating internode communication on the host.

The final mode (Parallel Single JVM) is a hybrid that is not an official mode for running SPECjbb 2005 but is used in order to drive multiple VMs. SSH is used to run multiple Single JVM runs in parallel on different VMs. This exercises multiple VMs which are assigned to NUMA nodes. The score is the sum of each Single JVM run and is analogous to how the Multiple JVM method score is calculated.

The Multiple JVM mode was used for Native SPECjbb 2005 runs. For KVM runs, either the Multiple JVM mode or the Parallel Single JVM mode was used, depending on the purpose of the experiment.

3.5.4 KVM Configuration Comparisons Using SPECjbb 2005, 32 cores

The following KVM configurations were used for these SPECjbb 2005 experiments:

- Default VM
- VM with NUMA nodes
- VM with vcpu pinning
- Multiple VMs with 1 VM per node

32 cores are the maximum number of vcpus that can be used in a VM when using the cell construct to create NUMA nodes within a VM. This is much lower than the maximum number of cores that can be used in a VM with RHEL 6.3 KVM (160 cores), which is covered in the next section.

Figure 10 and Figure 11 show the results for HT off and HT on, respectively. With HT off, 32 vcpus are spread across 4 nodes while, with HT on, 32 vcpus are spread across only 2 nodes. Native results are also included for comparison.
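The numactl-based JVM placement described for the Multiple JVM method can be sketched as follows; the heap size, property files, and node count are placeholders, not the paper's invocation.

```shell
# Hypothetical sketch: heap size, property files, and the number of nodes are
# placeholders. One JVM is confined (CPUs and memory) to each NUMA node,
# eliminating internode traffic; the per-JVM scores are then summed.
numactl --cpunodebind=0 --membind=0 java -Xmx3g spec.jbb.JBBmain -propfile node0.props &
numactl --cpunodebind=1 --membind=1 java -Xmx3g spec.jbb.JBBmain -propfile node1.props &
wait    # let both runs finish before summing the scores
```

Inside a VM, the same commands bind JVMs to the guest-visible (virtual) NUMA nodes; on Native, they bind to the physical nodes.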
As NUMA awareness increases, SPECjbb 2005 throughput increases. This increase is more pronounced with hyperthreading off. With hyperthreading on, a VM uses 16 logical cores per node (versus 8 cores per node with HT off); the resultant additional resource contention decreases the impact of additional NUMA awareness. Diminishing returns in performance take effect after the VM with NUMA nodes configuration (a ~28% gain with HT on and 2 nodes).
Figure 10: KVM Configuration Comparisons Using SPECjbb 2005, 32 cores, HT Off (1 JVM per 8 cores, 8 cores/node; total BOPS in millions, higher is better)

Figure 11: KVM Configuration Comparisons Using SPECjbb 2005, 32 cores, HT On (1 JVM per 16 cores, 16 cores/node; total BOPS in millions, higher is better)

3.5.5 KVM Configuration Comparisons Using SPECjbb 2005, 160 cores

The configurations used in these experiments are unchanged from the previous section (32 cores), except that the VM with NUMA nodes configuration is not used, as it is not supported for more than 32 cores. 160 cores is the maximum number of cores supported for a VM in RHEL 6.3 KVM, which is why 160 cores were used in these tests.

Figures 12 and 13 show the results for hyperthreading off and on, respectively. With HT off, 160 vcpus are spread across 20 nodes; with HT on, 160 vcpus are spread across 10 nodes. As with 32 cores, increased NUMA awareness increases throughput. Again, this effect is more pronounced with HT off. Pinning vcpus has less impact than it did with 32 cores, likely because the VMs are spread across more nodes. Switching to 1 VM per node substantially increases throughput; as was the case with the 32 core experiment, performance is within 18% of Native.
Figure 12: KVM Configuration Comparisons Using SPECjbb 2005, 160 cores, HT Off (1 JVM per 8 cores, 8 cores/node; total BOPS in millions, higher is better)

Figure 13: KVM Configuration Comparisons Using SPECjbb 2005, 160 cores, HT On (1 JVM per 16 cores, 16 cores/node; total BOPS in millions, higher is better)

These results highlight the superiority of using 1 VM per node on KVM, instead of the default VM or vcpu pinning configurations. In the 1 VM per node configuration, the system's resources are better isolated and utilized by the VMs, thus increasing throughput. Since there is a bigger increase in the number of nodes being used, the delta between the 1 VM per node case and the other cases is much larger. That is, for the 32 core experiments, the configuration change moves from 1 VM to only 4 or 2 VMs (for HT off and HT on, respectively). For the 160 core experiments, the configuration change moves from 1 VM to 20 or 10 VMs (for HT off and HT on, respectively).

3.5.6 KVM Scaling Comparisons Using SPECjbb 2005 Running 1 JVM/node

For these experiments, the same three KVM configurations are used as in the 160 core experiments:

- Default VM
- VM with vcpu pinning
- Multiple VMs with 1 VM per node

Figure 14 and Figure 15 (for HT off and HT on, respectively) show that the default VM and the VM with vcpu pinning configurations see little benefit from using more than four nodes. This is in stark contrast to the scaling seen with the 1 VM per node and Native configurations. This further shows that running 1 VM per node offers the maximum performance with RHEL 6.3 KVM.
Figure 14: KVM Scaling Comparisons Using SPECjbb 2005, 1 JVM per 8-core node, HT Off (total BOPS in millions vs. number of nodes used, higher is better)

Figure 15: KVM Scaling Comparisons Using SPECjbb 2005, 1 JVM per 16-core node, HT On (total BOPS in millions vs. number of nodes used, higher is better)

3.5.7 KVM vs. Native SPECjbb 2005 Scaling Comparison

These tests compare the optimal KVM configuration (1 VM per node) to Native for increasing numbers of JVMs. By increasing the number of VMs (and thus JVMs), more NUMA nodes are utilized on the SGI UV system. Figure 16 shows KVM and Native scaling linearly with the number of JVMs. Turning on HT increased performance by about 10% on both configurations. Native performance is roughly 17% better than KVM performance.
Figure 16: KVM vs. Native SPECjbb 2005 Scaling Comparison, HT Off and On

3.5.8 Multiple JVM Mode Comparison with SPECjbb 2005

The previous experiments used different methods of running SPECjbb 2005 to find the best KVM VM configurations. By contrast, this experiment was designed to observe the performance of SPECjbb 2005 using the same configuration on KVM and Native. The Multiple JVM method, which is optimal for the Native environment, is used. Since KVM is confined to 32 cores when configuring NUMA nodes within a VM, 32 core VMs with either 4 or 2 NUMA nodes (for HT off and on, respectively) are used for comparison. The Multiple JVM SPECjbb 2005 method is run on a single VM with multiple nodes and on Native with multiple nodes.

Figure 17 shows results similar to the previous experiments using multiple VMs:

- Both KVM and Native performance scaled linearly as the number of JVMs, or nodes, was increased.
- On both KVM and Native, having HT on increased performance by about 11%.
- As a VM is configured to use more physical nodes, its performance does not improve as much as Native tests using more physical nodes (see trend lines in Figure 16), even with vcpu pinning.
- Native performance is about 17% better than KVM for both HT off and HT on when using a single node, but Native throughput is 23-26% better than KVM when 2 nodes are used and 57% better when 4 nodes are used (HT off).
Figure 17: KVM vs. Native SPECjbb 2005 Multiple JVM Mode Comparison

3.6 Performance Summary

To summarize performance:

- Measured by memory latency and bandwidth, KVM performance is % of Native performance.
- With Java applications, measured via SPECjbb 2005 (using 1 JVM per node), KVM is 83% of Native performance. Hyperthreading adds 10% more throughput for both KVM and Native.
- For I/O throughput measured via IOzone (with HT on), KVM performance is % of Native performance.
- Experiments show that the kernel parameters memory_spread_page and memory_spread_slab play an important role in Native I/O throughput and Native file buffer memory bandwidth. Performance improvements of approximately % and 445%, respectively, can be achieved by toggling these parameters. Changing these parameters does not impact KVM performance, since a VM pinned to a node automatically sees optimal cache behavior.
- Cpuset and vcpupin do a good job of keeping vcpus pinned to cores and using memory local to those cores. Running SPECjbb 2005 on a 4 node VM and monitoring the memory used on the host appears to show that the memory is used on the correct physical nodes. However, the same memory utilization is seen when using vcpupin without the use of cells (and with better performance), so using vcpupin instead of cells is currently recommended. This could change if cells are updated to support more vcpus and more closely align vcpus and memory.
- For a 160 vcpu VM, the overall throughput of the system is increased by using the vcpupin feature, but levels off after 4 JVMs.
4.0 Best Practices

The best practices for achieving optimal performance on a virtualized SGI UV system are as follows:

- The sizes of host and guest OSes supported by RHEL 6.3 KVM today (September 2013) are 160 CPUs and 160 vcpus, respectively. It is recommended to stay within this limit on UV, even though UV can scale up to 4,096 physical cores. Table 1 shows the RHEL 6.3 KVM official limits and recommendations on UV.
- For the best performance, use virtual machines with vcpu counts and memory sizes less than or equal to the physical hardware of the NUMA node each runs on. For example, if a UV system has a NUMA node with 8 cores and 64 GB of memory, a VM should be less than or equal to this size. Figure 18 shows an example configuration of a 256 core UV with 32 virtual machines, one on each of the 32 NUMA nodes, each having 8 vcpus. Similarly, a 64-core UV may be configured with 8 virtual machines, each configured on one of the 8 NUMA nodes to use 16 vcpus (with HT on). One could theoretically configure a maximum of 512 virtual machines on a 512-core UV system having 4 TB of memory by restricting each VM to 1 core and 8 GB of memory.
- When configuring a VM that must span more than one NUMA node, it is important to use vcpupin directives to keep the vcpus pinned to physical cores. This can be done within Virt-Manager, with virsh vcpupin, or through configuration in the VM XML definition file via virsh edit.
- Enabling transparent hugepages (THP) on the UV host, which enables THP for the KVM guests, results in better KVM performance.
- When using hyperthreading-capable CPUs on UV, it is recommended to allocate to a VM a number of vcpus less than or equal to the number of threads that exist on a CPU. For example, if a CPU has 8 cores/16 threads, a VM with 16 or fewer vcpus should be used. Be sure to configure the VM to use the correct CPU numbers associated with that socket.
- Use virtual hard disks via the virtio driver to attach external I/O to a VM on UV.
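A few of these recommendations can be checked from the host shell, as in the sketch below; the sysfs paths are standard RHEL 6.3 locations, and the VM name vm1 is a placeholder.

```shell
# Illustrative checks on the UV host; "vm1" is a placeholder VM name.

# Enable transparent hugepages on the host (KVM guests then benefit as well),
# and confirm the active mode (shown in brackets, e.g. "[always] madvise never").
echo always > /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/enabled

# Confirm a guest's vcpus remain pinned to the intended physical cores.
virsh vcpupin vm1

# Check which NUMA nodes the guest's memory actually landed on
# (looking up the qemu-kvm pid by name is a placeholder approach).
numastat -p "$(pgrep -f 'qemu-kvm.*vm1' | head -1)"
```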
                 Host          Virtual Machine (Official)   Virtual Machine on UV 2000
Max Cores        160           160                          Up to the number of cores on a NUMA node
Max Memory       2 TB/64 TB    2 TB                         Up to the amount of memory on a NUMA node

Table 1: RHEL 6.3 KVM official limits and SGI UV recommendations
Figure 18: Consolidation of Heterogeneous Workloads on SGI UV using Red Hat 6.3 KVM. One physical server (UV 2000, 256 cores) running one OS and one application, versus the same server configured as 32 virtual machines (one per 8-core NUMA node) running 32 different types of workloads (database, application, Java, and standby servers) on different OSes (Red Hat Enterprise Linux 5.0/6.3 and SUSE Linux Enterprise Server 10/11), all on the RHEL 6.3 KVM hypervisor layer.

5.0 Summary

The SGI UV 2000 makes full use of the new virtualization features of Red Hat Enterprise Linux 6.3 KVM, up to KVM's maximum host size limits. This enables the consolidation of multiple physical servers onto one UV, which increases utilization of computing resources, eases manageability, improves energy savings, and leads to a more cost-effective solution. SGI UV with RHEL 6.3 KVM enables easily manageable, high performance installations that can run multiple database deployments, collections of commercial applications, development/debugging environments, and IT infrastructures.

Global Sales and Support: sgi.com/global

© 2013 SGI. SGI and UV are registered trademarks or trademarks of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. SPEC and SPECjbb are registered trademarks of the Standard Performance Evaluation Corporation (SPEC). All other trademarks are property of their respective holders.
Red Hat Network Satellite Management and automation of your Red Hat Enterprise Linux environment WHAT IS IT? Red Hat Network (RHN) Satellite server is an easy-to-use, advanced systems management platform
RED HAT ENTERPRISE VIRTUALIZATION PERFORMANCE: SPECVIRT BENCHMARK AT A GLANCE The performance of Red Hat Enterprise Virtualization can be compared to other virtualization platforms using the SPECvirt_sc2010
HP SN1000E 16 Gb Fibre Channel HBA Evaluation Evaluation report prepared under contract with Emulex Executive Summary The computing industry is experiencing an increasing demand for storage performance
Virtualizing a Virtual Machine Azeem Jiva Shrinivas Joshi AMD Java Labs TS-5227 Learn best practices for deploying Java EE applications in virtualized environment 2008 JavaOne SM Conference java.com.sun/javaone
Analysis of VDI Storage Performance During Bootstorm Introduction Virtual desktops are gaining popularity as a more cost effective and more easily serviceable solution. The most resource-dependent process
Giuseppe Paterno' Solution Architect Jan 2010 Red Hat Milestones October 1994 Red Hat Linux June 2004 Red Hat Global File System August 2005 Red Hat Certificate System & Dir. Server April 2006 JBoss April
Performance Study of Oracle RAC on VMware vsphere 5.0 A Performance Study by VMware and EMC TECHNICAL WHITE PAPER Table of Contents Introduction... 3 Test Environment... 3 Hardware... 4 Software... 4 Workload...
Virtualization and the U2 Databases Brian Kupzyk Senior Technical Support Engineer for Rocket U2 Nik Kesic Lead Technical Support for Rocket U2 Opening Procedure Orange arrow allows you to manipulate the
Directions for VMware Ready Testing for Application Software Introduction To be awarded the VMware ready logo for your product requires a modest amount of engineering work, assuming that the pre-requisites
Technical white paper HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief Scale-up your Microsoft SQL Server environment to new heights Table of contents Executive summary... 2 Introduction...
Monitoring Databases on VMware Ensure Optimum Performance with the Correct Metrics By Dean Richards, Manager, Sales Engineering Confio Software 4772 Walnut Street, Suite 100 Boulder, CO 80301 www.confio.com
IOmark- VDI Nimbus Data Gemini Test Report: VDI- 130906- a Test Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VDI, VDI- IOmark, and IOmark are trademarks of Evaluator
Enterprise-Class Virtualization with Open Source Technologies Alex Vasilevsky CTO & Founder Virtual Iron Software June 14, 2006 Virtualization Overview Traditional x86 Architecture Each server runs single
Achieving a Million I/O Operations per Second from a Single VMware vsphere 5.0 Host Performance Study TECHNICAL WHITE PAPER Table of Contents Introduction... 3 Executive Summary... 3 Software and Hardware...
Storage I/O Performance on VMware vsphere 5.1 over 16 Gigabit Fibre Channel Performance Study TECHNICAL WHITE PAPER Table of Contents Introduction... 3 Executive Summary... 3 Setup... 3 Workload... 4 Results...
Scaling in a Hypervisor Environment Richard McDougall Chief Performance Architect VMware VMware ESX Hypervisor Architecture Guest Monitor Guest TCP/IP Monitor (BT, HW, PV) File System CPU is controlled
8Gb Fibre Channel Adapter of Choice in Microsoft Hyper-V Environments QLogic 8Gb Adapter Outperforms Emulex QLogic Offers Best Performance and Scalability in Hyper-V Environments Key Findings The QLogic
Running vtserver in a Virtual Machine Environment Technical Note 2015 by AVTware Table of Contents 1. Scope... 3 1.1. Introduction... 3 2. General Virtual Machine Considerations... 4 2.1. The Virtualization
Architecting for the next generation of Big Data Hortonworks HDP 2.0 on Red Hat Enterprise Linux 6 with OpenJDK 7 Yan Fisher Senior Principal Product Marketing Manager, Red Hat Rohit Bakhshi Product Manager,
White Paper Cisco Prime Home 5.0 Minimum System Requirements (Standalone and High Availability) White Paper July, 2012 2012 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public
RPM Brotherhood: KVM VIRTUALIZATION TECHNOLOGY Syamsul Anuar Abd Nasir Fedora Ambassador Malaysia 1 ABOUT ME Technical Consultant for Warix Technologies - www.warix.my Warix is a Red Hat partner Offers
Basics in Energy Information (& Communication) Systems Virtualization / Virtual Machines Dr. Johann Pohany, Virtualization Virtualization deals with extending or replacing an existing interface so as to
SQL Server Consolidation Using Cisco Unified Computing System and Microsoft Hyper-V White Paper July 2011 Contents Executive Summary... 3 Introduction... 3 Audience and Scope... 4 Today s Challenges...
Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging In some markets and scenarios where competitive advantage is all about speed, speed is measured in micro- and even nano-seconds.
Exploiting The Latest KVM Features For Optimized Virtualized Enterprise Storage Performance Dr. Khoa Huynh (email@example.com) IBM Linux Technology Center Overview KVM I/O architecture Key performance challenges
Red Hat enterprise virtualization 3.0 feature comparison at a glance Red Hat Enterprise is the first fully open source, enterprise ready virtualization platform Compare the functionality of RHEV to VMware
Parallels VDI Solution White Paper VDI SIZING A Competitive Comparison of VDI Solution Sizing between Parallels VDI versus VMware VDI www.parallels.com Parallels VDI Sizing. 29 Table of Contents Overview...
Virtualization Virtualization: extend or replace an existing interface to mimic the behavior of another system. Introduced in 1970s: run legacy software on newer mainframe hardware Handle platform diversity
Cisco for SAP HANA Scale-Out Solution Solution Brief December 2014 With Intelligent Intel Xeon Processors Highlights Scale SAP HANA on Demand Scale-out capabilities, combined with high-performance NetApp
VMware Best Practice and Integration Guide Dot Hill Systems Introduction 1 INTRODUCTION Today s Data Centers are embracing Server Virtualization as a means to optimize hardware resources, energy resources,
Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Application Note Abstract This application note explains the configure details of using Infortrend FC-host storage systems
Toward a practical HPC Cloud : Performance tuning of a virtualized HPC cluster Ryousei Takano Information Technology Research Institute, National Institute of Advanced Industrial Science and Technology
International Journal of Advancements in Research & Technology, Volume 1, Issue6, November-2012 1 VIRTUALIZATION Vikas Garg Abstract: The main aim of the research was to get the knowledge of present trends
Performance and scalability of a large OLTP workload ii Performance and scalability of a large OLTP workload Contents Performance and scalability of a large OLTP workload with DB2 9 for System z on Linux..............
IOmark- VDI HP HP ConvergedSystem 242- HC StoreVirtual Test Report: VDI- HC- 150427- b Test Copyright 2010-2014 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VM, VDI- IOmark, and IOmark
Lab Validation Report Workload Performance Analysis: Microsoft Windows Server 2012 with Hyper-V and SQL Server 2012 Virtualizing Tier-1 Application Workloads with Confidence By Mike Leone, ESG Lab Engineer
WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one
The Foundation of Virtualizatio Virtualizing SAP Software Deployments A Proof of Concept by HP, Intel, SAP, SUSE, and VMware Solution provided by: Virtualizing SAP Software Deployments A Proof of Concept
RUNNING vtvax FOR WINDOWS IN A AVT / Vere Technologies TECHNICAL NOTE AVT/Vere Technical Note: Running vtvax for Windows in a Virtual Machine Environment Document Revision 1.1 (September, 2015) 2015 Vere
Technical White Paper Report Technical Report Benchmarking Hadoop & HBase on Violin Harnessing Big Data Analytics at the Speed of Memory Version 1.0 Abstract The purpose of benchmarking is to show advantages
The Benefits of POWER7+ and PowerVM over Intel and an x86 Hypervisor Howard Anglin firstname.lastname@example.org IBM Competitive Project Office May 2013 Abstract...3 Virtualization and Why It Is Important...3 Resiliency
Red Hat Enterprise Virtualization Performance Mark Wagner Senior Principal Engineer, Red Hat June 13, 2013 Agenda Overview Features that help with Performance Tuning RHEV + RHS Migration to RHEV Wrap Up
Tuning for Web Serving on the Red Hat Enterprise Linux 6.4 KVM Hypervisor Barry Arndt - Linux Technology Center, IBM Version 1.0 March 2013 1 IBM, the IBM logo, and ibm.com are trademarks or registered
Full and Para Virtualization Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF x86 Hardware Virtualization The x86 architecture offers four levels
Integration of Microsoft Hyper-V and Coraid Ethernet SAN Storage White Paper June 2011 2011 Coraid, Inc. Coraid, Inc. The trademarks, logos, and service marks (collectively "Trademarks") appearing on the
Servervirualisierung mit Citrix XenServer Paul Murray, Senior Systems Engineer, MSG EMEA Citrix Systems International GmbH email@example.com Virtualization Wave is Just Beginning Only 6% of x86
White Paper Virtualized SAP: Optimize Performance with Cisco Data Center Virtual Machine Fabric Extender and Red Hat Enterprise Linux and Kernel-Based Virtual Machine What You Will Learn The virtualization
Database Virtualization David Fetter Senior MTS, VMware Inc PostgreSQL China 2011 Guangzhou Thanks! Jignesh Shah Staff Engineer, VMware Performance Expert Great Human Being Content Virtualization Virtualized
High-Performance Nested Virtualization With Hitachi Logical Partitioning Feature olutions Enabled by New Intel Virtualization Technology Extension in the Intel Xeon Processor E5 v3 Family By Hitachi Data
T E C H N I C A L B R I E F The Benefits of Virtualizing Aciduisismodo Microsoft SQL Dolore Server Eolore in Dionseq Hitachi Storage Uatummy Environments Odolorem Vel Leveraging Microsoft Hyper-V By Heidi
Next Generation Now: Virtualization A Unique Cloud Approach Jeff Ruby Channel Manager firstname.lastname@example.org Introducing Extensive improvements in every dimension Efficiency, scalability and reliability Unprecedented