Performance Evaluation of Intel EPT Hardware Assist VMware ESX builds 140815 & 136362 (internal builds)




Introduction

For the majority of common workloads, performance in a virtualized environment is close to that in a native environment. Virtualization does create some overheads, however. These come from the virtualization of the CPU, the MMU (Memory Management Unit), and the I/O devices. In some of their recent x86 processors AMD and Intel have begun to provide hardware extensions to help bridge this performance gap. In 2006, both vendors introduced their first-generation hardware support for x86 virtualization with AMD-Virtualization (AMD-V) and Intel VT-x technologies. Recently Intel introduced its second generation of hardware support, which incorporates MMU virtualization and is called Extended Page Tables (EPT).

We evaluated EPT performance by comparing it to the performance of our software-only shadow page table technique on an EPT-enabled Intel system. From our studies we conclude that EPT-enabled systems can improve performance compared to using shadow paging for MMU virtualization. EPT provides performance gains of up to 48% for MMU-intensive benchmarks and up to 600% for MMU-intensive microbenchmarks. We have also observed that although EPT increases memory access latencies for a few workloads, this cost can be reduced by effectively using large pages in the guest and the hypervisor.

NOTE: Many of the workloads presented in this paper are similar to those used in our recent paper about AMD RVI performance (Performance Evaluation of AMD RVI Hardware Assist). Because the papers used different ESX versions, however, the results are not directly comparable.

Background

Prior to the introduction of hardware support for virtualization, the VMware virtual machine monitor (VMM) used software techniques for virtualizing x86 processors: binary translation (BT) for instruction-set virtualization, shadow paging for MMU virtualization, and device emulation for device virtualization. With the advent of Intel VT in 2006, the VMM used VT for instruction-set virtualization on Intel processors that supported this feature. Due to the lack of hardware support for MMU virtualization in older CPUs, the VMM still used shadow paging for MMU virtualization. The shadow page tables stored information about the physical location of guest memory. Under shadow paging, in order to provide transparent MMU virtualization the VMM intercepted guest page table updates to keep the shadow page tables coherent with the guest page tables. This caused some overhead in the virtual execution of those applications in which the guest had to frequently update its page table structures.

With the introduction of EPT, the VMM can now rely on hardware to eliminate the need for shadow page tables. This removes much of the overhead otherwise incurred to keep the shadow page tables up to date. We describe these paging methods in more detail in the next section, describe our experimental methodology, benchmarks, and results in subsequent sections, and conclude with a summary of our performance experience with EPT.

MMU Architecture and Performance

In a native system the operating system maintains a mapping of logical page numbers (LPNs) to physical page numbers (PPNs) in page table structures (see Figure 1). When a logical address is accessed, the hardware walks these page tables to determine the corresponding physical address. For faster memory access the x86 hardware caches the most recently used LPN->PPN mappings in its translation lookaside buffer (TLB).

Figure 1. Native System Memory Management Unit Diagram (per-process page tables map logical pages to physical pages)

In a virtualized system the guest operating system maintains page tables just as the operating system in a native system does, but in addition the VMM maintains a mapping of PPNs to machine page numbers (MPNs), as described in the following two sections, Software MMU and Hardware MMU.

Software MMU

In shadow paging the VMM maintains PPN->MPN mappings in its internal data structures and stores LPN->MPN mappings in shadow page tables that are exposed to the hardware (see Figure 2). The most recently used LPN->MPN translations are cached in the hardware TLB. The VMM keeps these shadow page tables synchronized with the guest page tables. This synchronization introduces virtualization overhead whenever the guest updates its page tables.

Figure 2. Shadow Page Tables Diagram (shadow page table entries map each virtual machine's logical pages directly to machine pages)
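To make the shadow paging mechanism concrete, the following is a minimal illustrative sketch in C (not VMware source code) of how a hypervisor could compose the guest's LPN->PPN mapping with its own PPN->MPN mapping into the LPN->MPN shadow entries that the hardware actually walks. The flat arrays and the single guest_pte_write() interception point are simplifications chosen purely for illustration; real x86 page tables are multi-level radix trees.

/*
 * Illustrative sketch only (not VMware code): models how a hypervisor
 * using shadow paging composes the guest's LPN->PPN mapping with its
 * own PPN->MPN mapping to produce the LPN->MPN entries that the
 * hardware walks.  Page table structures are simplified to flat
 * arrays; real x86 tables are multi-level radix trees.
 */
#include <stdio.h>

#define PAGES 16

static int guest_pt[PAGES];   /* guest page table:   LPN -> PPN */
static int pmap[PAGES];       /* VMM internal map:   PPN -> MPN */
static int shadow_pt[PAGES];  /* shadow page table:  LPN -> MPN (hardware-visible) */

/* Called whenever the guest writes a page table entry.  Under shadow
 * paging the VMM must intercept this write and refresh the shadow
 * entry; this interception is the source of the virtualization
 * overhead the paper measures. */
static void guest_pte_write(int lpn, int ppn)
{
    guest_pt[lpn] = ppn;
    shadow_pt[lpn] = pmap[ppn];   /* keep the shadow coherent with the guest */
}

int main(void)
{
    for (int p = 0; p < PAGES; p++)
        pmap[p] = p + 100;        /* arbitrary PPN -> MPN assignment */

    guest_pte_write(3, 7);        /* guest maps LPN 3 to PPN 7 */

    /* The TLB and hardware walker only ever see the composite mapping: */
    printf("LPN 3 -> MPN %d\n", shadow_pt[3]);
    return 0;
}

The key point is the interception in guest_pte_write(): every guest page table update must be trapped and reflected into the shadow table, which is exactly the cost that MMU-intensive workloads expose.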

Hardware MMU

Using EPT, the guest operating system continues to maintain LPN->PPN mappings in the guest page tables, but the VMM maintains PPN->MPN mappings in an additional level of page tables, called nested page tables (see Figure 3). In this case both the guest page tables and the nested page tables are exposed to the hardware. When a logical address is accessed, the hardware walks the guest page tables as in the case of native execution, but for every PPN accessed during the guest page table walk, the hardware also walks the nested page tables to determine the corresponding MPN. This composite translation eliminates the need to maintain shadow page tables and synchronize them with the guest page tables. However, the extra walk also increases the cost of servicing a TLB miss, thereby impacting the performance of applications that stress the TLB. This cost can be reduced by using large pages, which reduce the stress on the TLB for applications with good spatial locality. For optimal performance the ESX VMM and VMkernel aggressively try to use large pages for their own memory when EPT is used.

Figure 3. Hardware Memory Management Unit Diagram (the hardware walks both the guest page tables and the nested page tables, translating logical to physical to machine pages)
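The cost asymmetry between a native page walk and a nested walk can be shown with a small back-of-the-envelope calculation. The sketch below is an illustration, not data from this paper: it counts worst-case memory references per TLB miss assuming four-level page tables and no intermediate walk caches (real processors cache partial walks, so actual costs are lower). It makes clear why large pages, which shorten the walk, matter more once EPT is enabled.

/*
 * Back-of-the-envelope sketch (not from the paper): counts worst-case
 * memory references for a TLB miss under native paging vs. a nested
 * (hardware MMU) walk, for a given number of page-table levels.
 */
#include <stdio.h>

/* Native walk: one memory reference per page-table level. */
static int native_walk_refs(int levels)
{
    return levels;
}

/* Nested walk: every guest page-table pointer is a PPN, so each of the
 * guest's 'g' references -- plus the final guest physical address --
 * must itself be translated by an 'n'-level nested walk. */
static int nested_walk_refs(int g, int n)
{
    return g + (g + 1) * n;
}

int main(void)
{
    printf("native 4-level walk:          %d refs\n", native_walk_refs(4));
    printf("nested 4x4-level walk:        %d refs\n", nested_walk_refs(4, 4));
    /* Large pages drop walk levels (e.g. 2MB pages remove one level on
     * x86-64), shrinking both dimensions of the nested walk: */
    printf("nested 3x3-level (2MB pages): %d refs\n", nested_walk_refs(3, 3));
    return 0;
}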

Experimental Methodology

This section describes the experimental configuration and the benchmarks used in this study.

Hardware Configuration

Configuration A

This configuration was used for all experiments except Order-Entry.

Intel OEM system (prerelease)
CPUs: Two Quad-Core Intel Xeon X5560 Processors ("Nehalem")
BIOS: American Megatrends Inc. ver. 4.6.3.2
RAM: 36GB
Networking: Two Intel Corporation 82576 Gigabit network controllers
Storage Controller: Intel Corporation ICH10 6-port SATA AHCI controller
Disk: One 7200 RPM 500GB Seagate Barracuda SATA 3.0Gb/s hard drive

Configuration B

This configuration was used for the Order-Entry experiments.

Intel OEM system (prerelease)
CPUs: Two Quad-Core Intel Xeon X5570 Processors ("Nehalem")
BIOS: American Megatrends Inc. version 4.6.3
RAM: 36GB
Networking: Intel Corporation 82571EB Gigabit Ethernet controllers (rev 06)
Storage Controller: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
External Storage: 4Gb/sec Fibre Channel switch; two EMC CLARiiON CX3-80 arrays, each with 240 Seagate STT14685 CLAR146 146GB 15K RPM drives; one EMC CLARiiON CX3-40 array with 30 Hitachi HUS1511 CLAR146 146GB 15K RPM drives

Virtualization Software

All experiments except Order-Entry were performed with VMware ESX build 140815 (an internal build). The Order-Entry experiment was performed with VMware ESX build 136362 (an internal build). The tests in this study show performance differences between the VT VMM and the EPT VMM as a way of comparing shadow paging with EPT.

Benchmarks

In this study we used various benchmarks that, to varying degrees, stress MMU-related components in both software and hardware. These include:

Kernel Microbenchmarks: A benchmark suite for system software performance analysis.
Apache Compile: Compiling and building an Apache web server.
Oracle Swingbench: An OLTP workload that uses Oracle Database Server on the backend.
SPECjbb2005: An industry-standard server-side Java benchmark.
Order-Entry benchmark: An OLTP benchmark with a large number of small transactions.
SQL Server Database Hammer: A database workload that uses Microsoft SQL Server on the backend.

Citrix XenApp: A workload that exports client sessions along with configured applications.

We ran these benchmarks in 32-bit and 64-bit virtual machines with a combination of Windows and Linux guest operating systems. Table 1 details the guest operating system used for each of these benchmarks.

Table 1. Guest Operating Systems Used for Benchmarks

Benchmark                   | Operating System
Kernel Microbenchmarks      | 32-bit Red Hat Enterprise Linux 5, Update 1; 64-bit Red Hat Enterprise Linux 5, Update 1
Apache Compile              | 32-bit Red Hat Enterprise Linux 5, Update 3; 64-bit Red Hat Enterprise Linux 5, Update 1
SPECjbb2005                 | 32-bit Windows Server 2008; 64-bit Windows Server 2008
Oracle Server Swingbench    | 64-bit Red Hat Enterprise Linux 5, Update 1
Order-Entry benchmark       | 64-bit Red Hat Enterprise Linux 5, Update 1
SQL Server Database Hammer  | 64-bit Windows Server 2008
Citrix XenApp               | 64-bit Windows Server 2003

It is important to understand the performance implications of EPT as the number of virtual processors in a virtual machine is scaled up. We therefore ran some of the benchmarks in multiprocessor virtual machines.

Experiments

In this section we describe the performance of EPT as compared with shadow paging for the workloads mentioned in the previous section.

MMU-Intensive Kernel Microbenchmarks

Kernel microbenchmarks comprise a suite of benchmarks that stress different subsystems of the operating system. These microbenchmarks are not representative of application performance; however, they are useful for amplifying the performance impact of different subsystems so that they can be more easily studied. They can be broadly divided into system-call-intensive benchmarks and MMU-intensive benchmarks. In these experiments we ran both 32-bit and 64-bit microbenchmarks. We found that system-call-intensive benchmarks performed equivalently with and without EPT enabled (for brevity these results are not included in this paper). However, with EPT enabled we observed gains of up to 600% on MMU-intensive microbenchmarks. Figure 4 and Figure 5 show the 32-bit and 64-bit results, respectively, for the following kernel microbenchmarks:

segv: A C program that measures the processor fault-handling overhead in Linux by repeatedly generating and handling page faults.
pcd: A C program that measures Linux process creation and destruction time under a pre-defined number of executing processes on the system.
nps: A C program that measures the Linux process-switching overhead.
fw: A C program that measures Linux process-creation overhead by forking processes and waiting for them to complete.

Figure 4. 32-bit MMU-Intensive Kernel Microbenchmark Results (Lower is Better). Time normalized to SW MMU (1.0); EPT results were approximately 0.52 (segv), 0.15 (pcd), 0.40 (nps), and 0.14 (fw).
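For a concrete sense of what these microbenchmarks exercise, the following is a minimal sketch written in the spirit of the fw test described above; the iteration count and gettimeofday()-based timing are illustrative choices, not the actual benchmark source. Each fork() creates and destroys an address space, forcing the VMM either to build and tear down shadow page tables (SW MMU) or simply to let the hardware walk the nested tables (EPT).

/*
 * Minimal sketch in the spirit of the "fw" kernel microbenchmark:
 * measure process-creation overhead by repeatedly forking a child that
 * exits immediately and waiting for it.  The iteration count and
 * timing method are illustrative, not the actual benchmark code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERATIONS 10000

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    for (int i = 0; i < ITERATIONS; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0)
            _exit(0);             /* child exits immediately */
        waitpid(pid, NULL, 0);    /* parent waits for the child */
    }
    gettimeofday(&end, NULL);

    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1e6;
    printf("%d fork/wait pairs in %.3f s (%.1f us each)\n",
           ITERATIONS, secs, secs * 1e6 / ITERATIONS);
    return 0;
}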

Figure 5. 64-bit MMU-Intensive Kernel Microbenchmark Results (Lower is Better). Time normalized to SW MMU (1.0); EPT results were approximately 0.50 (segv), 0.40 (pcd), 0.19 (nps), and 0.14 (fw).

Apache Compile

The Apache compile workload compiles and builds the Apache web server. This application is at an extreme of compilation workloads in that it comprises many small files, so many short-lived processes are created as each file is compiled. This behavior causes intensive MMU activity, similar to the MMU-intensive kernel microbenchmarks, and thus benefits greatly from EPT in both 32-bit and 64-bit guests, as shown in Figure 6 and Figure 7, respectively. The improvement provided by EPT increases with larger numbers of vCPUs; in the four-vCPU case EPT performed 48% better than VT.

Figure 6. 32-bit Apache Compile Time (Lower is Better). Compile time for SW MMU and EPT at 1, 2, 4, and 8 vCPUs, normalized to the 1-vCPU SW MMU result.

Figure 7. 64-bit Apache Compile Time (Lower is Better). Compile time for SW MMU and EPT at 1, 2, 4, and 8 vCPUs, normalized to the 1-vCPU SW MMU result.

SPECjbb2005

SPECjbb2005 is an industry-standard server-side Java benchmark. It has little MMU activity but exhibits high TLB miss activity due to Java's usage of the heap and associated garbage collection. Modern x86 operating system vendors provide large page support to enhance the performance of such TLB-intensive workloads. Because EPT further increases the TLB miss latency (due to additional paging levels), large page usage in the guest operating system is imperative for high performance of such applications in an EPT-enabled virtual machine, as shown for 32-bit and 64-bit guests in Figure 8 and Figure 9, respectively.

Figure 8. 32-bit SPECjbb2005 Results (Higher is Better). Business operations per second at 1, 4, and 8 vCPUs for SW MMU and EPT with small and large pages, normalized to the 1-vCPU SW MMU (small pages) result.

Figure 9. 64-bit SPECjbb2005 Results (Higher is Better). Business operations per second at 1, 4, and 8 vCPUs for SW MMU and EPT with small and large pages, normalized to the 1-vCPU SW MMU (small pages) result.
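As an aside on how a guest workload can obtain large pages: Java workloads such as SPECjbb2005 normally enable them through JVM options (for example -XX:+UseLargePages on HotSpot), while a native Linux application can request them explicitly. The sketch below is a minimal example assuming a reasonably recent Linux guest with huge pages already reserved by the administrator; it is not part of this paper's test setup.

/*
 * Illustrative sketch only: one way an application in a Linux guest can
 * back a buffer with large (2MB) pages, reducing TLB pressure.  Assumes
 * huge pages have been reserved beforehand, e.g.
 *     echo 128 > /proc/sys/vm/nr_hugepages
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define LARGE_PAGE (2UL * 1024 * 1024)   /* 2MB large page on x86 */
#define BUF_SIZE   (64 * LARGE_PAGE)     /* 128MB heap-like buffer */

int main(void)
{
    void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");     /* likely: no huge pages reserved */
        return 1;
    }

    /* Touch every large page so the mapping is actually populated. */
    for (unsigned long off = 0; off < BUF_SIZE; off += LARGE_PAGE)
        ((volatile char *)buf)[off] = 1;

    printf("Backed %lu MB with %lu large pages\n",
           BUF_SIZE >> 20, BUF_SIZE / LARGE_PAGE);
    munmap(buf, BUF_SIZE);
    return 0;
}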

Oracle Server Swingbench

Swingbench is a database workload for evaluating Oracle database performance. We configured this experiment so that the database was fully cached in memory, and we therefore used a small number of users connecting to the database and saturating the CPU. For this configuration Swingbench does not show significant MMU activity and thus gains no performance from EPT, as shown in Figure 10. We used large pages in the guest for all our runs.

Because the database is fully cached, a small number of users with no think time is sufficient to saturate the CPU. If the number of users is increased further, each user impedes the progress of the others and the benchmark scales negatively. In such a misconfiguration of this benchmark, heavy context switching occurs between the user processes, which can result in EPT showing a significant performance boost as compared to VT.

Figure 10. Oracle Server Swingbench Results (Higher is Better). Transactions per second normalized to 1-vCPU SW MMU; SW MMU and EPT results were nearly identical (approximately 1.75 at 2 vCPUs and 2.70 at 4 vCPUs).

Order-Entry Benchmark

The Order-Entry benchmark is an OLTP benchmark with a large number of small transactions. Of the five transaction types, three update the database and two (of relatively low frequency) are read-only. The I/O load is very heavy and consists of small access sizes (2K-16K). The SAN accesses consist of random reads and writes with a 2:1 ratio in favor of reads. This experiment was performed using a trial version of Oracle 11g R1, and guest large pages were enabled.

NOTE: The Order-Entry benchmark is a non-comparable implementation of the TPC-C business model. Our results are not TPC-C compliant and are not comparable to official TPC-C results. Deviations from the TPC-C specification: batch implementation; an undersized database for the observed throughput.

Figure 11 shows the results of the experiments.

Figure 11. Order-Entry Benchmark Results (Higher is Better). Transactions per minute for an 8-vCPU virtual machine, normalized to SW MMU.

SQL Server Database Hammer

Database Hammer is a database workload for evaluating Microsoft SQL Server database performance. As shown in Figure 12, we observed that Database Hammer at lower vCPU counts is not MMU intensive, resulting in similar performance with and without EPT. However, as we scale up the number of vCPUs we do see some MMU activity, thereby favoring EPT. We configured the guest to use large pages for all our Database Hammer runs.

Figure 12. SQL Server Database Hammer Results (Higher is Better). Transactions per second at 1, 2, and 4 vCPUs, normalized to 1-vCPU SW MMU (approximately 1.94 vs. 2.13 at 2 vCPUs and 3.62 vs. 4.06 at 4 vCPUs for SW MMU vs. EPT).

Citrix XenApp

Citrix XenApp is a presentation server, or application session provider, that enables its clients to connect to and run their favorite desktop applications. To drive Citrix we used the Citrix Server Test Kit (CSTK) 2.1 workload generator to simulate users. Each user was configured as a normal user running the Microsoft Word workload from Microsoft Office 2000. This workload requires about 70MB of physical RAM per user. Due to heavy process creation and inter-process switching, Citrix is an MMU-intensive workload. As shown in Figure 13, we observed that EPT provided a boost of approximately 30%.

Figure 13. Citrix XenApp Results (Lower is Better). CPU utilization at 150 ICA users on a 1-vCPU virtual machine, normalized to SW MMU; with EPT, utilization dropped to approximately 0.70.

Conclusion

Intel EPT-enabled CPUs offload a significant part of the VMM's MMU virtualization responsibilities to the hardware, resulting in higher performance. Results of experiments done on this platform indicate that the current VMware VMM leverages these features quite well, resulting in performance gains of up to 48% for MMU-intensive benchmarks and up to 600% for MMU-intensive microbenchmarks. We recommend that TLB-intensive workloads make extensive use of large pages to mitigate the higher cost of a TLB miss.

About the Author

Nikhil Bhatia is a Performance Engineer at VMware. In this role his primary focus is to evaluate and help improve the performance of the VMware Virtual Machine Monitor (VMM). Prior to VMware, Nikhil was a researcher at Oak Ridge National Laboratory (ORNL) in the Computer Science and Mathematics Division. Nikhil received a Master of Science in Computer Science from the University of Tennessee, where he specialized in tools for performance analysis of High Performance Computing (HPC) applications.

Acknowledgements

The author would like to thank the VMM developers for their tireless efforts in developing and optimizing the VMM. The author would also like to thank Reza Taheri and Priti Mishra from the VMware Performance group for the Order-Entry benchmark results. Finally, he would like to thank the members of the VMware VMM and Performance groups who helped shape this document into its current state: Ole Agesen, Jennifer Anderson, Richard Brunner, Jim Mattson, and Aravind Pavuluri.

NOTE: All information in this paper regarding future directions and intent is subject to change or withdrawal without notice and should not be relied on in making a purchasing decision concerning VMware products. The information in this paper is not a legal obligation for VMware to deliver any material, code, or functionality. The release and timing of VMware products remains at VMware's sole discretion.

If you have comments about this documentation, submit your feedback to: docfeedback@vmware.com

VMware, Inc. 3401 Hillview Ave., Palo Alto, CA 94304 www.vmware.com

Copyright 2008-2009 VMware, Inc. All rights reserved. Protected by one or more of U.S. Patent Nos. 6,397,242, 6,496,847, 6,704,925, 6,711,672, 6,725,289, 6,735,601, 6,785,886, 6,789,156, 6,795,966, 6,880,022, 6,944,699, 6,961,806, 6,961,941, 7,069,413, 7,082,598, 7,089,377, 7,111,086, 7,111,145, 7,117,481, 7,149,843, 7,155,558, 7,222,221, 7,260,815, 7,260,820, 7,269,683, 7,275,136, 7,277,998, 7,277,999, 7,278,030, 7,281,102, 7,290,253, 7,356,679, 7,409,487, 7,412,492, 7,412,702, 7,424,710, 7,428,636, 7,433,951, and 7,434,002; patents pending. SPEC and the benchmark name SPECjbb are registered trademarks of the Standard Performance Evaluation Corporation. VMware, the VMware "boxes" logo and design, Virtual SMP, and VMotion are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

Revision: 20090330; Item: EN-000194-00