Monitoring VirtualBox Performance

Siyuan Jiang and Haipeng Cai
Department of Computer Science and Engineering, University of Notre Dame
Email: sjiang1@nd.edu, hcai@nd.edu

Abstract

Virtualizers built on Type II virtual machine monitors (VMMs) have been popular among non-commercial users due to their easier installation and use, along with their lower cost, compared with those built on Type I VMMs. However, the overall performance of these virtualizers has mostly been found to be worse than that of the latter from the VM user's point of view, and the reasons remain to be fully investigated. In this report, we present a quantitative study of VMM performance in VirtualBox in order to examine the performance bottleneck in this representative Type II virtualizer. Primary performance metrics for the VMM and the VMs are collected and analyzed separately, and the implications of the results are discussed, with the monitoring overhead quantified through well-known CPU, memory and I/O benchmarks. Results show that the VMM takes only a marginal portion of the resources within the whole virtualizer in the case of VirtualBox, and that our monitoring introduces merely a negligible performance overhead and perturbation to the virtualizer.

1 INTRODUCTION

A virtual machine monitor (VMM) is a system that provides virtual environments in which other programs can run in the same manner as they would run directly in the real environment. A virtual machine (VM) denotes the virtual environment that a VMM provides. Software running upon a VM is called guest software, and the operating system running upon a VM is called the guest operating system (guest OS). VMMs are categorized into two types [1]. Type I VMMs run directly on hardware, which means they have specific hardware requirements. Type II VMMs run upon operating systems, which means they behave like ordinary programs and do not require extra effort in installation and use. Although convenient and widely used by common users, Type II VMMs suffer from significant performance issues. As King et al. [2] have shown, VMs running on a Type II VMM (UMLinux) take more than 250 times longer, on average, than those running on a hybrid between Type I and Type II (VMware Workstation [3]) to execute a null system call. In such work, the performance of VMMs is estimated by running benchmarks or particular system calls upon VMs and comparing the running times under different VMMs. In contrast to this approach, we focus on investigating the performance bottleneck caused by unbalanced resource usage. We aim at monitoring the performance metrics of the VMM and of the VMs separately, because we believe a better understanding of the overhead of Type II VMMs can lead to practical and effective improvements in VMM design.

For this study, firstly, we choose VirtualBox (footnote 1) as our object VMM because it is a professional, open-source project with a large user group. Secondly, we implement several performance collectors inside VirtualBox to record performance metrics, such as memory usage, of the VMM itself and of the VMs running on it. Thirdly, we implement a performance monitor to organize and aggregate the data collected from the performance collectors. By comparing the resource usage of the VMM with that of the VMs, we investigate how much the VMM costs relative to the total cost. Figure 1 shows the overall architecture of our project.

Footnote 1: VirtualBox is open-source software released under the terms of the GNU General Public License (GPL) version 2.

Fig. 1: Interactions between our project and VirtualBox. (Components shown: Virtual Machine 1, Virtual Machine 2, the instrumented VMM with its Performance Collectors, the Performance Monitor, and the host OS.)
We inspect the internal running state of VirtualBox by instrumenting performance monitoring agents in the source code. Pertinent information collected by those monitoring agents is gathered in the Performance Collectors, which then send it to our Performance Monitor, where the designated performance metrics are calculated at runtime. The experimentation includes three parts. The first is running one or two VMs on the instrumented VMM; the monitored metrics of the VMM and those of the VMs are collected respectively and examined along with the running situation of the VMs at the time, e.g., the startup phase of the guest operating system. The second part examines how the overhead of the VMM increases as the VMs use more resources. Lastly, the overhead introduced by the instrumentation itself is gauged roughly by comparing the major performance indicators attributed to VirtualBox processes, as reported by the stock system monitor on the host OS, before and after the instrumentation is applied.
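To make the collector-to-monitor data flow concrete, here is a minimal C++ sketch of the kind of record a collector could push and the kind of lookup the monitor could perform. All identifiers (MetricSample, PerfMonitor, the label strings) are illustrative assumptions, not the names used in VirtualBox or in our implementation.

    #include <deque>
    #include <string>

    // One measurement taken by an instrumented collector. The fields
    // mirror the categories monitored in this project: CPU usage,
    // memory usage and I/O traffic, each attributed to a source.
    struct MetricSample {
        std::string source;  // e.g. "VMM", "VM:XP", "VM:Fedora"
        std::string metric;  // e.g. "cpu.load", "mem.used", "io.bytes"
        double      value;   // sampled value
        long long   timeMs;  // sample timestamp in milliseconds
    };

    // The monitor keeps the samples pushed by the collectors and
    // serves the latest value per (source, metric) pair for display.
    class PerfMonitor {
        std::deque<MetricSample> history;
    public:
        void push(const MetricSample& s) { history.push_back(s); }

        double latest(const std::string& src, const std::string& m) const {
            for (std::deque<MetricSample>::const_reverse_iterator it =
                     history.rbegin(); it != history.rend(); ++it)
                if (it->source == src && it->metric == m) return it->value;
            return 0.0;  // no sample seen yet
        }
    };

Keying the samples by source is what later allows the VMM's own cost to be compared against each VM's cost.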

2 RELATED WORK

We address two categories of previous work related to our project: work on VMM performance, which is the main theme of our project, and work on source code instrumentation, which is the primary approach taken in our implementation of the project proposal.

The performance characteristics of virtual machines are one of the major concerns in VMM design [1]. However, virtual machines running on Type II VMMs can suffer a great performance loss compared to running directly on standalone systems [2], to the extent that the efficiency property has been treated as one of the three formal requirements of any VMM [4]. In this context, the performance of various VMMs has been analyzed and compared independently, beyond the simple running-statistics functionality shipped with the complete virtualizer package. To compare the performance of the VMMs of VMware and VirtualBox, Vasudevan et al. create two virtual machines with each of the two virtualizers, one running Windows and the other Ubuntu Linux. They then measure the peak floating-point computing power in GFLOPS using the LINPACK benchmark, and the bandwidth in Mbps using the Iperf benchmarking tool [5]. A similar but earlier work was done by Che et al. [6], where the performance of the VMM in Xen and in KVM was contrasted using benchmarking tools that also include the LINPACK package. Different from these performance evaluations, which are conducted indirectly by running user-level applications at the top of the VM system hierarchy, we directly gauge the runtime dynamics of the VMM's internals with respect to its scheduling and controlling tasks while virtual machines run on it. Another difference lies in the approach to measurement: while both works above measure through application-level (benchmarking) tools without modifying the VMM or other components of the virtualizer, we probe the VMM through source code instrumentation.

Among previous approaches related to ours is instrumenting an operating system kernel so as to capture processor counters, which are then used to calculate performance metrics [7]. Further, those authors embed benchmarks into Linux kernel modules to eliminate most interference from the operating system and interrupts, thus reducing the perturbation caused by the instrumentation. Applied at the application level, another example of source code instrumentation is mapping dynamic events to transformations at the program source code level in aspect-oriented programming (AOP) [8]. By contrast, we also instrument at the source code level, but for collecting performance and resource usage information at runtime. This instrumentation approach has in fact been applied in many other areas. In SubVirt [9], a virtual machine monitor was instrumented to build a rootkit for the purpose of malware simulation; the virtual machine based rootkit (VMBR) was implemented to subvert Windows XP and Linux target systems in order to study various ways of defending against real-world rootkit attacks. For a similar security research purpose, Garfinkel and Rosenblum use a virtual machine monitor to isolate an intrusion detection system (IDS) from the monitored host in their virtual machine introspection architecture [10].

Fig. 2: The architecture of our project. (The VM1, VM2 and VM3 clients each talk over COM to VBoxSVC, which hosts the Performance Collectors, on the host OS; the Monitor (GUI) communicates with the Performance Monitor (thread) through shared memory.)
We focused on the performance issues of this particular kind of Type II VMM and adopted the instrumentation approach solely for that purpose.

3 IMPLEMENTATION

To investigate the performance bottleneck of VirtualBox, we implement a Performance Monitor for VirtualBox that records performance metrics of the VMM and the VMs. To retrieve the relative resource usage of the different parts of the VMM and the VMs, we implement Performance Collectors inside VirtualBox, which collect resource usage information and send it to the Performance Monitor. The project is developed in C++ under Fedora 17 Linux with GCC 4.4.1. The GUI is developed using Qt 4.8.3.

3.1 Architecture of VirtualBox

Our project is built upon VirtualBox, which is a representative Type II VMM product. The architecture of VirtualBox [11] is shown in Figure 2. As a Type II VMM, VirtualBox is software running upon a host operating system (host OS). Above the host OS there is a system service, VBoxSVC, which is the VMM of VirtualBox and maintains all VMs that are currently running. Each VM works with a VirtualBox client, which helps the VM interact with VBoxSVC.

3.2 Overall Approach

Figure 2 shows how our project is implemented inside VirtualBox and how the data is transferred among the different components. Our implementation has three main parts: (1) the Performance Collectors, (2) the Performance Monitor and (3) the Monitor (GUI); each is detailed below, and a thumbnail of how the latter two cooperate at runtime is sketched next.
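The monitor-to-GUI side of this pipeline rests on Qt's inter-thread signal/slot delivery. The following Qt 4 style sketch shows that mechanism with hypothetical class and member names (SamplerWorker, Renderer and their slots are illustrative stand-ins, not the project's actual classes):

    #include <QtCore/QObject>
    #include <QtCore/QThread>
    #include <QtCore/QTimer>
    #include <QtCore/QDebug>

    // Worker: lives in its own thread and polls one metric category.
    class SamplerWorker : public QObject {
        Q_OBJECT
    public slots:
        void sample() { emit sampled(readMetric()); }
    signals:
        void sampled(double value);
    private:
        // Placeholder: a real worker would query the performance
        // collection service (e.g. over COM) here.
        double readMetric() const { return 0.0; }
    };

    // Renderer: receives samples on the GUI side and updates the view.
    class Renderer : public QObject {
        Q_OBJECT
    public slots:
        void onSample(double v) { qDebug() << "sample:" << v; }
    };

    // Wiring, in Qt 4 syntax. Because worker and renderer live in
    // different threads, Qt queues the signal delivery automatically:
    //
    //   QThread thread;  SamplerWorker worker;  Renderer renderer;
    //   worker.moveToThread(&thread);
    //   QTimer timer;  timer.setInterval(1000 * t);  // t: seconds
    //   QObject::connect(&timer,  SIGNAL(timeout()), &worker, SLOT(sample()));
    //   QObject::connect(&worker, SIGNAL(sampled(double)),
    //                    &renderer, SLOT(onSample(double)));
    //   thread.start();  timer.start();

Queued delivery means the sampling work never blocks the GUI thread, which is the point of the renderer/worker split described below.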

Fig. 3: The instrumented VirtualBox, where the VBoxPerfMon frontend we extended (right-hand side) works as an integral component.

First, the Performance Collectors, one for each of the three main categories of metrics, namely (1) CPU usage, (2) memory usage and (3) I/O traffic, were implemented inside VBoxSVC. They send raw metrics to the Performance Monitor via COM (Component Object Model). The three performance collectors were inserted into the existing COM interface (named IPerformanceCollector) provided in the original source package. More precisely, since we performed our experiments on Fedora 17 Linux, we extended the IPerformanceCollector service to cover the metrics of interest for Linux only (in Main/src-server/linux/PerformanceLinux). Second, the Performance Monitor, a child thread created in VBoxSVC, maintains all the metrics it has received. Third, the visualizer of the performance metrics was built as an extended GUI interface (named Monitor) upon the existing Virtual Machine Manager GUI (VMManager), precisely, as a non-modal child dialog of it. As regards the runtime mode of operation, all performance collectors work in a single COM server, to be consistent with the original framework of IPerformanceCollector, while the Performance Monitor and the Monitor run as a child QtGui thread created by the main thread of the original VMManager. This instrumented QtGui thread in turn hosts a renderer thread and separate worker threads, one per category of metrics, each running as a COM client of the extended IPerformanceCollector COM service; the renderer and the workers communicate through the standard Qt4 mechanism of inter-thread signals and slots.

4 EVALUATION

We have implemented the source code instrumentation approach to performance monitoring for VirtualBox. With the current configuration of the platform on which we develop and run all experiments (see Section 4.3), a complete build of the source package takes about 15 minutes, with no noticeable extra overhead introduced in this regard by our work. Figure 3 shows a screenshot of the instrumented VirtualBox at run time.

4.1 Metrics of Measurement

Currently, two major categories of results have been collected and analyzed: (1) performance measurements of both the VMM and the running VMs; and (2) the performance overhead and perturbation of our instrumentation approach. For the first category, the primary metrics, CPU and memory usage, were monitored over a period of time. Given a user-defined interval t, the measurement of these metrics was updated at runtime every t seconds by retrieving the related dynamic records received from the instrumented IPerformanceCollector service, and the results were pushed to the VBoxPerfMon frontend, which hosts simple time-varying visualizations. These metrics were chosen because they are well-recognized, strong indicators of the overall performance of the whole virtualizer as directly experienced by common users. Exploring these metrics, in particular those attributed to the VMM, is therefore exactly what answering our motivating questions requires.

With the second category of metrics, we were concerned with the aggregate instrumentation overhead and perturbation to the VMM, including those of the performance collector interface extension and of the VBoxPerfMon frontend. This was measured by running the original VirtualBox and the instrumented one separately on a given set of VM workloads and then comparing the CPU and memory statistics reported by top on the host OS as well as the benchmark scores obtained on the guest OSes.
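Stated as a formula (our notation; the standard relative-slowdown definition, with which the figures reported in Section 4.4 are consistent), with T_mon and T_unmon denoting a quantity such as benchmark finish time measured with and without our instrumentation:

    \[
      \mathrm{overhead} \;=\; \frac{T_{\mathrm{mon}} - T_{\mathrm{unmon}}}{T_{\mathrm{unmon}}} \times 100\%
    \]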
These tests were included in our experimental design because it is important to establish whether our work causes too great an overall performance penalty, and how acceptable our approach is in terms of the extraneous costs that concern both system analysts and end users. More importantly, a heavy overhead from our work itself would affect the accuracy of the performance metrics we obtain for the VMM and the VMs, in addition to those of the whole virtualizer.

4.2 Experimental Design

Since our goal is to investigate possible reasons for the suboptimal performance of the VMM core, which we suppose accounts for the unsatisfactory performance of Type II virtualizers such as VirtualBox, we measured the performance metrics associated with the running VMs separately from those dedicated purely to the VMM core alone. To do this, we tested the fluctuations of the related metrics in response to a gradual increase in the number of running VMs, from 0 up to 2 (our tests were limited to 2 VMs running concurrently due to the processor and physical memory limitations of our test platform). During the tests, we observed the metric changes on the part of the VMM against those of the whole virtualizer under different workloads, including hosting a VM without applications running inside (i.e., on the guest OS) and hosting one with benchmarks running inside. We used SciMark2 [12] as the CPU benchmark, and h2 and fop from the DaCapo benchmark suite [13] as the memory and disk I/O benchmarks, respectively.

When comparing the performance of the original and the instrumented virtualizer on the same set of tasks in order to measure the instrumentation overhead and perturbation, we ran two VMs concurrently on the virtualizer, one running a Windows XP SP3 guest OS and the other Fedora 17 Linux, with both executing the same benchmark in each test, totalling 3 groups of tests, one per benchmark described above. In each test we collected both the aggregate CPU and memory usage statistics associated with all VirtualBox processes, from the host OS's point of view, and the time spent finishing the benchmark in both VMs, as reported by the corresponding benchmark program. Due to the limited resources of the test platform, we ran each benchmark 10 times and took the averages as the quantities for analysis.

4.3 Experimental Setup

The VirtualBox source code was instrumented and then rebuilt using the build scripts shipped with the source package. During the experimentation, the host machine was a portable HP Compaq Presario CQ60 notebook running Fedora 17, mounting a single-core Intel Celeron 2.20GHz processor with a 1024KB cache and 2GB of DDR2 physical memory. The Windows XP SP3 VM was assigned 256MB of main memory, 16MB of VRAM and a 10GB IDE virtual HDD. For the Linux VM, we configured 768MB of main memory along with 12MB of VRAM and a 10GB IDE virtual HDD.

4.4 Results and Analysis

To demonstrate the different resource usage patterns, we monitored VirtualBox under four situations: (1) running VirtualBox with no VM started; (2) starting one VM; (3) starting two VMs; and (4) running one benchmark, scimark2, in two started VMs. Figure 4 and Figure 5 exhibit the results of monitoring memory usage and CPU usage, respectively, in the four situations. In each panel of Figures 4 and 5, the x-axis represents time slots over the monitoring period. In Figure 4, the y-axes give the memory costs of the corresponding series. In the legends of Figure 4, VMM is the VMM core in VirtualBox; VirtualBox total is the entire VirtualBox, which includes the VMM and all other components, such as the VMs and the frontends; XP+VMM is the sum of the memory cost of the VM running Windows XP and that of the VMM core; Fedora+XP+VMM is the sum of the memory costs of the two VMs (one running Windows XP and one running Fedora) and that of the VMM core. In Figure 5, the y-axes give the CPU usage percentages of the corresponding series. In the legends of Figure 5, VMM is again the VMM core in VirtualBox; user-level covers all the components of VirtualBox other than the VMs and the VMM core; XP is the VM running Windows XP; Fedora is the VM running Fedora.

Comparing the four panels of Figure 4, we can see there is always a sizable, roughly constant gap between the total memory cost of VirtualBox and the combined memory cost of the VMM and the VMs. The gap shrinks slightly after two VMs are launched in VirtualBox, which is understandable because the VMs use some of the memory that VirtualBox has already allocated. The main observation in Figure 4 is the steadily low memory cost of the VMM core except when a VM is starting. When there is one VM to start, the VMM uses less than 0.1 GB of memory for about 10 seconds, while it uses almost the same amount of memory, but for more than 60 seconds, when starting two VMs at the same time. Overall, the memory cost of the VMM is almost negligible compared to the other components.
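The VirtualBox total series above aggregates over every VirtualBox process as seen from the host OS. A host-side approximation of such an aggregate can be sketched with a small /proc scan as below; this is illustrative only (our own numbers came from the instrumented collectors and from top), and it assumes the relevant process names contain "VBox" or "VirtualBox":

    #include <dirent.h>
    #include <cctype>
    #include <cstdlib>
    #include <fstream>
    #include <string>

    // Sum resident memory (kB) of every process whose name contains
    // "VBox" or "VirtualBox" (VBoxSVC, the per-VM processes, the GUI).
    long totalVBoxRssKb() {
        long total = 0;
        DIR* dir = opendir("/proc");
        if (!dir) return -1;
        while (dirent* e = readdir(dir)) {
            if (!isdigit((unsigned char)e->d_name[0])) continue;  // PIDs only
            std::string base = std::string("/proc/") + e->d_name;
            std::ifstream comm((base + "/comm").c_str());
            std::string name;
            std::getline(comm, name);
            if (name.find("VBox") == std::string::npos &&
                name.find("VirtualBox") == std::string::npos) continue;
            std::ifstream status((base + "/status").c_str());
            std::string line;
            while (std::getline(status, line))
                if (line.compare(0, 6, "VmRSS:") == 0)
                    total += std::atol(line.c_str() + 6);  // "VmRSS:  1234 kB"
        }
        closedir(dir);
        return total;
    }

Sampling this value periodically would yield a curve comparable to the VirtualBox total series plotted in Figure 4.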
For the CPU usage analysis, unlike memory usage, we can see in Figure 5a that the VMM is the major CPU resource consumer in VirtualBox, which is reasonable because, with no VM started, VirtualBox has only started the VMM service underneath while the other, higher-level components are not yet launched. On the other hand, in the other three panels of Figure 5, we can see that the CPU usage percentages of the VMs and of the VMM stay low all the time, while the total CPU usage of VirtualBox fluctuates and is much higher. This leads to the conclusion that the CPU cost of the VMM is low in most situations.

The second evaluation was conducted to estimate the overhead of our monitoring. We ran the three benchmarks on the two VMs and recorded the finish time of each benchmark, as shown in Figure 6. There are six columns in Figure 6, each representing the finish time of one benchmark in one VM. The solid black area represents the amount of time by which our monitoring increased the finish time. The two VMs run their benchmarks at the same time under the same VirtualBox instance, so the overhead of our monitoring shown in Figure 6 is larger than the overhead would be if the VMs did not run concurrently. The proportion of the overhead is larger for the fop benchmark than for the others because its finish time is relatively short while our method carries a certain fixed overhead, such as the initialization of the metrics collection. Overall, the overhead of the monitoring lies between 1.6% and 39.0%.

Additionally, we also logged the entire resource usage of VirtualBox, monitored and unmonitored respectively, under four circumstances: (1) two VMs running fop, (2) two VMs running h2, (3) two VMs running scimark2 and (4) two VMs doing nothing. Table 1 and Table 2 show the average usage of the corresponding resources, comparing the situations with our monitoring against those without it. The average usage increases by around 5%.
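The averaging and the overhead computation are mechanical; a small C++ helper (hypothetical names, not project code) makes the procedure explicit:

    #include <vector>
    #include <numeric>

    // Mean of the per-run finish times (10 runs per benchmark here).
    double mean(const std::vector<double>& v) {
        return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
    }

    // Relative monitoring overhead in percent, from the averaged
    // finish times with (mon) and without (unmon) instrumentation.
    double overheadPct(const std::vector<double>& mon,
                       const std::vector<double>& unmon) {
        double m = mean(mon), u = mean(unmon);
        return (m - u) / u * 100.0;
    }

Because the fixed instrumentation cost sits in the numerator while the finish time sits in the denominator, a quick benchmark such as fop naturally lands near the 39.0% end of the range while longer-running ones stay near 1.6%.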

Fig. 4: Memory usage monitoring in four situations: (a) no VM running; (b) one VM running; (c) two VMs running; (d) two VMs running with benchmarks. (Series: VMM, VirtualBox total, XP+VMM, Fedora+XP+VMM; memory cost on the y-axes.)

Fig. 5: CPU usage percentage monitoring in four situations: (a) no VM running; (b) one VM running; (c) two VMs running; (d) two VMs running with benchmarks. (Series: VMM, user-level, XP, Fedora.)

TABLE 1: Average total CPU usage percentage of VirtualBox

  Benchmark    Monitored (%)    Not monitored (%)
  fop          19.6             6.38
  h2           12.85            12.10
  scimark2     12.6             10.15
  none         13               7.37

TABLE 2: Average total memory usage percentage of VirtualBox

  Benchmark    Monitored (%)    Not monitored (%)
  fop          73.50            72.89
  h2           73.29            72.90
  scimark2     72.25            68.2
  none         .                7.

Fig. 6: The finish time (sec.) of running benchmarks in different VMs. (Each column stacks the finish time of the uninstrumented VM and, in solid black, the difference between the time of the instrumented VM and that of the uninstrumented VM.)

5 CONCLUSION

We have presented a preliminary quantitative study of the performance-related dynamics in the VMM component of the open-source virtualizer VirtualBox, taken as a representative of the Type II VMMs that have been reported to have performance issues in practical applications. To do this, we developed a runtime performance monitor for VirtualBox by instrumenting the source code of the VMM, inserting performance inspectors that communicate with the VMM core module via COM interfaces. The monitor itself was designed as a separate module running as a child thread created by the main VirtualBox thread (VBoxSVC), and it launches performance monitoring along with the start of the VirtualBox Virtual Machine Manager, the typical bootstrapping interface used by common users. We have measured the primary performance metrics, including memory usage and CPU usage, collected on the basis of the resource usage solely consumed by the VMM as compared to that of the whole virtualizer. Based on the data retrieved, we have presented an analysis intended to answer the research questions that motivated this project. Our results imply that the VMM should not be the real culprit behind the overall unsatisfactory performance of the Type II virtualizer. In addition, our measurement of the overhead incurred by the instrumentation approach evidences a negligible cost, and hence the promising practicality of the present work.

REFERENCES

[1] R. Goldberg, "Survey of virtual machine research," IEEE Computer, vol. 7, no. 6, pp. 34-45, 1974.
[2] S. T. King, G. W. Dunlap, and P. M. Chen, "Operating system support for virtual machines," in Proceedings of the USENIX Annual Technical Conference, Berkeley, CA, USA, 2003, pp. 71-84.
[3] J. Sugerman, G. Venkitachalam, and B. Lim, "Virtualizing I/O devices on VMware Workstation's hosted virtual machine monitor," in USENIX Annual Technical Conference, 2001, pp. 1-14.
[4] G. Popek and R. Goldberg, "Formal requirements for virtualizable third generation architectures," Communications of the ACM, vol. 17, no. 7, pp. 412-421, 1974.

[5] M. S. Vasudevan, B. R. Mohan, and D. K. Damodaran, "Performance measuring and comparison of VirtualBox and VMware," in International Conference on Information and Computer Networks, vol. 27, 2012, pp. 42-47.
[6] J. Che, Q. He, Q. Gao, and D. Huang, "Performance measuring and comparing of virtual machine monitors," in Proceedings of the 2008 IEEE/IFIP International Conference on Embedded and Ubiquitous Computing, vol. 2, 2008.
[7] H. Najafzadeh and S. Chaiken, "Source code instrumentation and its perturbation analysis in Pentium II," State University of New York at Albany, Albany, NY, USA, Tech. Rep., 2000.
[8] R. Filman and K. Havelund, "Source-code instrumentation and quantification of events," in Workshop on Foundations of Aspect-Oriented Languages, 1st International Conference on Aspect-Oriented Software Development (AOSD), Enschede, Netherlands, 2002.
[9] S. T. King and P. M. Chen, "SubVirt: Implementing malware with virtual machines," in 2006 IEEE Symposium on Security and Privacy, 2006, pp. 314-327.
[10] T. Garfinkel and M. Rosenblum, "A virtual machine introspection based architecture for intrusion detection," in Proc. Network and Distributed Systems Security Symposium, 2003.
[11] Oracle VM VirtualBox User Manual, Oracle Corporation, https://www.virtualbox.org/manual/usermanual.html, 2012.
[12] R. Pozo and B. Miller, "SciMark 2.0," http://math.nist.gov/scimark2, 2012.
[13] The DaCapo benchmark suite, The DaCapo Group, http://dacapobench.org, 2012.