A Survey of Virtualization Performance in Cloud Computing



Key words: cloud computing, cluster computing, virtualization, hypervisor, performance evaluation


A Survey of Virtualization Performance in Cloud Computing

Matthew Overby
Department of Computer Science, University of Minnesota Duluth
April 2014

PLEASE NOTE: This article was written as coursework and is not peer reviewed.

Abstract

Virtualization is an important resource of cloud computing. Many virtualization technologies allow workload consolidation, multiple operating systems, and fault tolerance mechanisms. The number of applications moving to the cloud is increasing, and cloud computing is becoming a prominent distributed computing framework. Virtual machines provide many benefits to these applications. Thus, it is critically important to examine and review the performance effects of virtualization in cloud computing environments. This survey outlines research and experiments that test the effects of virtual machines in cloud computing environments. Applications such as Web 2.0 and high performance computing are considered, and benchmarking and experimental tools are introduced. Results from these studies show that CPU sharing in Amazon EC2 small instances degrades network performance, that live server migration may affect Web 2.0 applications operating at peak workloads, and that HPC benchmarking tools show Xen may have significantly higher variance in performance than KVM and VirtualBox.

I. INTRODUCTION

A. Cloud Computing

Cloud computing is paving the way for the future of distributed computing. Online interactive systems like Reddit, Expedia, Pinterest, and many others employ cloud infrastructures like Amazon EC2 to meet user demands [1]. More and more, the benefits of cloud computing draw attention from other fields, such as large-scale scientific simulation and high performance computing (HPC) [2]. Computing resources that are seemingly endless, scalable, and outsourced are enticing for a wide range of applications.
The term cloud computing often refers to a special case of distributed computing that encapsulates both the hardware and software portions of the overall system. The cloud thus represents the fuzzy notion of networked computing resources, and users can allocate chunks of these resources depending on the task or demand. But cluster, grid, and other forms of multi-node computing infrastructure have existed for many years, so what sets cloud computing apart? Armbrust et al. identify three new aspects that separate cloud computing from classic distributed computing systems: on-demand computing resources, no up-front commitment, and short-term resource rental [2]. These three aspects revolve around one key point: cloud computing resources are leased by a customer from a provider. Customers can reduce or expand their resources without having to invest in hardware. For example, if a lessee runs a website that sees three times its normal traffic on weekends, they pay for the extra resources only during those periods of high traffic.

Providers offer their computing resources as a cloud in different ways. Typically these resources are offered "as a service" (aaS) in the form of infrastructure (IaaS), platform (PaaS), or software (SaaS). Other offerings, such as storage, private, and hybrid clouds, are also available [3]. All studies shown in this survey concern IaaS providers. In general, many cloud environments are able to offer expansive computing capabilities by virtualizing processing, storage, and network resources.

B. Virtualization

Virtual computing resources are often controlled by a hypervisor, or virtual machine (VM) monitor, which sits underneath the operating system layer. The hypervisor allocates resources to individual VMs and controls their execution state. There are two dominant virtualization techniques: full virtualization and paravirtualization.
With full virtualization, the VM is given the semblance of acting on the physical machine hardware with an isolated operating system (OS). The guest OS is separated from the hypervisor, allowing more secure and solitary computing. With paravirtualization, on the other hand, the guest operating system is modified so that it knows it is being virtualized and cooperates with the hypervisor. This allows the hypervisor to schedule computing resources to the VM more effectively for increased performance. Hypervisors differ in their hardware and software requirements.

Virtual machines provide many benefits and are often used to better utilize the hardware resources available in powerful servers. Some of the major benefits include [4]:

Workload Consolidation: Virtual machines can be moved and reorganized as units. This allows better machine utilization, so fewer machines need to be active at a time.

Updated Applications: Operating systems are loaded when the virtual machine is initialized. Software doesn't need to be manually updated, and users can choose what operating system and software they use. This reduces the burden on the server administrator.

Simultaneous Operating Systems: In most cases, each virtual machine runs a separate copy of an operating system. This allows a single server to host multiple different operating systems, expanding usability.

Machine Isolation: Machine resources are often guaranteed for an instance of a virtual machine. This guaranteed resource allocation can provide a higher quality of service than many other time-shared compute environments.

Fig. 1. Full virtualization and paravirtualization.

C. Performance

These benefits are what make virtualization and cloud computing a perfect match. But before an application is moved to the cloud, it's important to characterize the possible performance gains or losses. This is especially relevant when a customer is billed for computing resources by the hour. In addition, many HPC and Web 2.0 applications rely on performance. Reworking or restructuring applications for the cloud with the expectation of being virtualized is only worthwhile if the performance gains are tangible.

II. PERFORMANCE MEASURES

Studying the performance of virtualization techniques in cloud computing environments is challenging for a number of reasons. First, there are many aspects of performance to consider, such as networking, CPU utilization, disk I/O speeds, and more. Second, there is rarely a single best way to benchmark these computing tasks. Third, the diversity of software, such as operating systems and core applications, renders many results questionable or inconclusive. There are many different hypervisors, cloud providers, operating systems, and benchmark suites to choose from, and each hypervisor imposes certain hardware and software requirements. Without taking all of the potential options into account, a performance study may feel incomplete. Despite these challenges, such studies are important both for the progress of cloud computing and for the validation of procedures.

A. Network Performance of Amazon EC2

A prominent study is the paper "The Impact of Virtualization on Network Performance of Amazon EC2 Data Center" by Guohui Wang and T. S. Eugene Ng [5]. By narrowing the scope of what resource is measured and restricting the environment to a major cloud provider, the authors were able to give a more complete study of network performance. The study focused its network measurements on the Amazon Elastic Compute Cloud (EC2), which uses the Xen open-source hypervisor. Processing power on Amazon EC2 is broken into EC2 compute units, where one compute unit is equivalent to a 1.1 GHz 2007 Intel Xeon processor. Two instance types were considered in this study, small and medium, where an instance is a guest virtual machine. Small instances have 1.7 GB of memory, 1 EC2 compute unit, and 160 GB of storage. Medium instances have 3.75 GB of memory and 2 EC2 compute units.

The major finding of the study was that abnormally unstable network performance skewed measurements for small instances; medium instances were less affected. The unstable network performance in small instances was most often attributed to processor sharing. Wang and Ng considered four primary measurements: CPU consistency, TCP/UDP throughput, packet delay, and packet loss.

CPU consistency: A loop of one million iterations was run to test CPU utilization, in which gettimeofday() was called each iteration and the result stored in memory. The authors found regular gaps in time for the small instances, indicating processor sharing. This can be seen in figure 2.

TCP/UDP throughput: Pairs of instances sent TCP and UDP packets to one another. TCP throughput should achieve 4 Gb/s by hardware standards, and the UDP rate was capped at 1 Gb/s to avoid overflow. The authors found that medium instances performed as expected, but small instances had much lower TCP throughput than UDP.
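Stepping back to the CPU consistency test, the idea can be sketched in a few lines. This is not the authors' code: it uses Python's time.monotonic() in place of gettimeofday(), and the 1 ms gap threshold is an arbitrary choice made for this sketch.

```python
import time

def timestamp_gaps(iterations=1_000_000, gap_threshold_s=0.001):
    """Record a timestamp each iteration and return the gaps between
    consecutive timestamps that exceed gap_threshold_s.

    On a shared virtual CPU, large regular gaps suggest the VM was
    scheduled out (processor sharing), as in the EC2 small-instance
    results described above."""
    stamps = []
    for _ in range(iterations):
        stamps.append(time.monotonic())
    gaps = (b - a for a, b in zip(stamps, stamps[1:]))
    return [g for g in gaps if g > gap_threshold_s]

if __name__ == "__main__":
    large = timestamp_gaps(100_000)
    print(f"{len(large)} gaps exceeded 1 ms")
```

On a dedicated core the list of large gaps should be nearly empty; on a time-shared vCPU, regular multi-millisecond gaps appear, consistent with the pattern the authors observed.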
Throughput for small instances is shown at a higher resolution in figure 3. The authors found that small instances had gaps in connectivity, likely a result of processor sharing. This had a dramatic effect on TCP throughput, but UDP throughput was less affected, since bursts of packets were sent to maintain the capped link rate.

Packet Delay: Ten ping probes were sent between instances every second, for a total of 5000 probes. By measuring the round-trip time (RTT) of each packet, the delay could be determined. The authors found very large delays in the first set of pings, likely due to the packets being forwarded to a security device. After that, small instances showed major delay variations, again likely due to processor sharing.

Packet Loss: Because actual packet loss is typically very low, usually around 2%, the tool BADABING was used. BADABING estimates packet loss by inspecting RTT and network congestion [6]. Because the delay variation in small instances was so high, BADABING was unable to predict a reasonable packet loss rate.

Overall, this study showed that CPU sharing can degrade network performance in virtual machines and can negatively impact benchmarking tools such as BADABING. Using a medium instance type dramatically reduces these performance degradations.

B. Live Virtual Machine Migration in Web 2.0 Applications

Live Virtual Machine Migration: Live virtual machine migration is an important tool in cloud computing environments. The technique moves the entire contents of a virtual machine to another physical host. It is especially useful for server management, enabling online system maintenance as well as workload balancing and consolidation [7].

Fig. 2. Results of iterative timestamps for virtualized and nonvirtualized machines. This figure is from [5].

Fig. 3. TCP and UDP throughput for a small instance over time on Amazon EC2. This figure is from [5].

Fig. 4. Live virtual machine migration on server shutdown, from [8].

The process of live VM migration involves three primary steps:

1) Precopy memory pages: Memory from the source virtual machine is copied and moved to the destination machine. This is done without stopping the source VM.
This is referred to as the warm-up phase.

2) Stop VM on source, start VM on destination: The VM is halted on the source. Memory pages written to after the warm-up phase (dirty pages) are copied to the destination, and the VM is then started on the destination. This is called the stop-and-copy phase. The time between the source VM being halted and the destination VM starting is the VM downtime.

3) Postcopy memory: The execution state is transferred from source to destination. If the destination VM accesses a page of memory that has not yet been copied, it is pulled from the source VM.

The prominent performance costs of live VM migration are the downtime in step 2 and the bandwidth required for copying memory pages. However, many studies have concluded that VM downtime can be reduced to 60 ms or lower [7].

Web 2.0 Applications: The primary early adopters of cloud computing were Web 2.0 applications [10]. Web 2.0 applications are online interactive web sites such as Facebook, Wordpress, and Blogger. A server is required to process input such as logging in, posting information, and updating profiles. As online applications become more complex, the amount of processing needed increases. Because cloud computing can generate new machines on the fly, more server instances can be allocated as new users interact with a Web 2.0 application [11].

Fig. 5. Three webserver migrations and homepage response times for a maximum workload. This plot is from [9].

Live migration is shown to have variable effects on different applications, and few studies have examined these effects on Web 2.0 applications [9]. Thus, Voorsluys et al. conducted a study to quantify the performance degradation caused by VM migration in these applications. They performed the study on six servers: one head node and five virtual machine nodes. Each node had an Intel Xeon 2.33 GHz quad-core processor, 4 GB of memory, and a 7200 RPM hard drive, and the nodes were connected via a gigabit Ethernet switch. The head node ran Ubuntu Server 7.10, and each VM node ran Ubuntu Server 8.04 with a paravirtualized kernel. The virtual machine software was Citrix XenServer Enterprise Edition. Apache 2.2.8 was used as the webserver, and MySQL for the database. To conduct the study, two testing applications were used: Olio (now retired) [12] and Faban [13]. Benchmark tests of 10 and 20 minutes were run in two settings: one with a static number of 600 users, which preliminary tests determined to be the maximum workload of their software and hardware setup, and the other with a scaling number of users: 100, 200, 300, 400, and 500.

Olio: A Web 2.0 application developed by Sun Microsystems. It allows users to log in, log out, load specific pages, search and tag events, and perform other common Web 2.0 activities. Its primary purpose was to help developers test server infrastructure and evaluate the performance of online technologies.

Faban: A Markov-chain load generator. Faban can simulate users interacting with a web system. These virtual users log in, interact, and log out. The number of virtual users is customizable, allowing testers to run benchmarks on different-sized workloads.
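The three-step migration process described above lends itself to a back-of-the-envelope model. The sketch below is a toy simulation, not code from any of the cited studies; its page counts, dirty rate, and copy speed are invented parameters chosen only to show how warm-up time and downtime fall out of iterative precopy.

```python
def simulate_precopy(total_pages=100_000, dirty_per_round=2_000,
                     pages_per_second=50_000, stop_threshold=4_000):
    """Toy model of live migration: iteratively copy pages while the VM
    runs (warm-up), then halt it and copy the remaining dirty pages
    (stop-and-copy). Returns (warm_up_seconds, downtime_seconds).

    Assumes dirty_per_round <= stop_threshold, otherwise the dirty set
    never shrinks and the warm-up phase would not terminate."""
    to_copy = total_pages
    warm_up = 0.0
    # Warm-up phase: keep copying until the dirty set is small enough.
    while to_copy > stop_threshold:
        warm_up += to_copy / pages_per_second
        to_copy = dirty_per_round   # pages dirtied during that round
    # Stop-and-copy phase: the VM is down while the last pages transfer.
    downtime = to_copy / pages_per_second
    return warm_up, downtime

if __name__ == "__main__":
    warm_up, downtime = simulate_precopy()
    print(f"warm-up: {warm_up:.2f} s, downtime: {downtime*1000:.0f} ms")
```

With these made-up numbers the model predicts 2 s of warm-up copying and 40 ms of downtime, the same order of magnitude as the sub-60 ms downtimes reported in [7].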
Service Level Agreements: A Service Level Agreement (SLA) is a contract between provider and customer that guarantees a minimum level of service. Typically, the SLA specifies maximum server response times for certain user interactions [14]. SLA violations are a useful performance metric because of their use in real-world applications. SLAs differ by provider and application. Voorsluys et al. defined the metrics in their study as follows:

- Response times were recorded in five-minute windows.
- If a response exceeded the maximum allowed time, a violation was recorded.
- The percentage of responses that caused an SLA violation was reported.

The maximum allowed response times are shown in figure 6.

Migration During Maximum Workload: With a full workload of 600 users, the authors found that live migration of the webserver's virtual machine caused 3 seconds of downtime over a 44-second migration. Immediately after the migration, the webserver had to catch up on pending requests for 5 seconds, during which 99th-percentile SLA violations occurred. 90th-percentile violations occurred only when multiple migrations happened back-to-back; that is, allowing sufficient spacing between migrations mitigates the number of SLA violations. The homepage loading times during these migrations can be seen in figure 5.

Migration with Scaling Workload: The experiments found that no SLA violations were recorded during a webserver migration when the number of users was below the maximum. The maximum response times for all user counts in this experiment are shown in figure 7.

Fig. 6. Maximum response times for various user actions in seconds. This table is from [9].

Fig. 7. Maximum response times for scaling number of users by action. This table is from [9].

This study showed that live virtual machine migration can impact a Web 2.0 application if it is operating at its maximum workload. However, the experiment was limited in scale: the test used limited hardware and a single webserver. It is not clear that scaling these tests to many webservers on a true cloud platform would yield the same results.

C. VM Technologies for High Performance Computing

High Performance Computing: HPC typically refers to complex scientific computations that exceed the capabilities of common desktop hardware. Some examples are climate modeling, genome sequencing, and financial market modeling [16]. Different applications have different needs: some are CPU bound and require extensive data processing, while others require much more memory or disk space. Thus, it is necessary to investigate different VM technologies and their effects on different HPC applications. Benchmarking tools already exist that encapsulate the common functions and needs of many HPC applications without benchmarking the applications themselves. Two common benchmarking tools are:

SPEC OMP: The Standard Performance Evaluation Corporation OpenMP benchmark suite assesses the performance of applications that use OpenMP [17]. OpenMP is an API for shared-memory parallel computing.

HPCC: The High Performance Computing Challenge benchmark suite consists of multiple tests that analyze the common functions of real-world HPC applications [18].
It is the benchmarking tool used to rank the Top500 list of the most powerful supercomputers [19].

A study by Younge et al. compared different VM technologies using these two benchmarks [15]. The study used the FutureGrid test bed. FutureGrid is a workflow engine that allows researchers to examine cloud-based applications on geographically distributed, heterogeneous server infrastructure [20]. This simplified testing across different VM technologies and offered tools for analyzing performance. Younge et al. ran their experiments on four compute nodes of the Indiana University Data Center (part of FutureGrid). Each compute node had two Intel Xeon 5570 quad-core processors, 24 GB of RAM, and a QDR InfiniBand connection. The nodes ran Red Hat Enterprise Server Edition, and each node hosted a different hypervisor. Because of the hardware limitations imposed by the different hypervisors, each virtual machine was limited to 8 processor cores and 16 GB of RAM. The hypervisors tested in this experiment (one per node) were Xen [21], Kernel-based Virtual Machine (KVM) [22], Oracle VirtualBox [23], and a control with no hypervisor. VMware was omitted from the experiment because its user license forbids publishing performance comparisons against other VM technologies without authorization [15]. The major differences between the VM technologies are shown in figure 8. The experiments consisted of running the benchmarking tools 20 times and recording the average and variance of performance for each hypervisor.

SPEC OMP: The experiments with SPEC showed that KVM performed on par with the native machine. Xen and VirtualBox were shown to score approximately 8% lower. Unfortunately, the authors did not explain in any detail why they believed this was the case.

Floating-Point Operations Per Second: FLOPS is a measure of how many floating-point operations can be performed per second.
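The study's bookkeeping, averaging benchmark scores over 20 runs and reporting the variance, can be sketched as follows. The Linpack operation count used here, (2/3)n^3 + 2n^2 for an n x n solve, is the conventional approximation for LU factorization and is an assumption of this sketch, not a detail given in the survey.

```python
import statistics

def gflops(n, seconds):
    """GFLOPS for an n x n Linpack-style solve, approximating the work
    as (2/3)n^3 + 2n^2 floating-point operations."""
    ops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    return ops / seconds / 1e9

def summarize_runs(run_times_s, n):
    """Mean and variance of GFLOPS across repeated benchmark runs,
    mirroring the 20-run methodology described above."""
    scores = [gflops(n, t) for t in run_times_s]
    return statistics.mean(scores), statistics.variance(scores)

if __name__ == "__main__":
    # Hypothetical wall-clock times (seconds) for three runs at n = 1000.
    mean, var = summarize_runs([1.02, 0.98, 1.31], 1000)
    print(f"mean GFLOPS: {mean:.3f}, variance: {var:.4f}")
```

A run set with one slow outlier, as in the Xen results, shows up directly as a large variance even when the mean looks competitive.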
Typically this is recorded in GFLOPS, or 10^9 FLOPS. For Linpack, the HPCC subtest that performs linear algebra benchmarks, the experiments found a high degree of variance with Xen. These results are shown in figure 9. On average, all VMs performed about equally well but underperformed compared to the native machine. For the Fast Fourier Transform, a discrete mathematical solver from HPCC, Xen again showed a high degree of variance. Moreover, Xen underperformed compared to the other hypervisors, which were roughly equal to native. A possible hypothesis offered for this observation was an adverse interaction between Intel's MPI and Xen [15]. These results can be seen in figure 10.

PingPong: PingPong measures communication between processes. One thread sends a message to another; upon receiving the message, the other thread returns it. This was used to measure thread latency, the interval between sending a message and receiving its response, and bandwidth, the number of messages that could be sent per second. The bandwidth experiments showed a larger variance in Xen, and showed that VirtualBox often well outperformed the other hypervisors and even the native machine. This was attributed to the possibility of messages being sent on the same physical processor core, thus taking advantage of the CPU cache. These results can be seen in figure 11. The experiments also showed that Xen had unusually high latencies, while KVM and VirtualBox performed similarly to the native machine. These results are shown in figure 12.

Fig. 8. Differences in virtual machine technologies. This table is from [15].

Fig. 9. Average GFLOPS for 20 runs with different VM technologies using Linpack. This plot is from [15].

Fig. 10. Average GFLOPS for 20 runs with different VM technologies using Fast Fourier Transform. This plot is from [15].

Fig. 11. Average bandwidth of PingPong test between two processors with different VM technologies. This plot is from [15].

Fig. 12. Average latency of PingPong test between two processors with different VM technologies. This plot is from [15].
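A ping-pong measurement of the kind described above can be sketched with two threads and a pair of queues. This stand-in runs on a single host and is not comparable to the study's MPI-based numbers across VMs; it only shows how latency and message rate fall out of the same timed loop.

```python
import queue
import threading
import time

def pong(ping_q, pong_q, n):
    """Echo thread: return each of the n messages as it arrives."""
    for _ in range(n):
        pong_q.put(ping_q.get())

def pingpong(n=10_000):
    """Send n messages to an echo thread and time the round trips.
    Returns (mean_round_trip_seconds, round_trips_per_second)."""
    ping_q, pong_q = queue.Queue(), queue.Queue()
    echo = threading.Thread(target=pong, args=(ping_q, pong_q, n))
    echo.start()
    start = time.monotonic()
    for i in range(n):
        ping_q.put(i)      # "ping"
        pong_q.get()       # wait for the "pong"
    elapsed = time.monotonic() - start
    echo.join()
    return elapsed / n, n / elapsed

if __name__ == "__main__":
    latency, rate = pingpong()
    print(f"mean round trip: {latency*1e6:.1f} us, {rate:.0f} msgs/s")
```

The CPU-cache effect credited for VirtualBox's bandwidth win has an analogue here: pinning both threads to the same core would change the numbers noticeably.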

This is one of the few studies that compares different virtual machine hypervisors, and in that regard the authors' results and conclusions are interesting. However, many of the findings were not adequately investigated: unusual effects, such as the high variability in Xen, were not expanded upon. Further tests or analysis would have supported the conclusion that KVM is the ideal hypervisor for HPC. This is especially important given that these findings are contrary to other studies [24].

III. CONCLUSIONS

Virtual machine technologies are an important and critical component of cloud computing. They reduce administration complexity by allowing multiple operating systems, isolated compute environments, and fault tolerance. Workloads can be more easily consolidated, and keeping software updated is no longer a time-consuming task. As cloud infrastructure grows more sophisticated, the number of applications moving to the cloud grows with it. Virtual machines provide many benefits to these applications. Now, more than ever, it is critically important to examine and review the performance effects of virtualization in cloud computing infrastructure.

Amazon EC2 network performance was examined by Guohui Wang and T. S. Eugene Ng. CPU sharing in small instances degrades network performance and complicates the use of benchmarking tools. Virtual machine migration, an important tool of VM technology, was found to have little impact on Web 2.0 applications except at peak workloads. With the ease and low cost of spawning new machines in cloud computing, Web 2.0 applications can achieve zero service level agreement violations during migrations. Voorsluys et al. showed this can be done by increasing the number of instances during the VM migration, or by sufficiently spacing the interval between migrations. High performance computing in the cloud may still need further evaluation. Younge et al.
showed HPC benchmarking of different hypervisors but lacked insight into, or explanations of, the conclusions drawn. Much of the research that examines the performance of virtualization in cloud computing does not adequately adapt its tests to true cloud environments. It is not clear that small-scale tests of fewer than ten VMs are indicative of the performance of hundreds or thousands of virtual machines. Moving these performance experiments to real-world cloud systems would be beneficial to future applications.

REFERENCES

[1] "All AWS case studies," https://aws.amazon.com/solutions/casestudies/all, accessed: 2014-05-15.
[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "A view of cloud computing," Commun. ACM, vol. 53, no. 4, pp. 50-58, Apr. 2010. [Online]. Available: http://doi.acm.org/10.1145/1721654.1721672
[3] N. Manohar, "A survey of virtualization techniques in cloud computing," in Proceedings of the International Conference on VLSI, Communication, Advanced Devices, Signals and Systems and Networking, ser. Lecture Notes in Electrical Engineering, V. S. Chakravarthi, Y. J. M. Shirur, and R. Prasad, Eds. Springer India, 2013, vol. 258, pp. 461-470.
[4] S. Nanda and T. Chiueh, "A survey on virtualization technologies," RPE Report, pp. 1-42, 2005.
[5] G. Wang and T. S. E. Ng, "The impact of virtualization on network performance of Amazon EC2 data center," in Proceedings of the 29th Conference on Information Communications, ser. INFOCOM '10. Piscataway, NJ, USA: IEEE Press, 2010, pp. 1163-1171. [Online]. Available: http://dl.acm.org/citation.cfm?id=1833515.1833691
[6] J. Sommers, P. Barford, N. Duffield, and A. Ron, "Improving accuracy in end-to-end packet loss measurement," in Proceedings of the 2005 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, ser. SIGCOMM '05. New York, NY, USA: ACM, 2005, pp. 157-168. [Online].
Available: http://doi.acm.org/10.1145/1080091.1080111
[7] C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield, "Live migration of virtual machines," in Proceedings of the 2nd Conference on Symposium on Networked Systems Design & Implementation - Volume 2, ser. NSDI '05. Berkeley, CA, USA: USENIX Association, 2005, pp. 273-286. [Online]. Available: http://dl.acm.org/citation.cfm?id=1251203.1251223
[8] https://www.poweradvantage.eaton.com/ipm/default.aspx, accessed: 2014-05-15.
[9] W. Voorsluys, J. Broberg, S. Venugopal, and R. Buyya, "Cost of virtual machine live migration in clouds: A performance evaluation," in Cloud Computing, ser. Lecture Notes in Computer Science, M. Jaatun, G. Zhao, and C. Rong, Eds. Springer Berlin Heidelberg, 2009, vol. 5931, pp. 254-265.
[10] I. Foster, Y. Zhao, I. Raicu, and S. Lu, "Cloud computing and grid computing 360-degree compared," in Grid Computing Environments Workshop, 2008. GCE '08. IEEE, 2008, pp. 1-10.
[11] L. Wang, G. von Laszewski, A. Younge, X. He, M. Kunze, J. Tao, and C. Fu, "Cloud computing: a perspective study," New Generation Computing, vol. 28, no. 2, pp. 137-146, 2010.
[12] "Olio Web 2.0 toolkit," http://incubator.apache.org/projects/olio.html, accessed: 2014-05-15.
[13] "Faban load generator," http://faban.org/, accessed: 2014-05-15.
[14] A. Keller and H. Ludwig, "The WSLA framework: Specifying and monitoring service level agreements for web services," Journal of Network and Systems Management, vol. 11, no. 1, pp. 57-81, 2003.
[15] A. J. Younge, R. Henschel, J. T. Brown, G. von Laszewski, J. Qiu, and G. C. Fox, "Analysis of virtualization technologies for high performance computing environments," in Cloud Computing (CLOUD), 2011 IEEE International Conference on. IEEE, 2011, pp. 9-16.
[16] S. C. Ahalt and K. L. Kelley, "Blue-collar computing: HPC for the rest of us," Cluster World, vol. 2, no. 11, 2004.
[17] "Standard Performance Evaluation Corporation," http://www.spec.org/omp/, accessed: 2014-05-15.
[18] "HPC Challenge benchmarking," http://icl.cs.utk.edu/hpcc/, accessed: 2014-05-15.
[19] P. R. Luszczek, D. H. Bailey, J. J. Dongarra, J. Kepner, R. F. Lucas, R. Rabenseifner, and D. Takahashi, "The HPC Challenge (HPCC) benchmark suite," in Proceedings of the 2006 ACM/IEEE Conference on Supercomputing. Citeseer, 2006, p. 213.
[20] "About FutureGrid," https://portal.futuregrid.org/about, accessed: 2014-05-15.
[21] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, "Xen and the art of virtualization," SIGOPS Oper. Syst. Rev., vol. 37, no. 5, pp. 164-177, Oct. 2003. [Online]. Available: http://doi.acm.org/10.1145/1165389.945462
[22] A. Kivity, Y. Kamay, D. Laor, U. Lublin, and A. Liguori, "kvm: the Linux virtual machine monitor," in Proceedings of the Linux Symposium, vol. 1, 2007, pp. 225-230.
[23] Oracle, "VirtualBox user manual," 2011.
[24] T. Deshane, Z. Shepherd, J. Matthews, M. Ben-Yehuda, A. Shah, and B. Rao, "Quantitative comparison of Xen and KVM," Xen Summit, Boston, MA, USA, pp. 1-2, 2008.