Performance Analysis of Scale-out Cloud Architecture In Multiple Geographical Regions Using Cloud Sim




Rinkal Dabra, Tarun Kumar, Punit Dhiman
CSE Dept., R.N.C.E.T., Panipat

Abstract
Poor application performance causes companies to lose customers, reduces employee productivity, and reduces bottom-line revenue. Because application performance can vary significantly with the delivery environment, businesses must make certain that application performance is optimized when applications are written for deployment on the cloud or moved from a data center to a cloud computing infrastructure. Applications can be tested in cloud and non-cloud environments for base-level performance comparisons. Aspects of an application, such as disk I/O and RAM access, may cause intermittent spikes in performance. However, as with traditional software architectures, overall traffic patterns and peaks in system use account for the majority of performance issues in cloud computing. Capacity planning and expansion based on multiples of past acceptable performance solves many performance issues when companies grow their cloud environments. However, planning cannot always cover sudden spikes in traffic, and manual provisioning might be required. A more cost-effective pursuit of greater scalability and performance is the use of more efficient application development; this paper analyzes the performance issues in a scale-out environment serviced by more easily scaled and provisioned resources.

Keywords: CloudSim, Scalability, SCSI.

1. INTRODUCTION
As companies move computing resources from premises-based data centers to private and public cloud computing [1] facilities, they should make certain their applications and data make a safe and smooth transition to the cloud. In particular, businesses should ensure that cloud-based facilities will deliver the necessary application and transaction performance, now and in the future. Much depends on this migration and on preparation for the transition and final cutover.
Rather than simply moving applications [3] from traditional data center servers to a cloud computing environment and flicking the "on" switch, companies should examine performance issues, potential reprogramming of applications, and capacity planning for the new cloud target to completely optimize application performance [3][5][8]. Applications that performed one way in the data center may not perform identically on a cloud platform [10]. Companies need to isolate the areas of an application or its deployment that may cause performance changes and address each separately to guarantee an optimal transition. In many cases, however, the underlying infrastructure of the cloud platform may directly affect application performance [9]. Businesses should also thoroughly test applications developed and deployed specifically for cloud computing infrastructures. Ideally, businesses should test the scalability of both the infrastructure and the applications under a variety of network and application/infrastructure conditions, to make sure a new application not only handles current business demands but can also seamlessly scale to handle planned or unplanned spikes in demand.

2. WHY WORRY ABOUT CLOUD COMPUTING PERFORMANCE?
Sluggish access to data, applications, and Web pages frustrates employees and customers alike, and some performance problems and bottlenecks can even cause application crashes and data losses [11]. In these instances, performance, or the lack of it, is a direct reflection of the company's competency and reliability. People are unlikely to put their trust in applications that crash, and are reluctant to return to business sites that are frustrating to use due to poor performance and sluggish response times [12]. Intranet cloud-based applications should also maintain peak performance. Employee productivity relies on solid and reliable application performance to complete work accurately and quickly.
Volume 2, Issue 4, July-August 2013

Application crashes due to poor performance cost money and hurt morale. If applications cannot perform adequately during an increase in traffic, businesses lose customers and revenue. Adequate current performance does not guarantee future behavior: an application that adequately serves 100 customers per hour may suddenly nose-dive in responsiveness when attempting to serve 125 users [17]. Capacity planning based on predicted traffic, together with system stress testing, can help businesses make informed decisions concerning cost-effective and optimal provisioning of their cloud platforms [15]. In some cases, businesses intentionally over-provision their cloud systems to ensure that no outages or slowdowns occur. Regardless, companies should have plans in place for addressing increased demand on their systems to guarantee they do not lose customers, decrease employee productivity, or diminish their business reputation [14].

3. ISOLATING THE CAUSES OF PERFORMANCE PROBLEMS
Before companies can address performance issues on their cloud infrastructure, they should rule out non-cloud factors in the performance equation. First, and most obvious, is the access time from user to application over the Internet or WAN [9]. Simple Internet speed-testing programs, run from a number of geographically dispersed locations, or even network ping measurements when testing WAN connections, can give companies the average network access overhead. Next, businesses can also measure the performance of the application on a specific platform configuration to calculate how much the platform affects overall performance. Comparing the application's speed on its native server in the data center to its response on the cloud, for example, provides an indication of any significant platform differential, but may not necessarily isolate exactly which platform factor is causing the difference [6]. If an application's performance suffers once it is moved to a cloud infrastructure and network access time is not the culprit, experience shows that the application's architecture is likely to be at fault. Inefficiently written applications can perform poorly across platforms because the application is not optimized for any particular architecture. Optimizing application code for use on cloud platforms is discussed later in this paper, because it directly relates to the more complex topic of enabling cloud computing scalability [12]. Optimum cloud computing performance is not a simple problem to solve, nor is it addressed satisfactorily by many current cloud computing vendors.

4. MISPERCEPTIONS CONCERNING CLOUD COMPUTING PERFORMANCE
In a typical corporate network data center, servers, storage, and network switches perform together to deliver data and applications to network users. Under this IT scenario, applications must have adequate CPU and memory to perform, data must have sufficient disk space, and users must have appropriate bandwidth to access the data and applications [3]. Whenever IT administrators experience performance issues under this scenario, they usually resolve them in the following ways:

A. Poor application performance or application hang-ups. Usually the application is starved for RAM or CPU cycles, and faster processors or more RAM are added.

B. Slow access to applications and data. Bandwidth is usually the cause, and the most common solution is to add faster network connections, upgrading desktops from 10 Mbps NICs to 100 Mbps NICs, for example, or faster disk drives, such as SCSI over Fibre Channel.

While these solutions may solve data center performance issues, they may do nothing for cloud-based application optimization [8]. Furthermore, even for data center application performance enhancement, adding CPU and memory may be an expensive over-provisioning to handle simple bursts in demand. The bottom line is that cloud computing, based on virtual provisioning of resources on an Internet-based platform, does not conform to standard brute-force data center solutions to performance. When companies or cloud vendors take the simplistic "more hardware solves the problem" approach to cloud performance, they waste money and do not completely resolve all application issues [8]. Unfortunately, that is precisely how many cloud vendors attempt to solve performance issues. While they may not always add more physical servers or memory, they frequently provision more virtual machines. Adding virtual machines may be a short-term solution to the problem, but adding machines is a manual task. If a company experiences a sudden spike in traffic, how quickly will the vendor notice the spike and assign a technician to provision more resources to the account? How much does this cost the customer? Besides the misperception that more hardware, either physical or virtual, effectively solves all performance issues, companies may have a fundamental misunderstanding of how performance, scalability, and network throughput are interdependent yet affect applications and data access in separate ways [10].

5. UNDERSTANDING PERFORMANCE, SCALE, AND THROUGHPUT
Because the terms performance, scale, and throughput are used in a variety of ways when discussing computing, it is useful to examine their typical meanings in the context of cloud computing infrastructures.

A. Performance. Performance is generally tied to an application's capabilities within the cloud infrastructure itself. Limited bandwidth, disk space, memory, CPU cycles, and network connections can all cause poor performance; often, a combination of resource shortages is responsible. Sometimes poor performance is the result of an application architecture that does not properly distribute its processes across available cloud resources.

B. Throughput. Throughput is the effective rate at which data is transferred from point A to point B on the cloud; in other words, throughput is a measurement of raw speed. While the speed of moving or processing data can certainly improve system performance, a system is only as fast as its slowest element. A system that deploys 10-gigabit Ethernet but whose server storage can access data at only one gigabit is effectively a one-gigabit system.

C. Scalability. The search for continually improving system performance through hardware and software throughput gains is defeated when a system is swamped by multiple, simultaneous demands. That 10-gigabit pipe slows considerably when it serves hundreds of requests rather than a dozen. The only way to restore higher effective throughput (and performance) in such a swamped-resource scenario is to scale: add more of the resource that is overloaded. For this reason, the ability of a system to easily scale when under stress in a cloud environment is vastly more useful than the overall throughput or aggregate performance of individual components. In cloud

environments, this scalability is usually handled through either horizontal or vertical scaling.

6. HORIZONTAL AND VERTICAL SCALABILITY
When increasing resources on the cloud to restore or improve application performance, administrators can scale either horizontally (out) or vertically (up), depending on the nature of the resource constraint. Vertical scaling (up) entails adding more resources to the same computing pool, for example adding more RAM, disk, or virtual CPU to handle an increased application load. Horizontal scaling (out) requires the addition of more machines or devices to the computing platform to handle the increased demand. This is represented in the transition from Figure 1 to Figure 2, below.

Figure 1: Basic, single-silo, n-tier architecture

Figure 2: Horizontally scaled load-balancing and web tiers; vertically scaled database tier

Vertical scaling can handle most sudden, temporary peaks in application demand on cloud infrastructures, since these are not typically CPU-intensive tasks. Sustained increases in demand, however, require horizontal scaling and load balancing to restore and maintain peak performance. Horizontal scaling is also manually intensive and time-consuming, requiring a technician to add machinery to the customer's cloud configuration. Manually scaling to meet a sudden peak in traffic may not be productive: traffic may settle back to its pre-peak levels before the new provisioning can come online. Businesses may also find themselves experiencing more gradual increases in traffic. Here, provisioning extra resources provides only temporary relief, as resource demands continue to rise and exceed the newly provisioned resources.

7. ADMINISTRATIVE AND GEOGRAPHICAL SCALABILITY
While adding computing components or virtual resources is a logical means to scale and improve performance, few companies realize that the increase in resources may also necessitate an increase in administration, particularly when deploying horizontal scaling. In essence, a scaled increase in hardware or virtual resources often requires a corresponding increase in administrative time and expense. This administrative increase may not be a one-time configuration demand, as more resources require continual monitoring, backup, and maintenance. Companies with critical cloud applications may also consider geographical scaling as a means to more widely distribute application load, or as a way to move application access closer to dispersed communities of users or customers. Geographical scaling of resources, in conjunction with synchronous replication of data pools, is another means of adding fault tolerance and disaster recovery to cloud-based data and applications. Geographical scaling may also be necessary in environments where it is impractical to host all data or applications in one central location.

8. PERFORMANCE EVALUATION
In the absence of real traces from cloud providers, we generate the input workload randomly. Casazza et al. present a workload methodology to characterize the performance of servers that exploit virtualization technologies to consolidate multiple physical servers. They combine the workloads of a web server, an email server, and a database application to reflect the variety of applications that can run on cloud computing. We generate synthetic workload distributions as we now describe. As described in Section III, we assume a certain number of users, jobs, and cloud providers. Job workloads are selected randomly [28] with a uniform distribution between 500 MB and 2000 MB. Each job has a set of input and output files, the number of which is selected randomly between 1 and 6. Job length is a function of the total input size, 300D, where D is the total input size. Simulation is the process of imitating a real system.
Because of the difficulty of testing the proposed system in a real system, a simulation evaluation has been conducted on four datasets. CloudSim [29] is a discrete-event simulator used to simulate cloud environments. CloudSim has the ability to create data centers, virtual machines, and physical machines, and to configure system brokers, system storage, etc. The proposed algorithm has been tested and evaluated on the two real datasets. Table II specifies the simulation parameters used for our study. Two performance metrics have been used to evaluate the proposed model. Virtual machine bandwidth utilization is computed using Eq. (9); it gives the percentage of VM bandwidth utilized.

TABLE II: CLOUDSIM CONFIGURATION

Item                      Value
Number of data centers    14
Number of VMs             100
Number of CPUs per VM     1
CPU speed per VM          1 GHz, 2 GHz, 2.5 GHz, 3 GHz
Number of jobs            100, 200, 300, 400, 500
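The bottleneck and scale-out behavior described in Sections 5 through 7 (effective throughput is limited by the slowest element, and scaling out the overloaded resource restores it) can be sketched numerically. The stage names and rates below are illustrative assumptions, not values from the paper:

```python
def effective_throughput(stages):
    """Effective rate of a pipeline is the minimum stage rate (Gbit/s)."""
    return min(stages.values())

def scale_out(stages, name, instances):
    """Horizontal scaling: replicate one stage across several instances."""
    scaled = dict(stages)
    scaled[name] = stages[name] * instances
    return scaled

# A 10 Gbit/s network feeding storage that reads at 1 Gbit/s is,
# in effect, a 1 Gbit/s system (Section 5B's example).
pipeline = {"network": 10.0, "web_tier": 4.0, "storage": 1.0}
print(effective_throughput(pipeline))   # -> 1.0

# Scaling out the storage tier moves the bottleneck elsewhere.
scaled = scale_out(pipeline, "storage", 8)
print(effective_throughput(scaled))     # -> 4.0 (the web tier now limits)
```

Note that scaling out one tier only raises effective throughput until the next-slowest tier becomes the limit, which is why the paper treats scalability of the whole system, rather than the raw speed of any single component, as the more useful property.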
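The synthetic workload of Section 8 (job workloads uniform between 500 MB and 2000 MB, 1 to 6 input/output files per job, and job length 300D where D is the total input size) can be sketched as follows. The paper does not give its actual generator or the formula numbered Eq. (9), so the function names and the bandwidth-utilization helper are assumptions for illustration only:

```python
import random

def generate_workload(num_jobs, seed=None):
    """Synthetic jobs roughly following the paper's stated distributions."""
    rng = random.Random(seed)
    jobs = []
    for job_id in range(num_jobs):
        total_mb = rng.uniform(500, 2000)   # job workload: uniform 500-2000 MB
        num_files = rng.randint(1, 6)       # input/output files per job: 1-6
        jobs.append({
            "id": job_id,
            "workload_mb": total_mb,        # D, the total input size
            "num_files": num_files,
            "length": 300 * total_mb,       # job length = 300 * D
        })
    return jobs

def vm_bandwidth_utilization(used_mbps, capacity_mbps):
    """Percentage of VM bandwidth utilized (hypothetical stand-in for Eq. (9))."""
    return 100.0 * used_mbps / capacity_mbps

jobs = generate_workload(100, seed=42)
print(len(jobs), vm_bandwidth_utilization(50, 100))   # -> 100 50.0
```

Seeding the generator, as above, makes a simulation run repeatable across the different job counts in Table II (100 through 500 jobs).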

Results:

Figure 3: Results of horizontally scaled load balancing and web tier, with vertically scaled database tier, through CloudSim and Cloud Analyst.

9. PRACTICAL AND THEORETICAL LIMITS OF SCALE
While scalability is the most effective strategy for solving performance issues in cloud infrastructures, practical and theoretical limits prevent it from ever becoming an exponential, infinite solution. Practically speaking, most companies cannot commit an infinite amount of money, people, or time to improving performance. Cloud vendors also may have a limited amount of experience, personnel, or bandwidth to address customer application performance. Every computing infrastructure is bound by a certain level of complexity and scale, not the least of which is power, administration, and bandwidth, necessitating geographical dispersal.

10. ADDRESSING APPLICATION SCALABILITY
For a cloud computing platform to effectively host business data and applications, it must accommodate a wide range of performance characteristics and network demands. Storage, CPU, memory, and network bandwidth all come into play at various times during typical application use. Application switching, for example, places demands on the CPU as one application is closed and flushed from the registers and another application is loaded. If these applications are large and complex, they put a greater demand on the CPU. Serving files from the cloud to connected users stresses a number of resources, including disk drives, drive controllers, and network connections, when transferring data from the cloud to the user. File storage itself consumes resources not only in the form of physical disk space, but also in disk directories and metafile systems that consume RAM and CPU cycles when users access or upload files into the storage system. As these examples illustrate, applications can benefit from both horizontal and vertical scaling of resources on demand, yet truly dynamic scaling is not possible on most cloud computing infrastructures. Therefore, one of the most common and costly responses to scaling issues by vendors is to over-provision customer installations to accommodate a wide range of performance issues.

11. APPLICATION DEVELOPMENT TO IMPROVE SCALABILITY
One practical means of addressing application scalability and reducing performance bottlenecks is to segment applications into separate silos. Web-based applications are theoretically stateless, and therefore theoretically easy to scale: all that is needed is more memory, CPU, storage, and bandwidth to accommodate them, as was depicted in Figure 2. In practice, however, Web-based applications are not stateless. They are accessed through network connections that require fixed, and therefore stateful, IP addresses, and they connect to data storage (either disk or database), which maintains logical state as well as requiring hardware resources to execute. Balancing the interaction between stateless and stateful elements of a Web application requires careful architectural consideration and the use of tiers and silos to allow some form of horizontal resource scaling. To leverage the most from resources, application developers can break applications into discrete tiers, stateful or stateless processes that are executed in various resource silos. Figure 4 depicts breaking an application into two silos identified by their DNS names. By segregating stateful and stateless operations and provisioning accordingly, applications and systems can run more efficiently and with higher resource utilization than under a more common scenario.

Figure 4: Multi-silo, n-tier, scaled architecture

12. CONCLUSION
Scalability is the best solution to increasing and maintaining application performance in cloud computing environments. Cloud computing vendors often resort to brute-force horizontal scaling by adding more physical or

virtual machines, but this approach may not only waste resources but also fail to fully solve performance issues, especially those related to disk and network I/O. In addition, customers and vendors alike face practical limits on their ability to scale, primarily constrained by cost and human resources. Smart application development can alleviate performance issues in many cases by isolating resource-intensive processes and assigning appropriate assets to handle the load. However, for the most part, scaling to meet performance demands remains a manual process that requires vigilant monitoring and real-time response.

REFERENCES
[1] L. Vaquero, L. Rodero-Merino, J. Caceres, and M. Lindner, "A break in the clouds: towards a cloud definition," ACM SIGCOMM Computer Communication Review, vol. 39, no. 1, pp. 50-55, 2008.
[2] M. Kesavan, A. Ranadive, A. Gavrilovska, and K. Schwan, "Active CoordinaTion (ACT): toward effectively managing virtualized multicore clouds," in Cluster Computing, 2008 IEEE International Conference on. IEEE, 2008, pp. 23-32.
[3] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, "Xen and the art of virtualization," in Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles. ACM, 2003, p. 177.
[4] I. Foster, Y. Zhao, I. Raicu, and S. Lu, "Cloud computing and grid computing 360-degree compared," in Grid Computing Environments Workshop, 2008. GCE '08. IEEE, 2008, pp. 1-10.
[5] E. Bugnion, S. Devine, K. Govil, and M. Rosenblum, "Disco: Running commodity operating systems on scalable multiprocessors," ACM Transactions on Computer Systems (TOCS), vol. 15, no. 4, pp. 412-447, 1997.
[6] D. Ongaro, A. Cox, and S. Rixner, "Scheduling I/O in virtual machine monitors," in Proceedings of the Fourth ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments. ACM, 2008, pp. 1-10.
[7] K. Mathew, P. Kulkarni, and V. Apte, "Network bandwidth configuration tool for Xen virtual machines," in Communication Systems and Networks (COMSNETS), 2010 Second International Conference on, Jan. 2010, pp. 1-2.
[8] V. Manetti, P. Di Gennaro, R. Bifulco, R. Canonico, and G. Ventre, "Dynamic virtual cluster reconfiguration for efficient IaaS provisioning," in Proceedings of the 2009 International Conference on Parallel Processing, ser. Euro-Par '09. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 424-433. [Online]. Available: http://portal.acm.org/citation.cfm?id=1884795.1884844
[9] L. Wang, J. Tao, M. Kunze, A. C. Castellanos, D. Kramer, and W. Karl, "Scientific cloud computing: Early definition and experience," in High Performance Computing and Communications, 10th IEEE International Conference on, pp. 825-830, 2008.
[10] B. Sotomayor, R. Montero, I. Llorente, and I. Foster, "Resource leasing and the art of suspending virtual machines," in 2009 11th IEEE International Conference on High Performance Computing and Communications. IEEE, 2009, pp. 59-68.
[11] B. Rimal, E. Choi, and I. Lumb, "A taxonomy and survey of cloud computing systems," in 2009 Fifth International Joint Conference on INC, IMS and IDC. IEEE, 2009, pp. 44-51.
[12] R. Goldberg, "Survey of virtual machine research," IEEE Computer, vol. 7, no. 6, pp. 34-45, 1974.
[13] R. Figueiredo, P. Dinda, and J. Fortes, "A case for grid computing on virtual machines," in 23rd International Conference on Distributed Computing Systems, 2003. Proceedings, 2003, pp. 550-559.
[14] C. Clark, K. Fraser, S. Hand, J. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield, "Live migration of virtual machines," in Proceedings of the 2nd Conference on Symposium on Networked Systems Design & Implementation, Volume 2. USENIX Association, 2005, p. 286.
[15] C. Sapuntzakis, R. Chandra, B. Pfaff, J. Chow, M. Lam, and M. Rosenblum, "Optimizing the migration of virtual computers," ACM SIGOPS Operating Systems Review, vol. 36, no. SI, pp. 377-390, 2002.
[16] M. Kozuch and M. Satyanarayanan, "Internet suspend/resume," 2002.
[17] T. Wood, P. Shenoy, A. Venkataramani, and M. Yousif, "Black-box and gray-box strategies for virtual machine migration," in Proc. Networked Systems Design and Implementation, 2007.
[18] S. Pinter, Y. Aridor, S. Shultz, and S. Guenender, "Improving machine virtualisation with 'hotplug memory'," International Journal of High Performance Computing and Networking, vol. 5, no. 4, pp. 241-250, 2008.