A SURVEY ON ENERGY EFFICIENT SERVER CONSOLIDATION THROUGH VM LIVE MIGRATION




Jyothi Sekhar, Getzi Jeba, S. Durga
Department of Information Technology, Karunya University, Coimbatore, India

ABSTRACT
Virtualization technologies, on which cloud computing environments rely heavily, provide the ability to transfer virtual machines (VMs) between physical systems through live migration, mainly to improve energy efficiency. Dynamic server consolidation through live migration is an effective route to energy conservation in cloud data centers. The main objective is to keep the number of powered-on systems as low as possible and thus avoid the excess power spent running idle servers. VM live migration is also widely used for other system-level concerns such as load balancing, online system maintenance, fault tolerance and resource distribution. Energy-efficient VM migration has become a central concern as data centers try to reduce their power consumption. However, aggressive consolidation may degrade performance and can thus result in Service Level Agreement (SLA) violations, so there is a trade-off between energy and performance. The various protocols, heuristics and architectures proposed for energy-aware server consolidation via live migration of VMs are the main focus of this survey.

KEYWORDS: Cloud computing, Data Center, Energy Efficiency, Live Migration, Virtual Machine, VM consolidation

I. INTRODUCTION
Computing resources have become cheaper, more powerful and more ubiquitously available than ever before, thanks to rapid advances in processing and storage technologies and the success of the Internet. Businesses are trying to find ways to cut costs while maintaining the same performance standards, and their aspiration to grow even under the pressure of limited resources has pushed them to try new ideas and methods.
This realization, together with current technological advances, has enabled a new computing model called cloud computing, in which resources (e.g., CPU, storage) are provided as general utilities that users can lease and release through the Internet on a pay-as-you-go, on-demand basis. It can therefore also be called utility computing.

NIST Definition of Cloud Computing
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. [19]

Challenges in cloud
Automated service provisioning
Virtual machine migration
Server consolidation
Traffic management and analysis
Data security
Storage technologies and data management

Cloud Data Center
Cloud data centers are attractive because their cost is likely to be much lower than that of traditional data centers for the same set of services. A cloud data center [18] means a data center with 10,000 or more servers on site, all devoted to running very few applications built from consistent infrastructure components. One of the most important points is that cloud data centers aren't remodeled traditional data centers: the employee-to-server ratio is about 1:1000, which reflects far greater automation. Cost estimates for building a cloud data center put labor at 6 percent of the total cost of operating it, power distribution and cooling at 20 percent, and computing at 48 percent. Of course, the cloud data center also has some costs the traditional data center does not (such as buying land and construction).

Traditional Data Center
Although each data center is a little different, the average cost per year to operate a large data center is usually between $10 million and $25 million:
42 percent: hardware, software, disaster recovery arrangements, uninterrupted power supplies, and networking.
58 percent: heating, air conditioning, property and sales taxes, and labor costs.
The reality of the traditional data center [18] is further complicated because most of the cost goes to maintaining existing applications and infrastructure; some estimates put maintenance at 80 percent of spending. The employee-to-server ratio in traditional data centers is about 1:100.

515 Vol. 5, Issue 1, pp. 515-525
Table 1: Comparison [18] of Traditional and Cloud Data Centers

Traditional Corporate Data Center | Cloud Data Center
Thousands of different applications | Few applications
Mixed hardware environment | Homogeneous hardware environment
Frequent application patching and updating | Minimal application patching and updating
Complex workloads | Simple workloads
Multiple software architectures | Single standard software architecture

II. ENERGY MANAGEMENT IN CLOUD DATA CENTERS
Power is the rate at which a system performs work, while energy is the amount of work done over a period of time. This difference must be understood clearly, because a reduction in power consumption does not always reduce the energy consumed. Reducing power consumption lowers the cost of infrastructure provisioning. Improving energy efficiency is a major issue in clouds: it has been estimated that powering and cooling account for 53% of the total operational expenditure of data centers. In 2006, data centers in the US consumed more than 1.5% of the total energy generated that year, and the percentage is projected to grow 18% annually [19]. Infrastructure providers are therefore under huge pressure to reduce energy consumption, not only to cut energy costs but also to comply with government regulations and SLA policies. Designing energy-efficient data centers has recently received considerable attention, and the problem has been approached from various perspectives:
Energy-efficient hardware architecture
Virtualization of computing resources
Energy-aware job scheduling
Dynamic Voltage and Frequency Scaling (DVFS)
Server consolidation
Switching off unused nodes

A key issue in all of these methods is achieving a trade-off between application performance and energy efficiency. Some service providers' shares of the total power generated for data centers are reported as follows: Facebook 10.52%, Google 7.74% and YouTube 3.27%. According to Gartner, cloud market opportunities in 2013 will be worth $150 billion [6]. Data centers are not only expensive to maintain but also unfriendly to the environment: the massive amounts of electricity consumed to power and cool the numerous servers hosted in them drive high energy costs and huge carbon footprints. Cloud service providers need to adopt measures to ensure that their profit margins are not eroded by high energy costs.

Virtualization: the way to Live Migration
Virtualization technology abstracts away the details of physical hardware and provides virtualized resources to high-level applications. An essential characteristic of a virtual machine is that the software running inside it is limited to the resources and abstractions provided by the VM. The software layer that provides the virtualization is called a Virtual Machine Monitor (VMM) or hypervisor; it virtualizes all of the resources of a physical machine, thereby defining and supporting the execution of multiple virtual machines. Virtualization can provide significant benefits in cloud computing by enabling virtual machine migration to balance load across the data center. Virtual machine consolidation is an approach that minimizes the energy consumption of a virtualized environment by maximizing the number of inactive physical servers. Live migration is an essential feature of virtualization that allows a virtual machine to be transferred from one physical server to another without interrupting the services running inside it.
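The energy rationale behind consolidation can be made concrete with the linear server power model widely used in this literature; the wattage figures below are illustrative assumptions, not measurements from the surveyed papers:

```python
def server_power(utilization, p_peak=250.0, idle_fraction=0.7):
    """Linear power model: an idle server draws ~70% of its peak power."""
    p_idle = idle_fraction * p_peak
    return p_idle + (p_peak - p_idle) * utilization

def energy_wh(utilization, hours):
    """Energy = power x time, here in watt-hours."""
    return server_power(utilization) * hours

# Consolidating two half-loaded servers onto one fully loaded server:
two_hosts = 2 * energy_wh(0.5, hours=1)   # 2 x 212.5 Wh = 425 Wh
one_host = energy_wh(1.0, hours=1)        # 250 Wh
```

Because the idle component dominates, packing the same work onto fewer powered-on hosts cuts total energy even though each remaining host runs hotter.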
Advantages of live migration:
Workload balancing
Maximized resource utilization
Fault tolerance
Online system maintenance

Due to the variability of the workloads experienced by modern applications, VM placement should be optimized continuously in an online manner. The reason for high power consumption lies not just in the quantity of computing resources or the power inefficiency of the hardware, but in the inefficient usage of these resources: even completely idle servers consume about 70% of their peak power, and for each watt of power consumed by computing resources, an additional 0.5-1 W is required for the cooling system.

III. RELATED WORKS ON ENERGY-AWARE PROTOCOLS

Table 2: Energy efficiency related papers

Paper | System Resources | Virtualization | Goals | Technique | Platform
EnaCloud: An Energy-saving Application Live Placement Approach for Cloud Computing Environments [1] | Memory, storage | Yes | Minimize energy consumption, application scheduling | Energy-aware heuristic algorithm for load aggregation | Xen VMM
Energy-Aware Virtual Machine Dynamic Provision and Scheduling for Cloud Computing [7] | CPU | Yes | Power saving, scheduling, consolidation | VM consolidation | Eucalyptus
Power-aware Provisioning of Cloud Resources for Real-time Services [2] | CPU | Yes | CPU power saving in terms of VM provisioning | DVFS | CloudSim
Adaptive Threshold Based Approach for Energy Efficient Consolidation of Virtual Machines in Cloud Data Centers [3] | CPU | Yes | Dynamic consolidation of VMs with minimum SLA violations and number of VM migrations | Dynamic consolidation of VMs based on adaptive utilization thresholds | CloudSim
Reducing Energy Consumption by Load Aggregation with an Optimized Dynamic Live Migration of Virtual Machines [5] | Memory, storage | Yes | Minimized energy consumption, minimum running physical machines | Clustering algorithm | Not yet implemented
Energy-Efficient Virtual Machine Consolidation for Cloud Computing [6] | Storage | Yes | Energy-efficient storage migration and live migration of VMs | Distributed Replicated Block Devices for high-availability data storage in a distributed system | Eucalyptus
Sercon: Server Consolidation Algorithm using Live Migration of Virtual Machines for Green Computing [9] | CPU | Yes | Minimized energy consumption, minimum servers, minimum migrations | Sercon algorithm | -
MiyakoDori: A Memory Reusing Mechanism for Dynamic VM Consolidation [10] | Memory | Yes | Minimized energy consumption, reduced amount of transferred data in a live migration | Memory reuse | KVM
A Novel Energy Optimized and Workload Adaptive Modeling for Live Migration [11] | CPU, disk utilization, network I/O | Yes | Minimized energy consumption, avoidance of unnecessary live migrations | Workload adaptive model | Xen 3.4 platform

3.1 ENACLOUD: AN ENERGY-SAVING APPLICATION
EnaCloud [1], proposed by Bo Li et al., supports application scheduling and live migration to minimize the number of running machines and thereby save energy. It also aims to reduce the number of virtual machine migrations.

EnaCloud Architecture
In EnaCloud, a central Global Controller runs the Concentration Manager and the Job Scheduler. The Job Scheduler receives workload arrival, departure and resizing events and delivers them to the Concentration Manager, which generates a series of insertion and migration operations forming an application placement scheme. These are passed back to the Job Scheduler, which decomposes the scheme and dispatches the operations to the Virtual Machine Controllers. Each resource node consists of a Virtual Machine Controller, a Resource Provision Manager and a Performance Monitor. The Virtual Machine Controller invokes the hypervisor to execute commands such as VM start, stop or migrate; the Resource Provision Manager does VM resizing based on the performance statistics collected by the Performance Monitor.

Fig 1: EnaCloud Architecture

Two types of nodes are considered: computing nodes and storage nodes. Storage nodes store data and files, while the computing nodes are considered homogeneous and host one or more VMs. The application in a VM together with the underlying operating system is termed a workload. A server node running VMs is called an open box and an idle server node a close box. In EnaCloud, workloads are aggregated tightly to reduce the number of open boxes. Since applications have varying resource demands, EnaCloud also includes workload resizing, which involves workload inflation and deflation.
It seeks a solution for remapping workloads to resource nodes through migration whenever a workload arrives, departs or resizes. The migration has two main goals: first, to minimize the number of open boxes and, second, to minimize migration times. Given a resource pool and a sequence of workloads, three types of events trigger the migration of applications:

Workload Arrival Event - This does not simply put the incoming workload into an already existing gap; rather, it tries to displace packed workloads smaller than the newcomer with the newcomer. The smaller workloads extruded from the node are then reinserted into the resource pool in the same manner.

Workload Departure Event - If the departing workload is the only remaining workload on a node, it tries to create a close box by popping that workload back into the resource pool.

Workload Resizing Event - This is equivalent to a workload departure event followed by a workload arrival event.

3.2 POWER AWARE VM PROVISIONING METHODS
3.2.1 DYNAMIC ROUND ROBIN TECHNIQUE
Dynamic Round Robin [7] extends the existing Round Robin technique for the case of virtual machine consolidation. Its objective is to minimize the number of physical machines used to run all the virtual machines; this goal matters because the number of physical systems in use drives a large share of the total power consumed. The technique follows two rules:
The first rule states that if a virtual machine has finished running inside a physical system while other virtual machines are still running on it, the server will accept no new VMs. This state is called the retirement state. Once all its VMs finish their work, the physical machine is shut down.
The second rule is that a physical machine remains in its retirement state only for a particular period of time, called the retirement threshold. Once this time is exceeded, the remaining VMs are forced to migrate to another physical machine and the former one is shut down or taken down for maintenance.
This technique was compared with three other scheduling algorithms in Eucalyptus, namely Round Robin, Greedy and Power Save, and was found to be more energy efficient than all of them.

3.2.2 DYNAMIC VOLTAGE SCALING (DVS) ENABLED REAL TIME (RT) VM PROVISIONING
In data centers, the most power-consuming parts are processing, disk storage, the network and the cooling systems. Data centers can reduce dynamic power consumption to increase their profit, and it was found that energy consumption can be reduced by combining DVS with proportional sharing scheduling.
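The savings that DVS exploits can be illustrated with the classic CMOS approximation in which dynamic power grows roughly with the cube of frequency (voltage assumed to scale with frequency); the constants below are illustrative and not taken from [2]:

```python
def dynamic_energy(work_cycles, freq_ghz, c=1.0):
    """Energy to execute `work_cycles` at a given clock frequency.

    Assumes the classic CMOS model P_dyn ~ f^3 (voltage scaling with
    frequency); `c` is an arbitrary technology constant, so only the
    ratio between results is meaningful.
    """
    power = c * freq_ghz ** 3            # power in arbitrary units
    time_s = work_cycles / (freq_ghz * 1e9)
    return power * time_s

# The same work at half frequency takes twice as long but uses ~4x
# less energy, because energy per cycle scales with f^2.
full = dynamic_energy(1e9, 2.0)
half = dynamic_energy(1e9, 1.0)
```

This is why running real-time VMs at the lowest frequency that still meets their required MIPS rate, as the schemes below do, saves energy despite the longer execution time.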
When the user launches a service on a VM, the resource provider provisions the VM using DVS [2] schemes to reduce power consumption, while proportional sharing scheduling is used to schedule multiple virtual machines on a processor. The power-aware VM provisioning schemes are:

Lowest-DVS: The processor speed is adjusted to the lowest level at which the RT-VMs can still execute their services at the required MIPS rate. If the arrival rate of RT-VMs is low, this scheme consumes the least energy and can accept all requests.

δ-advanced-DVS: This scheme over-scales the processor by up to δ% of the required MIPS rate for the RT-VMs; the processor thus runs δ% faster to increase the possibility of accepting incoming real-time VM requests. The value of δ% is predefined according to the load on the system.

Adaptive DVS: Adaptive DVS tracks the average arrival rate, service rate and deadlines of recent service requests. Since the RT-VM arrival rate and service time are known in advance, the optimal processor scale can be analyzed, which helps reduce power consumption by adjusting the processor speed.

Table 3: Comparison of energy-aware DVS schemes

Scheme | Processor speed | Profit | Acceptance rate | Energy consumption | Performance
Static (no DVS) | Maximum | - | - | - | -
Lowest-DVS | Required lowest MIPS | More than static at lower arrival rates | Same as static | - | -
δ-advanced-DVS | δ% increase | More than static at lower arrival rates; same as Adaptive DVS at high arrival rates | More than Lowest-DVS | Same as Lowest-DVS | -
Adaptive DVS | Adaptive, greater than minimum required | More than static at lower arrival rates | Close to static | Lesser than static at lower arrival rates | Best in terms of profit per consumed power; limited by the simplified queueing model

3.3 ADAPTIVE THRESHOLD-BASED APPROACH
The obligation to provide high-quality service to customers makes it necessary to deal with the energy-performance trade-off, and so a novel technique for dynamic VM consolidation based on adaptive utilization thresholds [3] was introduced that ensures a high level of adherence to Service Level Agreements (SLAs). Fixed threshold values are unsuitable in environments with dynamic and unpredictable workloads; the system should automatically adjust itself to the workload pattern exhibited by the application. Hence the utilization thresholds are auto-adjusted based on a statistical analysis of the historical data collected during the VMs' lifetimes. The CPU utilization of a host is the sum of the utilizations of the VMs allocated to it and can be modeled by a t-distribution. CPU utilization data is collected separately for each VM; together with the inverse cumulative probability function of the t-distribution, this makes it possible to set the interval of CPU utilization. The lower threshold is the same for all hosts. The complexity of the algorithm is proportional to the number of non-over-utilized hosts plus the product of the number of over-utilized hosts and the number of VMs allocated to those hosts.

3.4 A LOAD AGGREGATION METHOD
One important method of reducing energy consumption in data centers is to aggregate the server load onto a small number of computers while switching off the rest of the servers, usually by means of virtualization. The load aggregation method proposed by Daniel Versick et al. [5] uses ideas from the K-means partitioning clustering algorithm and can compute its results very quickly.
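Before the phases of this algorithm are detailed, its core assignment idea can be sketched as follows; the one-dimensional load metric, the unit host capacity, and the use of "fullest feasible host" as the nearest-center rule are simplifying assumptions of this sketch, not details from [5]:

```python
# Sketch of clustering-style load aggregation: each VM is assigned to
# the "nearest" cluster (here: the fullest physical machine that can
# still fulfill its demand); VMs that fit nowhere open a new cluster.
def aggregate(vm_loads, host_capacity=1.0):
    clusters = []                     # each cluster = list of VM loads
    for load in sorted(vm_loads, reverse=True):
        feasible = [c for c in clusters if sum(c) + load <= host_capacity]
        if feasible:
            max(feasible, key=sum).append(load)   # fullest feasible host
        else:
            clusters.append([load])   # new cluster with an unused machine
    return clusters

# Six VMs packed onto two hosts; machines that end up hosting no
# cluster can be switched off.
hosts = aggregate([0.5, 0.4, 0.1, 0.3, 0.3, 0.2])
```

The full algorithm then iterates, recomputing cluster centers and re-adding VMs until the centers stabilize, as described next.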
The K-means algorithm chooses cluster centers within an n-dimensional space randomly and calculates the distances between cluster centers and vertices. The proposed algorithm has three phases: initialization, iteration and termination. It works as follows:
First, the number of clusters is calculated based on resource needs, and that many physical machines are defined as cluster centers. Each cluster center represents one cluster, and a cluster consists of a physical machine hosting various virtual machines.
Each virtual machine is added to the cluster VM list of the nearest cluster center that can fulfill its requirements. If no existing cluster can fulfill the requirements, the virtual machine is added to a new cluster with a still-unused physical machine as its center. In this way every VM is assigned to a cluster center.
A new cluster center, the nearest physical machine, is then calculated for every cluster. If any cluster centers changed during the last iteration and the maximum number of iterations has not been reached, emptied clusters are reused for adding virtual machines. Otherwise, the VMs of each cluster are migrated to the physical machines representing the cluster centers, and finally the physical machines that are not cluster centers are shut down.

3.5 ENERGY EFFICIENT VM CONSOLIDATION IN EUCALYPTUS
This approach [6] is based on storage synchronization: explicit storage synchronization and VM live migration phases are introduced instead of permanently synchronizing the disk image over the network. The technique leverages the Distributed Replicated Block Device (DRBD), which is typically used for high-availability data storage in distributed systems. The DRBD module works in two modes: stand-alone and synchronized. In stand-alone mode, all disk accesses are passed to the underlying disk driver. In synchronized mode, disk writes are both passed to the underlying disk driver and sent to a backup machine over the network, while disk reads are served locally.
A multi-layered root file system (MLRFS) is used for the virtual machine's root image.

The basic image is cached on a local disk. A separate layer stores the local modifications, which are transparently overlaid on the basic image to form a single logical file system using the Copy-On-Write (COW) mechanism. Thus only the local modifications are sent during the disk synchronization phase.

3.6 SERCON: SERVER CONSOLIDATION ALGORITHM
The Sercon server consolidation algorithm proposed by Aziz Murtazaev et al. [9] aims to consolidate virtual machines onto a minimum number of nodes while also reducing the number of VM migrations. Certain constraints are assumed in Sercon:
Compatible virtualization software on the systems
Comparable types of CPUs
Similar network connectivity and shared storage
A well-chosen CPU threshold, to prevent performance degradation
A VM migration is performed only if it results in releasing a node. The procedure is as follows: the nodes are sorted in decreasing order of their VM loads. The VMs on the least-loaded nodes are chosen as migration candidates and sorted by their weights, then allocated to the most-loaded nodes first, compacting them so that the least-loaded nodes can be released. This avoids migrations that would otherwise be necessary if some nodes remained lightly loaded. These steps are repeated until no more migrations are possible. CPU and memory are considered for representing the load of the VMs.

3.7 MIYAKODORI: MEMORY REUSING MECHANISM
MiyakoDori [10] proposes a memory reusing mechanism to reduce the amount of data transferred during live migration. Under dynamic VM consolidation, a VM may migrate back to a host on which it was once executed, in which case the memory image left on that host can be reused, contributing to shorter migration times and greater optimization opportunities for VM placement algorithms. This technique reduces the total migration time even further.
Only the dirty pages, those modified since the VM left, need to be transferred back to the former host.

Fig 2: Basic idea of memory reusing. (1) The VM's memory is copied from host A to host B when migrating out; (2) when migrating back, only the updated pages are copied, because the old memory image on host A is reused.

3.8 ENERGY OPTIMIZED AND WORKLOAD ADAPTIVE MODELING
By taking workload characteristics into account, workload adaptive modeling avoids a considerable number of unnecessary live migrations and achieves stable live migration under various workloads, thus reducing energy usage. To handle changing workloads and realize minimal energy cost, two models are established: an energy-guided migration model and a workload adaptive model.

In the energy-guided migration model [11], the VM with the smallest cost is selected, which should both satisfy minimal energy consumption and maintain certain performance levels. Minimizing the migration cost means selecting a VM with smaller memory and a larger CPU reservation. This model solves the problem of selecting one VM whose migration achieves minimum energy cost: when the workload on a physical node gets heavy, the node evaluates the energy-cost function and decides on the optimal VM. If two workloads would otherwise need to be migrated, it is preferable to migrate a single VM with a heavy workload; suspending one application is always considered better than suspending two.

In the workload adaptive model [11], three correlations are investigated:
Workload and performance: Using a subtractive clustering algorithm, the workloads executed in parallel by the VMs of each physical server are characterized into clusters. Given the current workload, performance can be estimated from the performance of the cluster containing that workload.
Workload and migration moment: After the current workload is categorized into a cluster, the center and performance of the readjusted cluster are calculated; if a performance threshold is exceeded, live migration is requested.
Workload and energy consumption: The cluster with minimal energy per workload is selected as the optimal cluster, and the physical server with the smallest distance between the optimal cluster and the workload is selected as the best candidate. This method minimizes VM migration oscillation and selects the one physical server that optimizes energy efficiency.

3.9 ENERGY AWARE RESOURCE ALLOCATION HEURISTICS
Currently, resource allocation in a cloud data center aims to provide high performance while meeting SLAs; it does not consider allocating VMs so as to minimize energy consumption. Exploring performance together with energy efficiency raises three main issues.
First, excessive power consumption by a server compromises its reliability. Second, turning off resources in a dynamic environment is risky from a QoS perspective. Third, ensuring SLAs in such a virtualized environment is quite challenging. Addressing these issues requires effective consolidation policies that are both energy efficient and maintain strict SLAs. The problem of VM allocation thus has two parts: first, provisioning VMs and placing them on hosts; second, optimizing the current VM allocation. The optimization again has two steps: selecting the VMs to be migrated and deciding where the chosen VMs should be placed.

Non Power Aware (NPA) [4] policy: This policy applies no power aware optimizations; all host machines may run at 100% CPU utilization and consume maximum power.

Single Threshold (ST) [4] policy: Only an upper utilization threshold is set, and VMs are placed so that the total CPU utilization stays below it. The main aim is to preserve free resources and prevent SLA violations caused by consolidation.

Other policies use both upper and lower utilization thresholds. They include:

Minimization of Migrations (MM) [12] policy: This policy selects the minimum number of VMs to migrate from a host in order to keep its CPU utilization below a predefined upper threshold. The best VM satisfies two conditions: first, its utilization must be higher than the difference between the host's overall utilization and the upper threshold; second, after migrating it, the difference between the upper threshold and the host's new utilization must be the minimum over all candidate VMs. The algorithm stops when the host's new utilization is below the upper utilization threshold.
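A minimal sketch of the MM selection loop described above, assuming utilizations are given as integer percentages of host CPU capacity (the function name and data layout are illustrative, not the paper's notation):

```python
# Sketch of the Minimization of Migrations (MM) selection as surveyed
# from [12]: repeatedly pick the smallest VM whose removal clears the
# excess, leaving host utilization as close to (but not above) the
# upper threshold as possible.

def mm_select(vm_utils, host_util, upper):
    """Return the utilizations of the VMs chosen for migration."""
    to_migrate = []
    vms = list(vm_utils)
    while host_util > upper and vms:
        excess = host_util - upper
        # VMs big enough to clear the excess in a single migration
        candidates = [u for u in vms if u > excess]
        # the smallest such VM minimizes (upper - new utilization);
        # if no single VM suffices, migrate the largest and repeat
        best = min(candidates) if candidates else max(vms)
        vms.remove(best)
        to_migrate.append(best)
        host_util -= best
    return to_migrate

print(mm_select([10, 20, 30, 40], host_util=100, upper=80))  # [30]
```

In the example, migrating the 30% VM leaves the host at 70%, closer to the 80% threshold than migrating the 40% VM would, so only one migration is needed.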
Highest Potential Growth (HPG) [12] policy: Whenever the upper threshold is violated, HPG migrates the VMs having the lowest CPU usage relative to their reserved CPU capacity, in order to minimize the potential increase of the host's utilization and thus prevent an SLA violation.

Random Choice (RC) [12] policy:

This policy relies on randomly selecting the number of VMs needed to bring the host's CPU utilization below the upper utilization threshold.

IV. CONCLUSION AND FUTURE WORK

Energy management in data centers is one of the most challenging issues faced by infrastructure providers. With data volumes and processing demands growing daily, power consumption cannot be controlled to the extent desired without primarily affecting performance. This paper surveyed existing techniques for energy efficient VM live migration, all of which focus on minimizing energy consumption. The approaches include application scheduling, DVFS, adaptive utilization thresholds, and storage synchronization, as well as workload adaptive models, memory reusing techniques, and various resource allocation techniques. No single technique overcomes every barrier to energy efficiency, since each addresses different parameters and carries its own disadvantages. Future techniques can be proposed to reduce energy consumption and overcome the energy-performance trade-offs, and various scheduling and consolidation techniques can be applied to reduce CPU utilization and bring about drastic improvements in energy efficiency.

REFERENCES

[1] Bo Li, Jianxin Li, Jinpeng Huai, Tianyu Wo, Qin Li, Liang Zhong, EnaCloud: An Energy-saving Application Live Placement Approach for Cloud Computing Environments, International Conference on Cloud Computing, IEEE 2009.
[2] Anton Beloglazov et al., Power-aware Provisioning of Cloud Resources for Real-time Services, Proceedings of the 7th International Workshop on Middleware for Grids, Clouds and e-Science, ACM 2009.
[3] Anton Beloglazov, R. Buyya, Adaptive Threshold-Based Approach for Energy-Efficient Consolidation of Virtual Machines in Cloud Data Centers, Proceedings of the 8th International Workshop on Middleware for Grids, Clouds and e-Science, ACM 2010.
[4] R. Buyya, A. Beloglazov, Jemal Abawajy, Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges, Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications, 2010.
[5] Daniel Versick, Djamshid Tavangarian, Reducing Energy Consumption by Load Aggregation with an Optimized Dynamic Live Migration of Virtual Machines, International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, IEEE 2010.
[6] Pablo Graubner, Matthias Schmidt, Bernd Freisleben, Energy-efficient Management of Virtual Machines in Eucalyptus, 4th International Conference on Cloud Computing, IEEE 2011.
[7] Ching-Chi Lin, Pangfeng Liu, Jan-Jan Wu, Energy-Aware Virtual Machine Dynamic Provision and Scheduling for Cloud Computing, 4th International Conference on Cloud Computing, IEEE 2011.
[8] Haikun Liu, Cheng-Zhong Xu, Jiayu Gong, Xiaofei Liao, Performance and Energy Modeling for Live Migration of Virtual Machines, ACM 2011.
[9] Aziz Murtazaev, Sangyoon Oh, Sercon: Server Consolidation Algorithm using Live Migration of Virtual Machines for Green Computing, IETE Technical Review, Vol. 28, Issue 3, 2011.
[10] Soramichi Akiyama, Takahiro Hirofuchi, Ryousei Takano, Shinichi Honiden, MiyakoDori: A Memory Reusing Mechanism for Dynamic VM Consolidation, Fifth International Conference on Cloud Computing, IEEE 2012.
[11] Bing Wei, A Novel Energy Optimized and Workload Adaptive Modeling for Live Migration, International Journal of Machine Learning and Computing, Vol. 2, No. 2, April 2012.
[12] Anton Beloglazov, Jemal Abawajy, Rajkumar Buyya, Energy-aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing, Future Generation Computer Systems, Volume 28, Issue 5, ACM 2012.
[13] Jing Xu et al., Multi-objective Virtual Machine Placement in Virtualized Data Center Environments, IEEE/ACM International Conference on Green Computing and Communications, 2010.
[14] Anton Beloglazov et al., Optimal Online Deterministic Algorithms and Adaptive Heuristics for Energy and Performance Efficient Dynamic Consolidation of Virtual Machines in Cloud Data Centers.
[15] Hady S. Abdelsalam et al., Analysis of Energy Efficiency in Clouds, In Proc. of the Computation World: Future Computing, Service Computation, Cognitive, Adaptive, Content, Patterns, 2009, pp. 416-421.
[16] H. N. Van, F. D. Tran, J. M. Menaud, Performance and Power Management for Cloud Infrastructures, 3rd International Conference on Cloud Computing, IEEE Press, Jul. 2010.

[17] Daniel Gmach, Jerry Rolia, Ludmila Cherkasova, Guillaume Belrose, Tom Turicchi, Alfons Kemper, An Integrated Approach to Resource Pool Management: Policies, Efficiency and Quality Metrics, 38th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2008), June 24-27.
[18] Christopher Clark, Live Migration of Virtual Machines, Proceedings of the 2nd Conference on Symposium on Networked Systems Design & Implementation, Volume 2, pp. 273-286, ACM 2005.
[19] Cloud Computing for Dummies, Wiley Publishing, Inc.
[20] Qi Zhang et al., Cloud Computing: State-of-the-art and Research Challenges, Springer 2010.
[21] Anton Beloglazov, Energy Efficient Resource Management in Virtualized Cloud Data Centers, In Proc. of the 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, IEEE Press, 2010, pp. 826-831.
[22] Virtualization Techniques & Technologies: State-of-the-art, Journal of Global Research in Computer Science, Dec 2011.

AUTHORS

Jyothi Sekhar received her B.Tech in Information Technology from the Institute of Engineering and Technology, Kerala, in 2010 and is now in the final year of her M.Tech in Network and Internet Engineering at Karunya University, Coimbatore. Her main areas of interest are energy efficiency in cloud data centers and virtual machine live migration.

Getzi Jeba is currently working as an Assistant Professor at Karunya University, Coimbatore, where she completed her M.Tech in Network and Internet Engineering in 2009. Her areas of interest are routing in computer networks and cloud computing, and she is pursuing a PhD in energy efficient live migration techniques. She is a member of the Computer Society of India.

S Durga is currently working as an Assistant Professor in the Information Technology department of Karunya University. Her areas of interest include internetworking, security, and pervasive computing. She is currently pursuing a PhD in mobile cloud computing.