Power-aware Heuristic Vector based Virtual Machine Placement in Heterogeneous Cloud Scenarios

1 Guan Le, 2 Ke Xu, 3 Meina Song, 4 Junde Song
1, First Author: Beijing University of Posts and Telecommunications, optimism1226@gmail.com
*2, Corresponding Author: Beijing University of Posts and Telecommunications, xu_ke@bupt.edu.cn
3 Beijing University of Posts and Telecommunications, mnsong@bupt.edu.cn
4 Beijing University of Posts and Telecommunications, jdsong@bupt.edu.cn

Abstract

The ever-increasing energy consumption of cloud datacenters has been receiving growing attention and motivates power-aware technologies to minimize power consumption. At the resource management level, Virtual Machine (VM) placement is a feasible way to save power by allocating physical resources. In this paper, we propose a VM placement algorithm that allocates multiple resources in heterogeneous cloud scenarios for power saving. A vector bin-packing problem is formulated based on an extended power model combining the power consumption of CPU and memory. After analyzing the drawbacks of typical vector based algorithms, we introduce the Power-aware Heuristic Vector Placement (PHVP) algorithm for heterogeneous VM placement. PHVP is decomposed into two stages: active host placement and idle host activation. In active host placement, PHVP applies Greedy Randomized Adaptive Search Procedures (GRASP) to expand the decision space and reduce the risk of local optima. A Power-aware Best-Fit Host Activation algorithm is introduced to give priority to low-power hosts for activation. Simulation results show that PHVP outperforms FFD and vector based approaches and achieves better power saving in heterogeneous environments.

Keywords: Cloud Computing; VM Placement; Heuristic Algorithms

1. Introduction

Cloud computing can be classified as a new IT resource management paradigm, in which various abstraction levels of computing service are delivered to users in a flexible way.
In this paper we focus on the Infrastructure as a Service (IaaS) model, which deals with dynamic provisioning of fundamental computing resources via Virtual Machine (VM) technology in centralized large data centers. As cloud computing becomes more widespread, the energy consumption of datacenters grows dramatically, which has been receiving increasing attention in recent years. A recent survey showed that data centers accounted for approximately 1.2% of total United States electricity consumption in 2005 [1]. High power consumption results in high energy costs and thus reduces the Return on Investment (ROI) of cloud providers. Therefore, power-aware technologies should be proposed and implemented to minimize power consumption in datacenters while ensuring service delivery. Lowering the energy usage of data centers is a challenging and complex issue due to the complexity of the infrastructure and the diversity of computing applications. Several technologies have been proposed to save power, such as switching computing nodes on/off, cooling systems and Dynamic Voltage and Frequency Scaling (DVFS). At the resource management level, the process of VM placement is a feasible way to minimize energy consumption. VM placement technologies deal with physical resource allocation, which is required for server consolidation. When VMs do not use any resources, they can be logically resized and consolidated on a minimal number of physical nodes, while idle nodes can be switched off [2]. Most research work has assumed that an IaaS cloud consists of homogeneous nodes. However, with the continued investment in IT facilities, cloud infrastructures exhibit heterogeneity. This paper discusses VM placement applicable to heterogeneous cloud scenarios, aiming to reduce power consumption. VM placement is subject to the multiple resource limitations of Physical Machines (PMs), such as CPU, memory, I/O, etc. Both the demands of VMs and the capacities of PMs span several dimensions.
Thus VM placement can be modeled as a Vector Bin-Packing problem, with VMs as items and PMs as bins. Vector Bin-Packing is a combinatorial NP-hard problem, and several heuristic algorithms have been developed to cope with it, such as First-Fit (FF), Best-Fit (BF), First-Fit Decreasing (FFD) and Best-Fit Decreasing (BFD).

Advances in Information Sciences and Service Sciences (AISS), Volume 4, Number 19, Oct 2012. doi: 10.4156/AISS.vol4.issue19.74

The core of these heuristic algorithms is to find a suitable node on which to locate the VM according to a specific fit condition. pMapper [3] employed FFD to minimize power and migration cost in VM placement. In [4], the authors used FF for assigning VMs for placement and power consolidation. Anton Beloglazov [5] modified BFD for energy-aware allocation, choosing the most power-efficient nodes first. Weiming Shi [6] selected an FF policy to implement profitable VM placement. Although heuristic algorithms are quite efficient and easy to implement, their bin-packing operations are independent, and the placement may fall into local optima. Some research formulated Vector Bin-Packing as a Linear Programming (LP) model and used programming related techniques to solve the problem. The authors of [7] proposed LP-relaxation based algorithms for server consolidation; David Breitgand [8] defined a combinatorial optimization to maximize the provider's benefit and obtained near optimal solutions via a column generation method. However, the complexity of LP depends on the input size, so it is not feasible for large VM placement instances. Also, meta-heuristic algorithms have been proposed to obtain near optimal placements, such as the Genetic Algorithm [9], Simulated Annealing [10] and the Ant Colony algorithm [11]. These algorithms optimize VM placement by iteratively trying to improve a candidate solution with regard to a predefined quality measure. However, the parameters used in these algorithms must be determined through repeated experiments for each particular scenario. Especially in a heterogeneous environment, this non-generic feature decreases the performance of VM placement. Furthermore, the solution space of meta-heuristic algorithms expands exponentially as the number of PMs and VMs increases. Therefore, obtaining a more optimal solution requires more time to execute the algorithms.
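As a concrete illustration of how these fit-based heuristics operate, here is a minimal single-resource First-Fit Decreasing sketch (a generic illustration, not the exact procedure of any cited paper):

```python
def ffd_place(demands, capacity):
    """First-Fit Decreasing: sort demands in descending order, place each
    VM on the first open host with enough remaining capacity, and open
    (activate) a new host when none fits."""
    remaining = []            # remaining capacity of each open host
    mapping = []              # (demand, host index) pairs
    for d in sorted(demands, reverse=True):
        for j, free in enumerate(remaining):
            if d <= free:
                remaining[j] -= d
                mapping.append((d, j))
                break
        else:                 # no open host fits: activate a new one
            remaining.append(capacity - d)
            mapping.append((d, len(remaining) - 1))
    return remaining, mapping

# demands expressed as fractions of one homogeneous host's capacity
remaining, mapping = ffd_place([0.5, 0.7, 0.3, 0.2, 0.4], capacity=1.0)
# three hosts are opened for a total demand of 2.1
```

The decreasing sort is what distinguishes FFD from plain First-Fit: packing large items first tends to leave fewer unusable gaps.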
The simulation results in [11] show that when handling 600 VM placement requests, the ant colony algorithm took 2.01 hours to execute, while FFD took only 1.75 seconds. Recently, vector based heuristic algorithms have been proposed to solve the VM placement problem [12-15]. Vector based methods organize multiple resource attributes as a resource vector and compute the dot product between the requested VM resource vector and the PM remaining resource vector. In contrast to simple heuristic algorithms, vector based heuristic algorithms select the suitable PM based on the dot product. As [12] described, vector based heuristic algorithms achieve better performance than simple heuristic algorithms. Singh et al. [13] designed load balancing algorithms based on the dot product of resource vectors. Mayank Mishra [14] analyzed the anomalies of existing VM placement solutions and proposed a vector arithmetic methodology to address those limitations. The above work deals with homogeneous scenarios, while Mark Stillwell [15] developed a problem formulation and a vector packing approach for heterogeneous VM placement, whose main objective is to maximize the minimum yield over all services. However, typical vector based heuristic methods suffer from local optima as well, according to the analysis of [14]. In this paper we present a VM placement method for heterogeneous cloud scenarios that extends vector based heuristic algorithms, aiming to reduce power consumption. The main contributions of this work can be summarized as follows. We give a formulation of the placement problem in heterogeneous scenarios; the power model in this formulation considers not only the power consumption of the CPU but also that of memory, which is reasonable for recent datacenters, and we prove that under this formulation the power minimization problem becomes a server consolidation problem. We analyze the flaw of the typical vector heuristic algorithm and propose our novel Power-aware Heuristic Vector Placement (PHVP) algorithm.
We divide VM placement into two stages: active host placement and idle host activation. A Greedy Randomized Adaptive Search Procedures (GRASP) based allocation algorithm is proposed to overcome the shortcoming of local optima, and we design a Best-Fit algorithm for activating the next idle host with the least power consumption. We implement the proposed placement algorithm within the open source cloud simulator CloudSim [16] and evaluate the performance of VM placement against typical vector based and FFD algorithms. The results indicate that PHVP outperforms the evaluated approaches through better resource utilization. The remainder of this paper is organized as follows. In Section 2, we formalize the VM placement problem. Section 3 details the design of the PHVP algorithm. Experimental evaluations are given in Section 4, while Section 5 concludes the paper and discusses future work.
2. Problem Formulation

This section describes the assumptions of this work and formulates a novel power model to build an accurate view of power consumption. Based on this model, we provide a formal definition of the VM placement problem as a Vector Bin-Packing (VBP) problem. Finally, we deduce that the objective of power minimization is equivalent to server consolidation.

2.1. Assumptions

In on-demand clouds, there are two possible kinds of VM placement request: initial placement and dynamic placement [17]. In the initial placement scenario, newly arrived VM requests are allocated to PMs that are not completely used, without modifying the placement of previous VMs. Dynamic placement, by contrast, deals with the migration of VMs as PM availability changes or the demand of the services hosted in VMs varies [18-19]. Since initial placement algorithms can be modified to solve dynamic placement, we restrict our study to initial VM placement only. We assume a heterogeneous environment in which the capabilities of physical machines can be categorized into several types, but the power model of all PMs is identical, and the power consumption of each PM can be computed with the formulas presented in Section 2.2. Furthermore, to simplify the process of VM placement, we assume that the requested VMs have a few stable multi-dimensional resource demands, like the Amazon EC2 service, which provides different levels of VM instances. Hosted applications and workloads may change their resource requirements dynamically, but the capacity of a VM does not vary once the VM has been located on a specified PM. Moreover, the utilization of each PM can be measured in time for VM placement decisions. Finally, to express our ideas fluently, we treat the notions of PM and host as equivalent.

2.2. Power Model

Power consumption of computing nodes is mostly determined by the CPU, memory and disk storage. Recent power metering of virtualized servers showed that the CPU consumes 58% of total power, memory 28% and disk 14% [20].
Since the CPU consumes the main part of the energy, most of the literature focuses on the management of CPU power consumption and efficient usage. However, modern multi-core processors are much more power-efficient, and with the increase of memory size in modern servers, the power consumption of memory cannot be overlooked. In this paper, we build a novel power model, taking both CPU and memory into consideration, which indicates holistic power consumption.

2.2.1. CPU Model

Typically, the power consumption of the CPU is modeled as a linear function of CPU utilization, based on several metering environments. But the results of the SPECpower_ssj2008 benchmark [21], the first industry-standard benchmark that evaluates the power and performance characteristics of volume server class computers, illustrate a piecewise linear relationship, with the critical point at approximately 10% CPU utilization. So we define the CPU model by a piecewise function, as shown in Equation (1), where $P_{cmax}$, $P_{clow}$ and $P_{cidle}$ are the maximum power (100% utilization), low power (10% utilization) and idle power respectively, and $u_{cpu}$ denotes the current CPU utilization.

$$P_{cpu} = \begin{cases} P_{clow} + \frac{10}{9}\,(P_{cmax} - P_{clow})(u_{cpu} - 0.1), & u_{cpu} \ge 0.1 \\ P_{cidle} + 10\,(P_{clow} - P_{cidle})\, u_{cpu}, & u_{cpu} < 0.1 \end{cases} \quad (1)$$

To simplify the expression of the CPU power model, we use Equation (2) to substitute (1), where $k_1$ and $k_2$ denote the slope of each range.
$$P_{cpu} = \begin{cases} P_{c1} + k_1 u_{cpu}, & u_{cpu} \ge 0.1 \\ P_{c2} + k_2 u_{cpu}, & u_{cpu} < 0.1 \end{cases} \quad (2)$$

The results of the SPECpower_ssj2008 benchmark also show that the rate of change at low CPU utilization is higher than that at high CPU utilization, so we can conclude that $k_1 < k_2$. This piecewise linear model approximates realistic CPU power consumption.

2.2.2. Memory Model

The adoption of virtualization technology increases the power consumption of memory. Memory is now packaged in dual in-line memory modules (DIMMs), and the memory in a server with eight 1 GB DIMMs would consume 80 W [22], making it a significant power consumer. In [23], a linear model was given to depict memory power consumption. Thus we simply introduce a linear power model:

$$P_{mem} = P_{midle} + (P_{mmax} - P_{midle})\, u_{mem} \quad (3)$$

Also, we simplify (3) as follows:

$$P_{mem} = P_m + k_3 u_{mem} \quad (4)$$

2.3. Placement Problem Formulation

We consider a data center infrastructure composed of M hosts, and three major resources for placement: CPU, memory and I/O. Thus each PM can be denoted as a three-dimensional vector, and each resource dimension has a fixed capacity, namely $\langle c_j^{cpu}, c_j^{mem}, c_j^{io} \rangle$, where $j \in \{1, \dots, M\}$. Assume that there are N VM allocation requests. Each VM has a resource demand, namely $\langle r_i^{cpu}, r_i^{mem}, r_i^{io} \rangle$, $i \in \{1, \dots, N\}$. We define the binary variable $x_{ij}$ to indicate the allocation decision: $x_{ij} = 1$ indicates that VM i is placed on PM j. We assume that the datacenter is big enough to allocate all VMs, so each VM should be placed on exactly one PM, as Equation (5) describes:

$$\sum_{j=1}^{M} x_{ij} = 1 \quad (5)$$

VM placement must respect the resource limitations of each PM. Consequently, the sum of each resource demand of the VMs placed on a PM must not exceed the capacity of that PM:

$$\sum_{i=1}^{N} r_i^{cpu} x_{ij} \le c_j^{cpu}, \qquad \sum_{i=1}^{N} r_i^{mem} x_{ij} \le c_j^{mem}, \qquad \sum_{i=1}^{N} r_i^{io} x_{ij} \le c_j^{io} \quad (6)$$

Our placement objective is to minimize power consumption.
From (2) and (4) we can obtain the total power of each PM, so the power consumption optimization can be written as:

$$\min \sum_{j=1}^{M} \left( P_{cpu}(j) + P_{mem}(j) \right) \quad (7)$$

To solve the power minimization problem, we introduce a theorem to simplify problem solving.

Theorem 1. Under the power model of (2) and (4), if fewer hosts are used in the process of VM placement, the power consumption is lower.
Proof. Assume there are two placement methods A and B, and let $m_A$ and $m_B$ denote their numbers of active hosts respectively, with $m_A > m_B$. If placement A transforms to B, the resources previously allocated on mutative hosts must move to constant hosts. In this context, mutative hosts are those hosts which will be turned off, while constant hosts remain active. We use $\bar{u}_{cpu}$ and $\bar{u}_{mem}$ to denote the average CPU and memory usage of the mutative hosts, and $k_1$, $k_2$, $k_3$ to denote the average power slopes among the mutative hosts. In particular, $\bar{u}_{cpu} \ge 0.1$ and $k_1 < k_2$. After the transition, the CPU utilization of the mutative hosts returns to 0, so the power reduction of the mutative hosts is:

$$P_{reduce} = (m_A - m_B)\left( k_1 \bar{u}_{cpu} + k_2 \bar{u}_{cpu} - 0.1\, k_1 + k_3 \bar{u}_{mem} \right)$$

The power addition of the constant hosts is:

$$P_{add} = (m_A - m_B)\left( k_1 \bar{u}_{cpu} + k_3 \bar{u}_{mem} \right)$$

Then the power saving is:

$$P_{save} = P_{reduce} - P_{add} = (m_A - m_B)\left( k_2 \bar{u}_{cpu} - 0.1\, k_1 \right) > 0$$

which is positive since $\bar{u}_{cpu} \ge 0.1$ and $k_2 > k_1$. Therefore, according to Theorem 1, the objective of power consumption optimization can be transformed into server consolidation. In the next section, we discuss our PHVP method to minimize the number of active hosts.

3. Power-Aware Heuristic Vector Placement

This section presents the design of our heuristic vector based algorithm to solve the previously defined Vector Bin-Packing problem. First we introduce the terminology used in our algorithms. After analyzing the flaw of typical vector based algorithms, we propose the novel power-aware heuristic vector placement algorithm, which consists of a GRASP based heuristic vector placement algorithm and a power-aware best-fit host activation algorithm.

3.1. Terminology

In PHVP, three resources are taken into consideration. We normalize these resources along each dimension and use <cpu, mem, io> to denote the related vectors of a PM. The total capacity of a PM is defined as <1, 1, 1>, namely the Total Capacity Vector (TCV).
The Utilized Resource Vector (URV) refers to the current resource utilization of a PM, and the Remaining Resource Vector (RRV) represents the remaining fraction of each resource of the PM. Obviously, the TCV equals the sum of the URV and the RRV. The resource requirement of a VM is defined as its Demand Resource Vector (DRV), which is the demand vector normalized by the total capacity of the PM. The expression of the DRV is described in (8):

$$R_i = \left\langle R_i^{cpu}, R_i^{mem}, R_i^{io} \right\rangle = \left\langle \frac{r_i^{cpu}}{c^{cpu}}, \frac{r_i^{mem}}{c^{mem}}, \frac{r_i^{io}}{c^{io}} \right\rangle \quad (8)$$

The dot product captures the relationship between a VM's normalized demand and a PM's utilization. Let $u_j^{cpu}$, $u_j^{mem}$ and $u_j^{io}$ denote the real-time utilization of CPU, memory and I/O of host j respectively. Then the dot product is defined as follows:

$$dotproduct(host_j, vm_i) = u_j^{cpu} R_i^{cpu} + u_j^{mem} R_i^{mem} + u_j^{io} R_i^{io} \quad (9)$$

We divide the process of VM placement into two stages: 1) how to pick the most appropriate active PM on which to place a VM; 2) if there is no appropriate active PM, how to activate a fit idle PM for the allocation. The former stage deals with the shortcoming of the VectorDot algorithm discussed below, while the latter settles the power-aware problem related to heterogeneous scenarios.
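Equations (8) and (9) translate directly into code. The following sketch (with a hypothetical host whose capacity matches Table 1's Host1 and a "Small" instance from Table 2) shows the normalization and the dot product:

```python
def drv(vm_demand, pm_capacity):
    """Demand Resource Vector (Eq. 8): the VM's <cpu, mem, io> demand
    normalized per-dimension by the PM's total capacity."""
    return tuple(r / c for r, c in zip(vm_demand, pm_capacity))

def dot_product(pm_util, vm_drv):
    """Eq. 9: dot product of the PM's real-time utilization vector
    <u_cpu, u_mem, u_io> and the VM's DRV."""
    return sum(u * r for u, r in zip(pm_util, vm_drv))

cap = (3067.0, 16384.0, 50.0)   # MIPS, MB, Mbit/s (cf. Table 1, Host1)
vm = (1000.0, 1700.0, 10.0)     # "Small" instance of Table 2
R = drv(vm, cap)                # ~ (0.326, 0.104, 0.2)
score = dot_product((0.5, 0.3, 0.1), R)
```

The host with the highest `score` is, in VectorDot terms, the most complementary candidate for this VM.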
3.2. Vector-based Algorithm Analysis

Traditional vector based algorithms employ the VectorDot method to solve the placement problem. Basically, after receiving a VM request, VectorDot traverses all the active PMs and computes the dot product of the DRV of this VM with the RRV of each PM. The PM whose RRV gives the maximum dot product with the DRV is chosen to place the VM. The main idea of this placement method is to find the most complementary PM for the VM. For example, a PM with low CPU utilization and high memory utilization is suitable for placing VMs with high CPU requirements but low memory requirements.

Figure 1. VM Placement problem using VectorDot

This method seems good for improving resource usage, but the authors of [14] pointed out an anomaly, as shown in Figure 1. Two resource dimensions are depicted for clear observation. The figure shows two PMs which have different URVs, but the magnitudes of the URVs are the same. Because the angle between the VM's DRV and PM2's RRV is smaller than that between the VM's DRV and PM1's RRV, the dot product of the DRV and PM2's RRV is larger, and VectorDot would select PM2 as the target PM. However, it is clear from Figure 1 that PM1 is the better choice, as PM2 retains a smaller remaining space (in gray) for following VM requests. Thus VectorDot does not always select the best-fit PM. This defect arises because the TCV is neglected in the decision process and the search reaches a local optimum. To overcome this defect, [15] considered all the related vectors of a PM and mapped these 3D vectors into a 2D hexagon to make the decision; in addition, several thresholds are introduced to distinguish the priority of PMs. Yet this method is hard to extend, as it is impossible to transform vectors of more dimensions into a plane figure. Moreover, the threshold based algorithm leaves more effort to determine parameters.
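The anomaly can be reproduced on a small numeric instance. In this sketch (two dimensions; the numbers are illustrative and not those of Figure 1), the maximum-dot-product rule picks the second PM even though the first would leave more usable space:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rrv(urv):
    return tuple(1.0 - u for u in urv)      # TCV is <1, 1>

def vectordot_pick(pm_urvs, vm_drv):
    """Baseline VectorDot: select the PM whose RRV yields the largest
    dot product with the VM's DRV."""
    return max(range(len(pm_urvs)),
               key=lambda j: dot(rrv(pm_urvs[j]), vm_drv))

# two PMs whose URVs have equal magnitude but different directions
pm_urvs = [(0.30, 0.30), (0.42, 0.06)]
vm = (0.30, 0.30)
chosen = vectordot_pick(pm_urvs, vm)        # selects the second PM (index 1)
# after placing the VM there, its remainder is (0.28, 0.64): a second
# identical VM no longer fits, whereas the first PM would have kept
# (0.40, 0.40) and could host it
```

This is exactly the local-optimum behavior the GRASP stage of PHVP is designed to mitigate.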
3.3. GRASP Based Heuristic Vector Placement

Greedy Randomized Adaptive Search Procedures (GRASP) is a meta-heuristic for solving combinatorial optimization problems [24]. To reduce the probability of local optima, GRASP uses a two-phase iterative process: a construction phase and a local search phase. In the construction phase, a candidate list of feasible solutions, named the Restricted Candidate List (RCL), is iteratively constructed based on heuristic methods. After the construction of the RCL, a local search algorithm works in an iterative fashion by successively replacing the current solution with a better one. Thus, local optima can be escaped in the local search phase. As the problem of VectorDot derives from local optima, we propose GRASP based heuristic vector placement. The pseudocode of the algorithm is depicted in Figure 2. The algorithm takes the PM list, VM list and RCL size as input, and handles each VM placement request in the VM list one by one. In each iteration, several intermediate variables are initialized as NULL (lines 2 to 4). Then the algorithm traverses all the active hosts, computes the dot product of the VM with each host that has enough resources for placement, and inserts the result into dplist (lines 5 to 10). If no host meets the requirements of the VM, a new host is activated using the host activation algorithm described in Section 3.4 (lines 11 to 13).
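The selection loop of Figure 2 can be sketched as follows (the host representation and function names are our own; with r = 1 the procedure degenerates to deterministic VectorDot):

```python
import random

def grasp_select(active_hosts, vm_drv, r=5, rng=random):
    """One placement decision of the GRASP stage: compute the dot product
    (Eq. 9) for every active host that can fit the VM, keep the r largest
    as the Restricted Candidate List, and draw the target host at random
    from the RCL. Returns None when no active host fits, in which case
    the caller runs the host activation algorithm of Section 3.4."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    feasible = [h for h in active_hosts
                if all(d <= f for d, f in zip(vm_drv, h["free"]))]
    if not feasible:
        return None
    feasible.sort(key=lambda h: dot(h["util"], vm_drv), reverse=True)
    rcl = feasible[:r]          # construction phase: best r candidates
    return rng.choice(rcl)      # randomized selection from the RCL
```

With r > 1, the random draw from the RCL lets consecutive placement decisions explore beyond the single greedy choice that traps VectorDot in local optima.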
Figure 2. GRASP based Heuristic Vector Placement

Otherwise, dplist is sorted in decreasing order and the RCL is constructed by selecting the first r entries (lines 15-16), with r >= 1. Line 17 is the local search stage of GRASP, in which the target host is randomly selected from the RCL. When all the VMs have been allocated, the complete VM-to-PM mapping is returned.

3.4. Power-aware Best-Fit Host Activation

Figure 3. Power-aware Best-Fit Host Activation

In a homogeneous environment all PMs have the same capacity, so there is no need for a special host activation method. In a heterogeneous environment, however, the power consumption of hosts differs, and it is important to evaluate the priority of these hosts for a particular VM placement. Figure 3 depicts the pseudocode of the Power-aware Best-Fit Host Activation algorithm. The main idea is to traverse all idle hosts and compute the estimated power consumption with the VM respectively (lines 3 to 4). The host with the
minimum power consumption is picked as the target to activate (lines 5 to 7). This leverages the heterogeneity of resources by choosing the most power-efficient node to activate.

3.5. Complexity Analysis

The complexity of typical heuristic algorithms such as Best-Fit and First-Fit is $O(N \log N + NM)$. In the heuristic vector placement stage, the complexity of the allocation part is $O(NMr)$, and the complexity of host activation is $O(M)$. For each VM placement request, the algorithm either searches for an appropriate PM for placement or activates a new idle PM. So the total complexity of PHVP is $O(NM(r+1))$, which is greater than that of simple heuristic algorithms. Yet the probability of local optima declines as long as r > 1, and r can be set to a small number.

4. Evaluation

In this section, we discuss a performance evaluation of the PHVP algorithm presented in Section 3. Our work targets resource allocation in heterogeneous cloud scenarios, so the algorithm should be evaluated in a large-scale virtualized datacenter. However, it is highly difficult to conduct repeatable experiments on real infrastructure due to physical limitations. To gain a first insight into our algorithms, we conducted simulation-based experiments with the CloudSim toolkit [16].

4.1. Experimental Environment

We simulated a datacenter composed of 300 heterogeneous hosts. Two types of hosts are used; their capacity and power parameters are listed in Table 1. $P_{mmax}$ and $P_{midle}$ are estimated based on the assumption of memory power consumption in [22], while the other parameters are determined from SPECpower_ssj2008.

Table 1. Capacity and Power Parameters of Hosts

Host Type ID | Host Type            | CPU (MIPS) | CPU cores | Mem (MB) | I/O (Mbit/s) | Pcmax (W) | Pclow (W) | Pcidle (W) | Pmmax (W) | Pmidle (W)
Host1        | HP ProLiant DL360 G7 | 3067       | 2         | 16384    | 50           | 216       | 95.4      | 55.6       | 42        | 15
Host2        | HP ProLiant DL165 G7 | 2300       | 2         | 32768    | 50           | 308       | 116       | 68.9       | 80        | 30
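The power models of Section 2.2 can be evaluated directly with the Table 1 parameters. The following sketch (helper names are ours) also illustrates how the power-aware activation of Section 3.4 ends up preferring Host1:

```python
def host_power(u_cpu, u_mem, p_cmax, p_clow, p_cidle, p_mmax, p_midle):
    """Total host power: piecewise CPU model (Eq. 1) plus linear
    memory model (Eq. 3)."""
    if u_cpu >= 0.1:
        p_cpu = p_clow + (10.0 / 9.0) * (p_cmax - p_clow) * (u_cpu - 0.1)
    else:
        p_cpu = p_cidle + 10.0 * (p_clow - p_cidle) * u_cpu
    return p_cpu + p_midle + (p_mmax - p_midle) * u_mem

# power parameters of Table 1 (watts)
HOST1 = dict(p_cmax=216.0, p_clow=95.4, p_cidle=55.6, p_mmax=42.0, p_midle=15.0)
HOST2 = dict(p_cmax=308.0, p_clow=116.0, p_cidle=68.9, p_mmax=80.0, p_midle=30.0)

def activate_best_fit(idle_hosts, u_cpu, u_mem):
    """Power-aware Best-Fit Activation: activate the idle host whose
    estimated power for the incoming load is minimal."""
    return min(idle_hosts, key=lambda h: host_power(u_cpu, u_mem, **h))

# e.g. at 40% CPU and 20% memory load, Host1 draws ~156.0 W and
# Host2 ~220.0 W, so Host1 is activated first
```

This numeric gap between the two host types is what drives the activation behavior analyzed in Section 4.3.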
Table 2 lists the parameters of the different types of requested VMs. Four kinds of VM instance are provided, with diverse resource demands in CPU and memory. The RCL size parameter is set to 5 (r = 5). The objective of our algorithm is to reduce the power consumption of the datacenter, and Section 2.3 concluded that minimizing power consumption is equivalent to server consolidation. Thus, the performance metrics are the number of active hosts and the total power consumption of the datacenter. In addition, each experiment was run 10 times, and we take the average of the 10 results as the final output.

Table 2. Parameters of different types of VM

VM Type     | CPU (MIPS) | CPU cores | Mem (MB) | I/O (Mbit/s)
High-CPU    | 2000       | 1         | 1500     | 10
High-Memory | 1700       | 1         | 4096     | 10
Small       | 1000       | 1         | 1700     | 10
Micro       | 500        | 1         | 613      | 10
4.2. Performance under different VM requests

To demonstrate the efficiency of our algorithm, we compare PHVP with the typical VectorDot and FFDSum algorithms. FFDSum is an extension of FFD which sorts the VM list in descending order of the sum of the multiple resource requests. In this experiment, each of the two host types accounts for one half of the hosts, that is, there are 150 hosts of each type. Different numbers of VM requests are generated to compare the performance of the VM placement algorithms. The simulation results are presented in Figure 4. It is apparent from (a) that VectorDot and FFDSum achieve similar performance; thus the typical vector based algorithm gains no advantage in the heterogeneous scenario. However, PHVP surpasses these typical algorithms, reducing the number of active hosts by 10% on average when the number of VM requests is in [100, 700]. The effect of this advantage declines as the number of VM requests increases. The main reason is that the remaining capacity of the active hosts for further VM placement becomes smaller, so more idle hosts must be activated. This phenomenon is also shown in (b), in which PHVP saves 10.3% power compared with VectorDot and FFDSum when the number of VM requests is in [100, 700]. Therefore, PHVP achieves better power saving than the typical VectorDot and FFDSum algorithms.

Figure 4. Amounts of hosts and power consumption under VM requests

Figure 5. Amount of hosts and power consumption in heterogeneous scenario

4.3. Impact of heterogeneous scenarios

In this section, we study the impact of heterogeneous scenarios, that is, the effect of different proportions of host types. We maintain the total number of hosts but change the percentage of Host1. According to the last section, the performance of VectorDot and FFDSum is similar, so we compare PHVP with one of these two algorithms. In this experiment, the number of VM requests is kept at 600.
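The FFDSum baseline used in this comparison can be sketched as follows (a generic illustration with normalized demand vectors; names are ours):

```python
def ffd_sum(vm_demands, host_free):
    """FFDSum: sort VM requests by the sum of their normalized resource
    demands (largest first), then First-Fit each one onto the hosts.
    `host_free` holds each host's remaining <cpu, mem> vector and is
    updated in place; returns {vm index: host index or None}."""
    order = sorted(range(len(vm_demands)),
                   key=lambda i: sum(vm_demands[i]), reverse=True)
    placement = {}
    for i in order:
        placement[i] = None
        for j, free in enumerate(host_free):
            if all(d <= f for d, f in zip(vm_demands[i], free)):
                host_free[j] = tuple(f - d
                                     for d, f in zip(vm_demands[i], free))
                placement[i] = j
                break
    return placement

vms = [(0.25, 0.125), (0.5, 0.5), (0.25, 0.25)]
hosts = [(1.0, 1.0)]
placement = ffd_sum(vms, hosts)   # all three requests fit on one host
```

Collapsing a multi-dimensional demand into a scalar sum is precisely what the vector based methods avoid, which is why FFDSum serves as the non-vector baseline here.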
Figure 5 illustrates the performance comparison of PHVP and VectorDot. The percentage of Host1 is varied from 10% to 90%, and the number of active hosts of each type under the two algorithms is displayed in (a). In general, the number of active hosts in PHVP is smaller than in VectorDot. Looking at the details, the portion of active Host1 machines in PHVP grows quickly, and after the percentage of Host1 reaches 50%, the number of active Host1 machines stays at 137 with no Host2 machines active. Therefore, PHVP executes the same placement whenever the number of Host1 machines is greater than 150. It can be concluded that PHVP gives priority to Host1 for host activation, since the power-aware best-fit activation algorithm selects hosts with lower consumption, and Host1 consumes less power than Host2. In contrast, the portion of active Host1 machines in VectorDot increases linearly with the percentage of Host1, because VectorDot uses a random strategy for host activation. Total power consumption depends on the number of active hosts, as shown in (b). The gap between PHVP and VectorDot indicates that PHVP achieves more power saving in the heterogeneous scenario. In summary, the strategy for host activation affects the performance of VM placement in heterogeneous scenarios.

5. Conclusion and Future Work

In this paper, we discuss VM placement in heterogeneous cloud scenarios, aiming to minimize power consumption. An extended power model has been proposed, combining both CPU and memory consumption. We formulate VM placement as a vector bin-packing problem and deduce that the target of power minimization is equivalent to server consolidation. After analyzing the flaw of typical vector based algorithms, we present the PHVP algorithm to optimize heterogeneous VM placement. We decompose VM placement into two stages: active host placement and idle host activation. The GRASP based heuristic vector algorithm expands the decision space and decreases the risk of local optima.
The Power-aware Best-Fit Host Activation algorithm employs a Best-Fit policy to choose low-power hosts to activate, thus adapting to heterogeneous scenarios. Simulation results show that the typical vector based algorithm achieves no improvement over the FFD algorithm. However, under a certain range of VM requests, PHVP outperforms the VectorDot and FFD algorithms, with 10.3% power saving. In addition, the strategy for host activation affects the performance of VM placement in heterogeneous scenarios, and our power-aware strategy is more power-friendly than the FFD algorithm. In the future, we will conduct more complex experiments to evaluate our algorithm by adding more types of hosts and power models. Moreover, other placement constraints should be taken into account in VM placement, such as VM anti-collocation/collocation or user defined policies, which result in multi-objective optimization. Last but not least, we plan to investigate dynamic VM placement algorithms based on PHVP. Such algorithms should consider the actual resource utilization of VMs under certain workloads, thus leading to VM sizing or migration of VMs.

Acknowledgments

This work is supported by the National Key Project of Scientific and Technical Supporting Programs of China (Grant No. 2009BAH39B03); the National Natural Science Foundation of China (Grant No. 61072060); the National High Technology Research and Development Program of China (Grant No. 2011AA100706); the Research Fund for the Doctoral Program of Higher Education (Grant No. 20110005120007); the Fundamental Research Funds for the Central Universities (2012RC0205); the Co-construction Program with the Beijing Municipal Commission of Education; and the Engineering Research Center of Information Networks, Ministry of Education.

References

[1] Baliga, J., R. W. A. Ayre, et al., "Green Cloud Computing: Balancing Energy in Processing, Storage, and Transport", Proceedings of the IEEE, vol.99, no.1, pp.149-167, 2011.
[2] R. Buyya, A. Beloglazov, J.
Abawajy, "Energy-efficient management of data center resources for cloud computing: a vision, architectural elements, and open challenges", in Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010.
[3] Verma, A., P. Ahuja, et al., "pMapper: power and migration cost aware application placement in virtualized systems", in Proceedings of the 9th ACM/IFIP/USENIX International Conference on Middleware, Leuven, Belgium, Springer-Verlag New York, Inc., pp.243-264, 2008.
[4] Cardosa, M., M. R. Korupolu, et al., "Shares and utilities based power consolidation in virtualized server environments", IFIP/IEEE International Symposium on Integrated Network Management (IM '09), 2009.
[5] Beloglazov, A., J. Abawajy, et al., "Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing", Future Generation Computer Systems, vol.28, no.5, pp.755-768, 2012.
[6] Weiming, S. and H. Bo, "Towards Profitable Virtual Machine Placement in the Data Center", Utility and Cloud Computing (UCC), 2011 Fourth IEEE International Conference on, 2011.
[7] Speitkamp, B. and M. Bichler, "A Mathematical Programming Approach for Server Consolidation Problems in Virtualized Data Centers", Services Computing, IEEE Transactions on, vol.3, no.4, pp.266-278, 2010.
[8] Breitgand, D. and A. Epstein, "SLA-aware placement of multi-virtual machine elastic services in compute clouds", Integrated Network Management (IM), 2011 IFIP/IEEE International Symposium on, 2011.
[9] Jing, X. and J. A. B. Fortes, "Multi-Objective Virtual Machine Placement in Virtualized Data Center Environments", Green Computing and Communications (GreenCom), 2010 IEEE/ACM Int'l Conference on & Int'l Conference on Cyber, Physical and Social Computing (CPSCom), 2010.
[10] Tsakalozos, K., M. Roussopoulos, et al., "VM placement in non-homogeneous IaaS clouds", in Proceedings of the 9th International Conference on Service-Oriented Computing, Paphos, Cyprus, Springer-Verlag, pp.172-187, 2011.
[11] Feller, E., L.
Rilling, et al., "Energy-Aware Ant Colony Based Workload Placement in Clouds", Grid Computing (GRID), 12th IEEE/ACM International Conference on, 2011.
[12] Rina Panigrahy, Kunal Talwar, Lincoln Uyeda, and Udi Wieder, "Heuristics for Vector Bin Packing", Microsoft Research Report, 2011.
[13] Singh, A., M. Korupolu, et al., "Server-storage virtualization: integration and load balancing in data centers", in Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, Austin, Texas, IEEE Press, pp.1-12, 2008.
[14] Mishra, M. and A. Sahoo, "On Theory of VM Placement: Anomalies in Existing Methodologies and Their Mitigation Using a Novel Vector Based Approach", Cloud Computing (CLOUD), 2011 IEEE International Conference on, 2011.
[15] Mark Stillwell, Frederic Vivien, et al., "Virtual Machine Resource Allocation for Service Hosting on Heterogeneous Distributed Platforms", Parallel & Distributed Processing Symposium (IPDPS), 2012 IEEE International, 2012.
[16] Calheiros, R. N., R. Ranjan, et al., "CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms", Software: Practice and Experience, vol.41, no.1, pp.23-50, 2011.
[17] Mills, K., J. Filliben, et al., "Comparing VM-Placement Algorithms for On-Demand Clouds", Cloud Computing Technology and Science (CloudCom), 2011 IEEE Third International Conference on, 2011.
[18] Siyuan Jing, Kun She, "A Novel Model for Load Balancing in Cloud Data Center", Journal of Convergence Information Technology (JCIT), vol.6, no.4, pp.171-179, 2011.
[19] Yu Guo, Jiqiang Liu, Changxiang Shen, "Trusted Dynamic Self-confidence Migration of Cloud Service", International Journal of Advancements in Computing Technology (IJACT), vol.4, no.7, pp.92-101, 2012.
[20] Kansal, A., F. Zhao, et al., "Virtual machine power metering and provisioning", in Proceedings of the 1st ACM Symposium on Cloud Computing, Indianapolis, Indiana, USA, ACM, pp.39-50, 2010.
[21] SPECpower_ssj2008 website: http://www.spec.org/power_ssj2008/
[22] L. Minas and B. Ellison, "Energy Efficiency for Information Technology: How to Reduce Power Consumption in Servers and Data Centers", Intel Press, USA, 2009.
[23] Krishnan, B., H. Amur, et al., "VM power metering: feasibility and challenges", SIGMETRICS Performance Evaluation Review, vol.38, no.3, pp.56-60, 2011.
[24] Feo, T. A. and M. G. C. Resende, "Greedy Randomized Adaptive Search Procedures", Journal of Global Optimization, vol.6, no.2, pp.109-133, 1995.