Minimizing the Operational Cost of Data Centers via Geographical Electricity Price Diversity
2013 IEEE Sixth International Conference on Cloud Computing

Minimizing the Operational Cost of Data Centers via Geographical Electricity Price Diversity

Zichuan Xu, Weifa Liang
Research School of Computer Science, The Australian National University, Canberra, ACT 0200, Australia

Abstract: Data centers, serving as infrastructures for cloud services, are growing in both number and scale. However, they usually consume enormous amounts of electric power, which leads to high operational costs for cloud service providers. Reducing the operational cost of data centers has thus been recognized as a main challenge in cloud computing. In this paper we study the minimum operational cost problem of fair request rate allocations in a distributed cloud environment, incorporating the diversity of time-varying electricity prices in different regions. The objective is to fairly allocate requests to different data centers for processing while keeping the negotiated Service Level Agreements (SLAs) between users and the cloud service provider met, where the data centers and web portals of the cloud service provider are geographically located in different regions. To this end, we first propose an optimization framework for the problem. We then devise a fast approximation algorithm with a provable approximation ratio by exploiting combinatorial properties of the problem. We finally evaluate the performance of the proposed algorithm through simulations on real-life electricity price data sets. Experimental results demonstrate that the proposed algorithm is very promising: it not only outperforms existing heuristics but also is highly scalable.

I. INTRODUCTION

With the rapid development of processing and storage technologies and the success of the Internet, computing resources have become cheaper, more powerful, and more ubiquitously available than ever before.
This technological trend has enabled the realization of a new computing model called cloud computing, in which resources (e.g., software, platforms and infrastructures) are provided as general utilities that can be leased and released by users through the Internet in an on-demand fashion. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years, with large-scale data centers built in different regions by cloud service providers [8], [10]. Although cloud computing benefits users by freeing them from setting up and maintaining IT infrastructures, it increases the operational cost of cloud service providers due to the large quantity of electricity they consume. According to a McKinsey report [1], a typical data center consumes as much energy as 25,000 households per year; the electricity bill for data centers in 2010 was estimated at over $11 billion, a cost that almost doubles every five years; and electricity accounts for 30%-50% of the total operational cost of major cloud service providers [5]. To minimize the operational cost of data centers, many efforts have been made recently, and different approaches have been developed, including Dynamic Voltage and Frequency Scaling (DVFS), dynamic sizing of data centers, and exploiting electricity price diversity. Following electricity market policies, electricity prices vary over time (e.g., they change every 5 minutes to every hour) and are usually determined by the clearing prices of current supply and demand. This creates an opportunity for cloud service providers to reduce their operational costs by dynamically allocating user requests to the data centers with cheaper electricity at that moment.
However, shifting user requests and their results between the web portals and the data centers consumes a large amount of network bandwidth, and the cost of this bandwidth consumption must be taken into account [2]. In addition to the costs of electricity and network bandwidth consumption, it is very important for a cloud service provider to provide quality services in order to avoid the penalties of unfulfilled SLAs and the consequent revenue loss. In this paper we use the average service delay of each request as its QoS requirement, and refer to it as the delay requirement. Specifically, we deal with a fundamental operational cost optimization problem in a distributed cloud computing environment, aiming to maximize the system throughput while minimizing the operational cost and satisfying user SLAs, by exploiting electricity price diversity. The main contributions of this paper are summarized as follows. We consider minimizing the operational cost of a distributed cloud service provider whose data centers are geographically located in different regions. We first formulate a novel optimization problem, namely the minimum operational cost problem for fair request rate allocations. We then develop a combinatorial approximation algorithm with a provable approximation ratio, by reducing the problem to a minimum cost multicommodity flow problem. We finally conduct extensive simulation experiments to evaluate the performance of the proposed algorithm, using real-life electricity price data sets. Experimental results demonstrate that the proposed algorithm is effective and promising. To the best of our knowledge, this is the first fast approximate solution to the minimum operational cost problem in distributed cloud environments that exploits the combinatorial properties of the problem.
The proposed algorithm is not only highly scalable but also runs fast in response to dynamic
changes of electricity prices and request rates. In contrast, most existing solutions formulate this or similar optimization problems as a mixed integer program (MIP) and solve the MIP through randomized rounding or other heuristics. Thus, the accuracy of their solutions is either not guaranteed or achieved at the expense of long computation times. In the latter case, even if an exact solution is found, it may not be applicable due to the time-varying nature of electricity prices and user request rates. The remainder of the paper is organized as follows. Section II briefly introduces related work, followed by the system model, the problem definition, and a well-known approximation algorithm in Section III. An optimization framework and an approximation algorithm for the problem are proposed in Section IV. Section V evaluates the performance of the proposed algorithm through simulations, and the conclusion is given in Section VI.

II. RELATED WORK

A global cloud service provider (e.g., Google) usually deploys many data centers in different geographical regions for resource saving and convenience. To achieve energy efficiency for such cloud service providers, a promising approach is to exploit the time-varying electricity prices in the regions where the data centers are located. However, minimizing the energy consumption of geographically distributed data centers is essentially different from that of a single data center; this poses a new challenge for designing efficient energy management algorithms in such settings, and several efforts have been made in the past several years [5], [2], [6], [7], [6], [2], [20], [3], [4]. For example, Qureshi et al. [5] initiated the study by characterizing the energy expense per unit of computation due to fluctuating electricity prices.
They empirically showed that exploiting electricity price diversity may save millions of dollars per day. Buchbinder et al. [2] considered energy optimization through migrating jobs among data centers and devised an online algorithm to reduce the electricity bill. Guo et al. [6] made use of the Lyapunov optimization technique to reduce the electricity bill of data centers, using temporary energy storage such as UPS. Le et al. [2] devised a general framework to manage the usage of brown energy (produced via carbon-intensive means) and green energy (renewable energy), with the aim of reducing the environmental effects of the huge amounts of energy consumed by data centers. Zhang et al. [20] investigated the problem of geographical request allocation to maximize the usage of renewable energy under a given operation budget. Liu et al. [3], [4] dealt with the energy sustainability of data centers by exploring sources of renewable energy and geographical load balancing, and demonstrated the necessity of storing renewable energy. The studies most closely related to the work in this paper are due to Rao et al. [7], [6], [8]. They [7], [6] considered the minimum operational cost problem in distributed data centers by exploiting the diversity of electricity prices, formulating the problem as a mixed integer program (MIP) and providing a heuristic for it. They also proposed a flow-based algorithm for the MIP [8]. Most existing studies on geographical request allocation subject to diverse electricity prices and bandwidth costs focus on solving a constrained optimization problem with one or multiple constraints [7], [6], [8], [20]. However, finding an exact solution usually takes much longer due to its high computational complexity. MIP-based solutions are thus only suitable for small or medium network sizes and are not scalable.
Even if such a solution is found, it may not be applicable in reality due to the time-varying nature of both electricity prices and request rates in the system.

III. PRELIMINARIES

In this section we first introduce the system model and notations. We then define the problem precisely. We finally introduce a fast approximation algorithm for the minimum cost multicommodity flow problem, which will be used later.

A. System model

We consider a distributed cloud computing environment that consists of a set of geographically distributed data centers DC = {DC_i | 1 <= i <= N} and a set of web portals WP = {WP_j | 1 <= j <= M}. Each data center DC_i is equipped with homogeneous servers, with N_i = |DC_i| denoting the number of servers in DC_i and mu_i the service rate of each of its servers, for each i with 1 <= i <= N. Each web portal WP_j serves as a front-end server that directly receives requests from users and performs request rate allocation to data centers. Each data center and every web portal can communicate with each other through the Internet. Each user sends its requests to its nearby web portal, and the requests are then forwarded to different data centers. The requests allocated to a data center are initially stored in its M/M/n waiting queue [9] prior to being processed. To respond to time-varying request rates and electricity prices, time is assumed to be slotted into equal time slots, and the request rate allocation is performed at each time slot. Let r_j (an integer) be the request rate at web portal WP_j at time slot t. For convenience, in this paper we assume that each time slot is one hour, so that the servers in data centers are not turned on and off too often, given the significant wear-and-tear cost of power-cycling. Associated with each user request is a tolerable delay requirement, which to a certain extent represents the negotiated SLA between the user and the cloud service provider.
Given a data center DC_i containing N_i servers with average service rate mu_i per server, denote by r_{j,i} (an integer) the request rate allocated from web portal WP_j to data center DC_i. Let Pr_i be the probability that a request waits in the queue; then the average delay of a request in DC_i is [7], [3], [4]

    D_i(N_i, \sum_{j=1}^{M} r_{j,i}) = \frac{Pr_i}{N_i \mu_i - \sum_{j=1}^{M} r_{j,i}},    (1)
where we assume that there are always requests in the waiting queue of DC_i, i.e., Pr_i = 1. To meet the negotiated SLA of each request in DC_i, the average waiting time of all requests in it must be no greater than the user-tolerable delay. Otherwise, a late penalty is incurred due to the violation of the specified SLA. Worse still, the users may no longer use the service provided by the cloud service provider in the future.

B. Electricity cost model

Electricity markets typically have both regulated utility and deregulated wholesale structures. In the regulated electricity market, the electricity price is fixed during a day and may have on-peak and off-peak prices, while in the deregulated electricity market, the electricity price varies over time. Denote by p_i(t) the electricity price per unit of energy at time slot t in the region where data center DC_i is located. The electricity cost incurred by data center DC_i for the duration of time slot t is determined by the amount of power it consumes and the electricity price. Let E_i(t) be the total energy consumption of data center DC_i at time slot t. Assuming that the load among the servers in each data center is well balanced, E_i(t) is proportional to the number of requests allocated to DC_i. Let alpha_i be the average energy consumed per request in DC_i, which is a constant. Then the energy consumption of data center DC_i at time slot t is

    E_i(t) = \alpha_i \sum_{j=1}^{M} r_{j,i},    (2)

and the electricity cost at data center DC_i at time slot t is

    p_i(t) E_i(t) = p_i(t) \alpha_i \sum_{j=1}^{M} r_{j,i}.    (3)

C. Network bandwidth cost

Most existing studies assumed that allocating requests to data centers is transparent and does not incur any cost on communication bandwidth usage, as stateless requests from the web portals normally consist of simple jobs and can be enclosed in small data packets.
However, a cloud service provider may lease bandwidth from an ISP for its data transfers between the web portals and the data centers, and a charge for occupying the bandwidth during the acquisition or construction phase of communication links (e.g., TCP/IP links) will apply [2]. We thus assume that the bandwidth consumption cost is proportional to the number of requests. Since the cloud service provider may lease bandwidth with different data transfer rates between web portals and data centers according to the request loads at the web portals, the cost of transferring a request from different web portals differs. Let p^b_j denote the bandwidth cost of transferring a single request from WP_j to any data center. Then, the total cost of transferring requests from web portal WP_j to all data centers is

    p^b_j \sum_{i=1}^{N} r_{j,i}.    (4)

D. Problem definition

The minimum operational cost problem for fair request rate allocations at time slot t is to allocate a fractional request rate r_{j,i} from each web portal WP_j to each data center DC_i such that the operational cost

    \sum_{i=1}^{N} p_i(t) \alpha_i \sum_{j=1}^{M} r_{j,i} + \sum_{j=1}^{M} p^b_j \sum_{i=1}^{N} r_{j,i}

is minimized while the system throughput \sum_{i=1}^{N} \sum_{j=1}^{M} r_{j,i} is maximized (determining, equivalently, the number of servers n_i to be switched on at each DC_i) and all allocated requests are served within their SLAs, subject to \sum_{i=1}^{N} r_{j,i} <= r_j and 0 <= n_i <= N_i for all i and j with 1 <= i <= N and 1 <= j <= M. In other words, the objective is to make each web portal have the same proportion lambda of its requests served, with this proportion as large as possible while the associated cost is minimized, i.e., to maximize lambda such that \sum_{j=1}^{M} \lambda r_j = \sum_{i=1}^{N} \sum_{j=1}^{M} r_{j,i}, where 0 <= lambda <= 1.

E.
The minimum cost multicommodity flow problem

Given a directed graph G(V, E; u, c) with capacities u : E -> R^+ and costs c : E -> R^{>=0}, assume that there are k source-destination pairs (s_i, t_i; d_i), where d_i is the amount of demand to be routed from source s_i to destination t_i, for all i with 1 <= i <= k. Let B (> 0) be a given budget and f_i the amount of flow sent from s_i to t_i. The minimum cost multicommodity flow problem in G is to find the largest lambda such that there is a multicommodity flow routing lambda d_i units of commodity i from s_i to t_i for each i with 1 <= i <= k, subject to the flow constraints and the budget constraint \sum_{e \in E} c(e) f(e) <= B, where f(e) = \sum_{i=1}^{k} f_i(e). The optimization framework given by Garg and Könemann first formulates the problem as a linear program LP, then finds an approximate solution to its dual DP, which in turn yields an approximate solution to the original problem. Let P_i be the set of paths in G from s_i to t_i, and let P = \bigcup_{i=1}^{k} P_i. Variable f_p represents the flow on path p in P. The linear programming formulation of the problem is

LP:  max lambda
s.t. \sum_{p : e \in p} f_p <= u(e),    for all e in E,
     \sum_{i=1}^{k} \sum_{p \in P_i} \sum_{e \in p} f_p c(e) <= B,
     \sum_{p \in P_i} f_p >= \lambda d_i,    1 <= i <= k,
     f_p >= 0 for all p in P,    0 <= lambda <= 1.

The dual DP of LP is as follows, where l(e) is the length of edge e in E and phi is viewed as the length of the cost constraint B.

DP:  min D(l, phi) = \sum_{e \in E} l(e) u(e) + B \phi
s.t. \sum_{e \in p} (l(e) + c(e) \phi) >= z_i,    for all p in P_i, 1 <= i <= k,
     \sum_{i=1}^{k} d_i z_i >= 1,
     l(e) >= 0 for all e in E,    z_i >= 0, 1 <= i <= k.

Specifically, Garg and Könemann's optimization framework for the DP proceeds in a number of phases, each composed of exactly k iterations corresponding to the k
commodities. Within each iteration there are a number of steps. Initially, l(e) = delta / u(e) for each e in E, phi = delta / B, and z_i = min_{p \in P_i} {l(p) + c(p) phi}, where c(p) = \sum_{e \in p} c(e), delta = (1 + epsilon)((1 + epsilon)|E|)^{-1/epsilon}, and epsilon is the increase step of the length function in each iteration. In one phase, letting i be the current iteration, the algorithm routes d_i units of flow of commodity i from s_i to t_i within a number of steps. In each step, it routes as much of commodity i as possible along a shortest path p from s_i to t_i, i.e., one with minimum l(p) + c(p) phi. The amount of flow sent on p is the minimum u_min among three values: the bottleneck capacity of p, the remaining demand d_i of commodity i, and B / c(p). Once the amount u_min has been routed on p, the dual variables l and phi are updated: l(e) = l(e)(1 + epsilon * u_min / u(e)) for each e in p, and phi = phi(1 + epsilon * u_min * c(p) / B). The algorithm terminates when the value of the objective function D(l, phi) >= 1. A feasible flow is finally obtained by scaling the flow by log_{1+epsilon} ((1+epsilon)/delta) [7].

Theorem 1 (see [7]): There is an approximation algorithm for the minimum cost multicommodity flow problem in G(V, E; u, c) with k commodities to be routed from their sources to their destinations, which delivers a solution with an approximation ratio of (1 - 3 epsilon) while the associated cost is the minimum. The algorithm takes O*(epsilon^{-2} m (k + m)) time, where m = |E| and epsilon is a given constant with 0 < epsilon <= 1/3.

IV. AN APPROXIMATION ALGORITHM FOR THE MINIMUM OPERATIONAL COST PROBLEM

In this section we first propose a novel optimization framework for the minimum operational cost problem. We then develop a fast approximate solution to the problem based on this framework, through a reduction to a minimum cost multicommodity flow problem whose approximate solution yields a feasible solution to the original problem.

A.
An optimization framework

Given a distributed cloud computing environment, the goal is to allocate user requests from different web portals to different data centers such that the system throughput is maximized while the operational cost of processing the requests is minimized. An auxiliary flow network G_f = (V_f, E_f, u_f, c_f) at time slot t is constructed as follows. V_f = {DC_i | 1 <= i <= N} U {WP_j | 1 <= j <= M} U {s_0, t_0} is the set of nodes, where s_0 and t_0 are the virtual source and destination nodes. E_f = {<s_0, WP_j> | 1 <= j <= M} U {<WP_j, DC_i> | 1 <= j <= M, 1 <= i <= N} U {<DC_i, t_0> | 1 <= i <= N} is the set of directed edges. The capacity and cost of each edge in E_f are defined as follows. For each edge <s_0, WP_j> with WP_j in WP, its capacity u(s_0, WP_j) is the number of requests r_j submitted to web portal WP_j at time slot t, and its cost is zero. For each edge <WP_j, DC_i> with WP_j in WP and DC_i in DC, its capacity is infinity and its cost is the sum of the cost of the network bandwidth consumed between WP_j and DC_i and the cost of the electricity consumed for processing a request, which is p_i(t) alpha_i + p^b_j. (Throughout, O*(f(n)) = O(f(n) log^{O(1)} n).) For each edge <DC_i, t_0> with DC_i in DC, its cost is zero and its capacity u(DC_i, t_0) is the maximum number of requests that can be processed by DC_i at time slot t without violating the average delay requirement D_i of these requests, assuming that the service rate of each server is mu_i. The key here is how to translate this delay requirement into an edge capacity corresponding to the processing ability of DC_i. Notice that the number of requests waiting in the queue of DC_i must be limited, as a longer queue leads to a longer waiting time, and the average delay constraint D_i may not be met. To prevent this, the number of requests allocated to DC_i, \sum_{j=1}^{M} r_{j,i}, must be upper bounded as follows:

    \sum_{j=1}^{M} r_{j,i} <= N_i \mu_i - 1/D_i,    (5)

where N_i is the number of servers used for processing the requests with service rate mu_i.
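As a sanity check on the model, the delay of Eq. (1) (under the assumption Pr_i = 1) and the capacity bound of inequality (5) can be sketched in a few lines; the function names and the numbers in the example are illustrative, not from the paper.

```python
def avg_delay(n_servers, mu, total_rate, pr=1.0):
    """Eq. (1): average delay in an M/M/n data center,
    Pr_i / (N_i * mu_i - sum_j r_{j,i}), assuming Pr_i = 1."""
    spare = n_servers * mu - total_rate
    if spare <= 0:
        raise ValueError("arrival rate exceeds service capacity")
    return pr / spare

def capacity_bound(n_servers, mu, delay_bound):
    """Inequality (5): the largest total rate DC_i can accept while
    keeping the average delay within D_i, i.e. N_i * mu_i - 1/D_i."""
    return n_servers * mu - 1.0 / delay_bound

# 15,000 servers at mu = 3 requests/s and a 0.5 s delay bound:
print(capacity_bound(15000, 3.0, 0.5))   # 44998.0 requests/s
print(avg_delay(15000, 3.0, 44998.0))    # exactly the bound: 0.5 s
```

Note that a data center loaded exactly to the bound of (5) attains average delay D_i, which is why (5) can serve directly as an edge capacity below.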
The capacity of edge <DC_i, t_0> thus is u(DC_i, t_0) = N_i mu_i - 1/D_i for all i with 1 <= i <= N. Fig. 1 illustrates the flow network G_f.

Fig. 1. Flow network G_f = (V_f, E_f, u, c): edge <s_0, WP_j> has capacity r_j and cost 0, edge <WP_j, DC_i> has unbounded capacity and cost p_i(t) alpha_i + p^b_j, and edge <DC_i, t_0> has capacity N_i mu_i - 1/D_i and cost 0.

B. Algorithm

We now propose a fast approximate solution to the minimum operational cost problem in a distributed cloud computing environment, through a reduction to the minimum cost multicommodity flow problem in G_f, where there are M commodities to be routed from their source nodes WP_j to a common destination node t_0 with demands r_j, represented by triples (WP_j, t_0; r_j) for all j with 1 <= j <= M, such that the system throughput \sum_{j=1}^{M} \lambda r_j is maximized while the associated operational cost is minimized, where 0 <= lambda <= 1. Following Garg and Könemann's optimization framework for the minimum cost multicommodity flow problem [7], the value of lambda can be obtained, along with a flow f_j of value |f_j| = lambda r_j for each commodity j. Since the flow obtained may exceed the capacity of the most congested edge in G_f by a factor of at most log_{1+epsilon} ((1+epsilon)/delta), a feasible flow f'_j with value |f'_j| = |f_j| / log_{1+epsilon} ((1+epsilon)/delta) is obtained by scaling flow f_j by log_{1+epsilon} ((1+epsilon)/delta) for all j with 1 <= j <= M. Having the feasible flow f'(e) on each edge e in E_f, the number n_i of servers to be switched on in each DC_i can then be determined as n_i = ceil( \sum_{j=1}^{M} f'(WP_j, DC_i) / mu_i + 1/(mu_i D_i) ), while the average delay
D_i of all allocated requests will be met. The minimum cost for the feasible flow f' thus is \sum_{e \in E_f} c(e) f'(e). The detailed description is given in Algorithm 1.

Algorithm 1  Request Allocations at time slot t
Input: a set of data centers {DC_i | 1 <= i <= N}, a set of web portals {WP_j | 1 <= j <= M}, the request arrival rate r_j at web portal WP_j for each j with 1 <= j <= M, the set {p_i(t) | 1 <= i <= N} of electricity prices of the data centers, the set {p^b_j | 1 <= j <= M} of bandwidth costs, and the accuracy parameter epsilon with 0 < epsilon <= 1/3.
Output: the minimum operational cost C, the request assignment {r_{j,i} | 1 <= j <= M, 1 <= i <= N} such that \sum_{i=1}^{N} \sum_{j=1}^{M} r_{j,i} is maximized, and the number n_i (<= N_i) of servers to be switched on in each DC_i, where \sum_{i=1}^{N} r_{j,i} <= r_j.
1: Construct the auxiliary flow network G_f, in which there are M commodities {(WP_j, t_0; r_j) | 1 <= j <= M} to be routed from their sources to the destination t_0;
2: B <- \sum_{j=1}^{M} r_j (max_{1 <= i <= N} {p_i(t) alpha_i} + p^b_j); /* an upper bound on the total cost */
3: Let f'_j be the feasible flow of each commodity j delivered by applying Garg and Könemann's algorithm for the minimum cost multicommodity flow problem to G_f, and let f'(e) be the fraction of the feasible flow on each edge e in E_f;
4: for each web portal WP_j in WP do
5:   for each data center DC_i in DC do
6:     r_{j,i} <- f'(WP_j, DC_i);
7:   end for
8: end for
9: n_i <- ceil( \sum_{j=1}^{M} f'(WP_j, DC_i) / mu_i + 1/(mu_i D_i) ); /* the number of servers in DC_i to be turned on */
10: C <- \sum_{e \in E_f} c(e) f'(e).

C. Algorithm complexity analysis and correctness

In the following we first show that the average delay of any request allocated to data center DC_i is no greater than D_i, for all i with 1 <= i <= N. We then analyze the performance and computational complexity of the proposed algorithm.

Lemma 1: For each request from any web portal allocated to a data center DC_i, the average delay of the request in DC_i is no more than D_i, for all i with 1 <= i <= N.

Proof: Following Eq.
(5), the capacity u(e_i) of each edge e_i = <DC_i, t_0> in E_f represents the maximum number of requests that can be served by DC_i while their average delay requirement D_i is met. We show the claim by contradiction. Let f'(e_i) be the fraction of the feasible flow f' on edge e_i; then f'(e_i) <= u(e_i) = N_i mu_i - 1/D_i by the flow constraints. Assume that at least one request among the f'(e_i) requests cannot be served by DC_i within delay D_i, which means the average delay of the requests in DC_i is strictly greater than D_i even when all its servers are switched on, i.e., f'(e_i) > u(e_i) = N_i mu_i - 1/D_i. This contradicts the fact that f'(e_i) <= u(e_i) = N_i mu_i - 1/D_i; the lemma then follows.

Theorem 2: Given a distributed cloud computing environment consisting of M web portals and N data centers located in different geographical regions with time-varying electricity prices, there is a fast approximation algorithm for the minimum operational cost problem for fair request rate allocations, which delivers a system throughput no less than (1 - 3 epsilon) times the optimum while its cost is the minimum. The algorithm takes O*(epsilon^{-2} M^2 N^2) time, where epsilon is a given constant with 0 < epsilon <= 1/3.

Proof: Since the auxiliary flow network G_f(V_f, E_f, u, c) consists of |V_f| = M + N + 2 nodes and |E_f| = MN + M + N edges, the construction of G_f takes O(|V_f| + |E_f|) time. Following Theorem 1, Garg and Könemann's algorithm takes O*(epsilon^{-2} M^2 N^2) time to solve the minimum cost multicommodity flow problem in G_f, where there are M commodities to be routed from their sources to a common destination t_0, and the system throughput obtained is (1 - 3 epsilon) times the optimum. Thus, Algorithm 1 takes O*(epsilon^{-2} M^2 N^2) time and delivers a solution that is (1 - 3 epsilon) times the optimum in terms of the system throughput.
Following the definition of the minimum cost multicommodity flow problem, the flow f' is a feasible solution to that problem, and B is an upper bound on the optimal budget; thus, the cost associated with the feasible flow f' is the minimum one. In the rest of the proof we show that the cost of f' is no more than the cost C* of the optimal flow f* for the minimum operational cost problem. Let P*(WP_j, t_0) be the set of routing paths of the optimal flow f*_j, where each routing path p in P*(WP_j, t_0) routes part of the demand r_j from WP_j to t_0, and let P* = \bigcup_{j=1}^{M} P*(WP_j, t_0). Let f'_j be the feasible flow obtained by Algorithm 1, and let P*^{(1)}(WP_j, t_0) be a subset of P*(WP_j, t_0) such that the value of the flow on its routing paths equals |f'_j|, i.e., the same fraction of the demand r_j as in the feasible flow f'_j is routed from WP_j to t_0. Note that one of the routing paths in P*^{(1)}(WP_j, t_0) may need to reduce its flow in order to reach the value |f'_j| exactly. We then have

    \sum_{e \in E_f} f'(e) c(e) <= \sum_{j=1}^{M} \sum_{p \in P*^{(1)}(WP_j, t_0)} \sum_{e \in p} f*^{(1)}(e) c(e),    (6)

as the feasible flow f' delivered by Algorithm 1 is a minimum cost multicommodity flow, where f*^{(1)}(e) is the sum of the flows of \bigcup_{j=1}^{M} P*^{(1)}(WP_j, t_0) on edge e in E_f. Meanwhile,

    \sum_{j=1}^{M} \sum_{p \in P*^{(1)}(WP_j, t_0)} \sum_{e \in p} f*^{(1)}(e) c(e) <= \sum_{j=1}^{M} \sum_{p \in P*(WP_j, t_0)} \sum_{e \in p} f*(e) c(e) = C*,

because the left-hand side is part of the right-hand side. Thus, \sum_{e \in E_f} f'(e) c(e) <= C*, and the cost associated with the feasible flow f' is the minimum.
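To make the reduction concrete, the following sketch constructs the auxiliary network G_f of Section IV with the edge capacities and costs defined above, and evaluates the operational cost of a given flow. It is a minimal illustration under our own naming (node labels, function names, and the example numbers are not from the paper).

```python
def build_gf(r, prices, alphas, bw_costs, n_servers, mu, delay):
    """Construct G_f = (V_f, E_f, u, c) as {(u_node, v_node): (capacity, cost)}
    for M = len(r) web portals and N = len(prices) data centers."""
    edges = {}
    for j, rate in enumerate(r):
        edges[("s0", f"WP{j}")] = (rate, 0.0)  # capacity r_j, cost 0
        for i, price in enumerate(prices):
            # unbounded capacity; per-request electricity plus bandwidth cost
            edges[(f"WP{j}", f"DC{i}")] = (float("inf"),
                                           price * alphas[i] + bw_costs[j])
    for i in range(len(prices)):
        # capacity from inequality (5): N_i * mu_i - 1/D_i, cost 0
        edges[(f"DC{i}", "t0")] = (n_servers[i] * mu[i] - 1.0 / delay[i], 0.0)
    return edges

def flow_cost(edges, flow):
    """Operational cost of a flow {edge: value}: sum_e c(e) * f(e)."""
    return sum(edges[e][1] * v for e, v in flow.items())

# One portal (r_1 = 100), one data center (10 servers, mu = 3, D = 0.5 s),
# electricity price 50 per unit, alpha = 1 unit/request, bandwidth cost 2:
gf = build_gf([100], [50.0], [1.0], [2.0], [10], [3.0], [0.5])
print(len(gf))                                # 3 edges: M + M*N + N
print(gf[("DC0", "t0")][0])                   # 10*3 - 1/0.5 = 28.0
print(flow_cost(gf, {("WP0", "DC0"): 100}))   # (50*1 + 2)*100 = 5200.0
```

With M web portals and N data centers, the dictionary holds M + MN + N edges, matching |E_f| in the proof of Theorem 2.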
V. PERFORMANCE EVALUATION

In this section we evaluate the performance of the proposed algorithm through simulations, using real-life electricity price data sets.

A. Simulation environment

We consider a cloud environment consisting of five data centers and six web portals, where each data center hosts 15,000 to 25,000 homogeneous servers with an identical operating power of 350 Watts. The service rates mu_i of servers in different data centers differ, varying from 2.75 to 3.25 requests/second, while the bandwidth cost of allocating a request from a web portal to a data center varies from $0.03 to $0.06 per hour. The maximum request rates of the web portals are from 60,000 to 70,000. Table I summarizes the properties of the data centers and web portals. The accuracy parameter epsilon is set to 0.1. The running times are measured on a desktop with a 2.66 GHz Intel Core 2 Quad CPU and 8 GB RAM. Unless otherwise specified, we adopt this default setting in the following and refer to the proposed algorithm as algorithm Fair.

TABLE I
DATA CENTERS AND WEB PORTALS

Data center    Location              Number of machines    Operating power (Watts)
DC_1           Mountain View, CA     15,000                350
DC_2           Council Bluffs, IA    15,000                350
DC_3           Boston, MA            20,000                350
DC_4           Houston, TX           25,000                350
DC_5           Lenoir, NC            25,000                350

Web portal     Location         Maximum request rate
WP_1           Seattle          70,000
WP_2           San Francisco    70,000
WP_3           Dallas           63,000
WP_4           Chicago          63,000
WP_5           New Mexico       60,000
WP_6           Denver           60,000

Fig. 2. Two types of request arrival patterns at each web portal.

The request model: In real applications, the request rate of a web portal during a day usually starts to rise at around 9:00am, reaches a peak at around 12:00pm, and levels off before 6:00pm. This request arrival pattern is similar to a lognormal process [] and is referred to as the lognormal request arrival pattern. However, due to unexpected social events, a web portal may receive burst requests during a certain time period.
These burst requests usually exceed the processing capability of the data centers, and the request rate may not change during that period. We refer to this type of request arrival pattern as the uniform request arrival pattern. We evaluate the performance of the proposed algorithm using these two request arrival patterns, as illustrated in Fig. 2. In our simulation, the curves of request arrival rates at the web portals are shifted according to their time zones.

Fig. 3. Electricity prices (dollars per MWh, time in GMT-8) of the five data centers listed in Table I.

Electricity prices for data centers: The average electricity price per hour at each data center is calculated from the raw electricity price data obtained from US government agencies [4], [5], and the electricity prices at the data centers are shifted according to their time zones. Fig. 3 depicts the electricity price curves of the five data centers listed in Table I.

B. Algorithm performance evaluation

We first investigate the operational cost of the solution delivered by algorithm Fair. The average operational costs of algorithm Fair over 24 hours are about $1.9 million and $3.3 million for the lognormal and uniform request arrival patterns, respectively. To further investigate whether this cost C is optimal, we decrease the budget B in algorithm Fair to a certain fraction beta of the current cost C, where beta varies downward from 0.95. With this new budget B = beta * C, if algorithm Fair is able to deliver another feasible solution while maintaining the current system throughput, then the cost C is not optimal. Fig. 4 depicts the impact of beta on the system throughput, from which it can clearly be seen that when the budget B drops below C, the system throughput drops below its current level accordingly. Thus, the operational cost C is optimal.
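The two request arrival patterns of Fig. 2 can be generated with a short sketch. The lognormal shape below (its mu and sigma parameters, and the mid-hour sampling) is our own illustrative choice, normalized so that the peak hour hits the portal's maximum request rate; the paper specifies only the qualitative shape.

```python
import math

def lognormal_pattern(peak_rate, hours=24, mu=math.log(12), sigma=0.35):
    """Hourly rates following a lognormal-shaped curve that rises in the
    morning, peaks around noon, and levels off in the evening."""
    def pdf(h):
        return (math.exp(-(math.log(h) - mu) ** 2 / (2 * sigma ** 2))
                / (h * sigma * math.sqrt(2 * math.pi)))
    dens = [pdf(h + 0.5) for h in range(hours)]   # sample at mid-hour
    top = max(dens)
    return [peak_rate * d / top for d in dens]

def uniform_pattern(burst_rate, hours=24):
    """Burst (uniform) pattern: a constant rate over the whole period."""
    return [burst_rate] * hours

rates = lognormal_pattern(70000)
print(max(rates))               # 70000.0 at the peak hour
print(rates.index(max(rates)))  # peak lands close to noon
```

In the simulation these per-portal curves would additionally be shifted by each portal's time zone, as noted above.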
We then evaluate the performance of algorithm Fair under both the lognormal and uniform request arrival patterns, incorporating time-varying electricity price diversity.
Fig. 5. The performance of algorithm Fair with time-varying electricity prices: (a) the number of active servers under the lognormal request arrival pattern; (b) the number of active servers under the uniform request arrival pattern; (c) the hourly operational cost.

Fig. 4. The impact of the budget B = beta * C on the system throughput, where C is the cost of the solution delivered by Fair when the budget B >= C.

Fig. 5 (a) shows that under the lognormal request arrival pattern, although the data center located in Lenoir has nearly the same processing capability as the data center in Houston, fewer of its servers are switched on due to the higher electricity price there. Fig. 5 (b) indicates that under the uniform request arrival pattern, almost all servers in all data centers are switched on, as the total number of requests from all web portals is greater than the aggregate processing capability of the data centers. Fig. 5 (c) shows that for both request arrival patterns, the operational costs vary over time. In particular, the operational costs are much higher from 10am to 6pm, since the electricity prices during this period are normally higher than in other periods, as shown in Fig. 3. We next evaluate the system throughput and fairness provided by algorithm Fair by varying the accuracy parameter epsilon from 0.05 to 0.25.
Notice that with this setting, all requests under the lognormal request arrival pattern can be processed immediately by the data centers, as their aggregate processing capability is much larger than the accumulated request load from all web portals. The rest of the evaluation thus focuses only on the performance of algorithm Fair under the uniform request arrival pattern. To investigate how far the system throughput delivered by algorithm Fair is from the optimal one OPT, we use the maximum flow in G_f from s_0 to t_0 as an upper bound on OPT. It must be mentioned that this upper bound is very conservative, because it does not take the budget constraint into account. Fig. 6(a) plots the curve of the system throughput, while Fig. 6(b) depicts the throughput approximation ratio of algorithm Fair. From Fig. 6(b) it can be seen that the throughput ratio of algorithm Fair is no less than 0.96 when ɛ = 0.05, and this value decreases with the increase of ɛ. The running time of algorithm Fair also varies with the accuracy value ɛ, as illustrated in Fig. 6(c): clearly, a larger ɛ results in a shorter running time. Fig. 7 depicts the percentage of requests from each web portal WP_j allocated to data centers, λ_j = Σ_{i=1}^{N} r_{j,i} / r_j, delivered by algorithm Fair under the uniform request arrival pattern. It can be seen that the solution delivered by the algorithm maintains request rate fairness among the web portals.

Fig. 7. The percentage of requests from each web portal (Seattle, San Francisco, Dallas, Chicago, New Mexico, and Denver) allocated to data centers, for ɛ = 0.05, 0.1, 0.15, 0.2, and 0.25.

We finally study the scalability of algorithm Fair by assuming a virtual large-scale cloud service provider consisting of 10 to 20 data centers randomly deployed in some of 48 potential states in the United States, with one data center deployed in each chosen state.
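The max-flow upper bound on OPT can be computed with any augmenting-path algorithm. The sketch below builds a small flow network s_0 → web portals → data centers → t_0 and runs Edmonds–Karp; the portal request rates and data center capacities are made-up toy values, not the paper's instance, and the construction ignores the budget constraint, exactly as the conservative bound described above does.

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow via shortest augmenting paths; cap is an n x n capacity matrix."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:            # no augmenting path left
            return total, flow
        aug, v = float("inf"), t       # bottleneck along the path found
        while v != s:
            aug = min(aug, cap[parent[v]][v] - flow[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            flow[parent[v]][v] += aug
            flow[v][parent[v]] -= aug
            v = parent[v]
        total += aug

# Nodes: 0 = s0, 1-2 = web portals, 3-4 = data centers, 5 = t0.
INF = 10 ** 9
r = [100, 80]        # portal request rates (capacities of edges s0 -> portal)
c = [120, 50]        # data center processing capacities (edges dc -> t0)
cap = [[0] * 6 for _ in range(6)]
cap[0][1], cap[0][2] = r
cap[3][5], cap[4][5] = c
for p in (1, 2):
    for d in (3, 4):
        cap[p][d] = INF  # every portal may send requests to every data center
bound, flow = edmonds_karp(cap, 0, 5)
print(bound)  # 170 = min(sum(r), sum(c)): the upper bound on OPT
lam = [flow[0][j + 1] / r[j] for j in range(2)]  # per-portal fraction λ_j
```

The per-portal fractions λ_j can then be read off the flow on the s_0 → portal edges, as in the fairness plot of Fig. 7.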
Fig. 6. The performance of algorithm Fair under the uniform request arrival pattern with different accuracy values ɛ: (a) the system throughput of Fair against the upper bound OPT; (b) the throughput approximation ratio; (c) the running time.

We evaluate the scalability of algorithm Fair by varying the number of data centers from 10 to 20 while keeping the request rate of each web portal unchanged. We set ɛ to 0.1. The electricity price of each data center is randomly selected from a given set of electricity prices, and the time zone of each data center is randomly chosen from GMT-8 to GMT-5. Fig. 8 indicates that algorithm Fair takes at most 8 seconds and 4 seconds to find a solution allocating all incoming requests during 24 hours under the uniform and lognormal request arrival patterns, respectively.

Fig. 8. The scalability of algorithm Fair under both lognormal and uniform request arrival patterns.

VI. CONCLUSION

In this paper, we considered the minimum operational cost problem for fair request rate allocations in a distributed cloud service environment, incorporating time-varying electricity prices and request rates. We first proposed an optimization framework for the problem. We then developed a fast approximation algorithm with a provable approximation ratio. We finally conducted extensive simulation experiments to evaluate the performance of the proposed algorithm, using real-life electricity price data sets. The experimental results demonstrate that the proposed algorithm is very promising.
