Cost Minimization using Renewable Cooling and Thermal Energy Storage in CDNs




Stephen Lee, College of Information and Computer Sciences, UMass Amherst (stephenlee@cs.umass.edu)
Rahul Urgaonkar, IBM Research (rurgaon@us.ibm.com)
Ramesh Sitaraman and Prashant Shenoy, College of Information and Computer Sciences, UMass Amherst ({ramesh,shenoy}@cs.umass.edu)

Abstract: Content delivery networks (CDNs) employ hundreds of data centers that are distributed across various geographical locations. These data centers consume a significant amount of energy to power and cool their servers. This paper investigates the joint effectiveness of using two new cooling technologies - open air cooling (OAC) and thermal energy storage (TES) - in CDNs to reduce their dependence on traditional chiller-based cooling and minimize their energy costs. Our Lyapunov-based online algorithm optimally distributes workload to data centers, leveraging price and weather variations. We conduct a trace-based simulation using weather data from NOAA and workload data from a global CDN. Our results show that CDNs can achieve at least 64% and 98% cooling energy savings during summer and winter respectively. Further, CDNs can significantly reduce their cooling energy footprint by switching to renewable open air cooling. We also empirically evaluate our approach and show that it performs optimally.

Keywords: Content delivery networks; cooling energy

I. INTRODUCTION

Modern content delivery networks (CDNs) allow content providers of web-based services to efficiently deliver their content to end-users through a global network of servers. From an architectural standpoint, the servers of a CDN are organized into clusters that are housed in a distributed set of data centers across the globe. Given their immense sizes, the data centers of a CDN consume significant amounts of energy, and as a result, the design of green CDNs that reduce this energy usage has gained recent research attention [6]. CDNs, and data centers in general, consume significant amounts of energy to power their servers and cool them.
While techniques for reducing the energy consumed by data center servers have received significant attention [3], [13], techniques for reducing the energy spent in cooling the servers of a CDN are less well studied. The topic is nevertheless important since cooling energy represents a significant portion of the total energy usage within data centers: in some cases, up to a watt of cooling energy is needed for each watt consumed by the servers [8]. In an effort to reduce their cooling energy, companies such as Facebook have made remarkable progress by switching to renewable sources for their cooling energy needs. By using renewable cooling sources, rather than expensive chillers, they have significantly reduced the energy needed to cool their data center servers and achieved a Power Usage Effectiveness (PUE) as low as 1.07 [1]. Based on these emerging trends, in this paper we focus on using alternate technologies, including renewables, for cooling the servers in the distributed data centers of a CDN to achieve reductions in cooling energy usage as well as energy costs. We study two complementary cooling technologies, renewable open air cooling (OAC) and thermal energy storage (TES), to achieve these goals. In the case of renewable OAC, outside air is directly used to cool the servers of the data center, instead of relying on traditional chiller-based techniques. Since open air cooling is largely free, it can achieve significant reductions in cooling energy costs. However, the effectiveness of the approach depends on outside weather conditions: the outside temperature and humidity must permit its use, and it may not be feasible to employ OAC during extreme weather conditions such as very hot or humid summer days. Thermal energy storage (TES) is a complementary cooling technology where thermal energy is stored by the data center in chilled water or chilled ice tanks, and this stored energy is used to cool the data center servers when needed, e.g., when OAC becomes infeasible. While TES has not seen much use in data center scenarios, it is common in other settings such as manufacturing plants [14].
The use of thermal energy storage as a failover option for OAC is similar to the use of UPS batteries, a form of chemical energy storage, as a failover option to power servers when grid power becomes unavailable. Thermal energy storage techniques can also be used to optimize cooling energy costs by storing energy when electricity prices are low and using the stored thermal energy during peak hours when prices are high. Thus, OAC and TES are complementary technologies, and both have the potential to significantly reduce the reliance on expensive chiller-based cooling technologies in data centers and CDNs. A CDN has significant flexibility at its disposal in exploiting these technologies. Typically, CDN content is replicated at multiple data centers for reasons of availability and performance. In such a scenario, the CDN routes user requests to the nearest data center that has the requested content to optimize user-perceived performance. In the presence of OAC and TES, the decision on which data centers to use for servicing user requests can be made based on both energy and performance considerations. When the weather permits the use of OAC, requests continue to be served by the nearest data center as before. When the weather does not permit the use of OAC at a particular CDN data center, the CDN can dynamically decide between one of two options: it can switch over to the use of TES at such sites, or it can redirect requests to another data center in the region where the local weather allows the use of OAC. Further, if a data center is low on stored thermal energy and none of the nearby data centers can employ OAC, the CDN can also choose to redirect requests to nearby data centers that have available TES capacity. Clearly, the decision on which CDN data center to use for incoming requests, and whether to use OAC or TES, must be made online and dynamically in an autonomous fashion. To do so, we need an online adaptive approach that determines how to best use OAC and TES at the various CDN data centers while ensuring that user-perceived performance guarantees are met.

Our contributions: In this paper, we propose a new Lyapunov optimization-based distributed algorithm that integrates the use of OAC and TES for cooling the servers of a CDN. Our approach makes online decisions on how to adaptively route requests to maximize the use of OAC, and intelligently uses TES when OAC is infeasible within a region; the algorithm also makes intelligent decisions on when to charge the TES and when to use the stored thermal energy based on variable electricity pricing schemes. While Lyapunov-based approaches have previously been proposed for using batteries to optimize server power usage [3], their use in TES settings for cooling servers, and the integration of OAC with TES, have not been studied previously. We evaluate our algorithm using real workload traces from Akamai's CDN along with real weather and electricity pricing data from various locations. Our results show that CDNs can achieve at least 64% and 98% cooling energy savings during summer and winter respectively (Figure 1). Further, we show that load redirection can be restricted to minimally impact performance. We provide theoretical bounds on our algorithm's performance (Theorem 4.1), and empirically compare its performance with the offline optimal (Figure 2).

II. BACKGROUND

A. Content Delivery Network

A content delivery network (CDN) comprises thousands of servers, organized into a distributed network of data centers spread across the globe. A CDN can leverage its proximity to end users to deliver content while minimizing latency and loss.
Architecturally, a CDN employs a two-level load balancing algorithm that first maps incoming requests to a particular data center and then maps each request to a particular server within that data center. The first-level load balancer, referred to as the global load balancer, will typically choose the data center that is closest to the end-user to optimize user-perceived performance. Our work enhances the global load balancer to make it energy aware, specifically to take advantage of new cooling technologies when determining how to map requests to data centers, while continuing to optimize user performance. Further, we note that while some CDNs may not have control over the cooling technologies used in their data centers, we assume that a future shift towards modular self-contained data centers may provide CDNs greater flexibility in adopting these advanced cooling technologies, even in colocated (colo) facilities.

B. Cooling energy model

To model server power consumption, we use a well-known linear model for computing the power consumed by a server [2]. The power consumed by a server (in Watts) can be modeled as a function of its workload, i.e., P_idle + (P_peak − P_idle)·λ, where P_idle is the power consumed by an idle server, P_peak is the power consumed during peak load, and λ is the normalized workload, i.e., the load served by the server as a fraction of its capacity. In order to calculate the server energy consumed in a data center, we further assume that the server load can be dynamically consolidated and that idle servers in a cluster can be turned off to save energy [5]. Given the energy consumed by a server using the above model, we can then derive a simple model to compute the energy needed to cool the servers. Basically, we transform the server energy to cooling energy using Power Usage Effectiveness (PUE), an energy efficiency metric for data centers, which is the ratio between the total energy consumed in a data center and the total energy consumed by the servers.
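As an illustration, the linear power model and the PUE relationship above can be sketched in a few lines; the default wattages below are hypothetical values chosen only for the example, while the linear form and the PUE ratio follow the text:

```python
def server_power(load_fraction, p_idle=150.0, p_peak=250.0):
    """Linear server power model: P_idle + (P_peak - P_idle) * lambda.

    load_fraction is the normalized workload in [0, 1]; the default
    wattages are illustrative, not measurements from the paper.
    """
    assert 0.0 <= load_fraction <= 1.0
    return p_idle + (p_peak - p_idle) * load_fraction


def cooling_power(it_power, pue=1.8):
    """Cooling power implied by the PUE ratio.

    PUE = total power / IT power, and total power is modeled as
    IT power + cooling power, so cooling power = (PUE - 1) * IT power.
    The survey average of 1.8 is the PUE assumed in the paper.
    """
    return (pue - 1.0) * it_power


# A fully loaded server draws P_peak watts; at PUE = 1.8, every watt
# of server power implies 0.8 W of cooling power.
print(cooling_power(server_power(1.0)))  # -> 200.0
```

At a PUE of 1.8, cooling accounts for 0.8 W per server watt, consistent with the earlier observation that cooling can approach a watt for each watt consumed by the servers.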
Given the data center PUE and the server energy consumption, we can calculate the cooling energy.¹ Recent surveys have shown that the average PUE of a data center is 1.8 [8], which is the value assumed in our experiments when estimating cooling energy.

C. Cooling technologies

Open air cooling: Data centers can be equipped with OAC using one of two OAC technologies: an air-side or a water-side economizer. In an air-side economizer, hot air is flushed outside the data center and cool air is drawn into the data center. A water-side economizer, on the other hand, uses water as the cooling medium and uses cooling towers to enable free cooling. Typically, a water-side economizer is used in data centers as it can be easily integrated with water-cooled chillers. However, depending on the existing infrastructure, either of the technologies can be integrated to avail free cooling. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) defines the permissible temperature and humidity for operating IT equipment. We assume that whenever the outside dry-bulb temperature, dew-point temperature and humidity are below the ASHRAE recommended maxima, a data center at that location can rely on outside air to cool its servers [12]. Modern server hardware has been built to withstand significantly higher temperature and humidity levels. For example, an ASHRAE class A1 data center is engineered to withstand temperatures of up to 32°C and humidity levels of 80%. More recent ASHRAE class A4 data center hardware can withstand temperatures of up to 45°C and humidity levels of 90%, making it feasible to use OAC even in locations with warm or humid climates [12]. We assume class A1 data centers in our experiments.

Thermal Energy Storage: TES allows cooling energy to be stored in a medium (ice, water, etc.), providing flexibility in drawing power from the grid. Most facilities use TES for load shifting. By storing energy when prices are cheap and discharging when prices are high, load is shifted in time to minimize the total energy cost.
Our work assumes that TES is provisioned to meet the peak demand of the data center.

Chillers: Most HVAC systems use water as a medium to cool the data center. Using chillers, heat is removed from the water, which is recycled back to the HVAC system. Since removing heat from liquid consumes a lot of power, it is not surprising that chillers consume one-third of the power in a data center [1]. Thus, limiting the use of chillers can bring substantial cost benefits in a data center.

¹PUE = Total Power / IT Power, and Total Power = IT Power + Cooling Power; given the PUE and the server power, we can compute the cooling power.

III. PROBLEM STATEMENT: AN OFFLINE LP FORMULATION

We formulate the problem addressed in our work as an optimal offline linear program (LP). The linear program takes as input complete knowledge of the (future) workload at each data center, future electricity prices and weather data, and computes which data center services what portion of the workload and how each data center is cooled, while minimizing the total cooling energy cost. Intuitively, the approach uses free OAC locally when possible, or redirects load to other nearby data centers where OAC is feasible if it is infeasible locally. When OAC is not possible locally or at other nearby locations, local (or remote) TES is used; chiller-based cooling may be used if it is cheaper or when TES capacity is depleted. Our optimal offline LP, while impractical in practice, provides a baseline for comparison with an online approach. Note that we are interested in minimizing the sum total cooling power cost across multiple data centers by making use of OAC and TES in an optimal manner. This problem requires global decisions about the routing of workloads across multiple data centers as well as local decisions about charging/discharging the TES and servicing the workload using a combination of OAC, TES and power drawn from the grid. Note that at each data center the incoming workload, the availability of OAC, and the electricity price all vary over time, making this a challenging control problem. The LP formulation we present below highlights the control decisions and constraints involved in this problem.

We begin by defining our model. We assume there are N data centers and that time is slotted. In each slot t, we denote the original workload intended for data center i by λ_i(t). For each data center i, let K_i denote the set of data centers where this workload can be routed. As an example, this could be based on a distance metric that ensures that the resulting routing delay is tolerable. Note that i ∈ K_i. Next, the amount of workload routed from data center i to j is denoted by λ_ij(t).
These λ_ij(t) must satisfy the following conservation constraint:

  ∑_{j ∈ K_i} λ_ij(t) = λ_i(t)    (1)

Next, the total incoming workload to data center i (after routing) is given by ∑_j λ_ji(t). We must have that

  ∑_j λ_ji(t) ≤ μ_i    (2)

where μ_i is the capacity of data center i. This workload results in a cooling power demand according to the model described earlier, and must be satisfied using a combination of OAC, TES and power drawn from the grid. The availability of OAC at data center i in slot t is denoted by a 0/1 variable O_i(t), i.e., 1 (resp. 0) denotes that OAC is available (resp. unavailable). Since OAC use incurs no power cost, without loss of generality we assume that all available OAC is first used to satisfy the cooling power demand and that any remaining demand is served using TES and power drawn from the grid. We denote the remaining demand (after OAC) at data center i by W_i(t). Thus, when O_i(t) = 1, all cooling power demand can be satisfied using OAC alone, and W_i(t) = 0. However, our formulation can easily be extended to consider the case where only part of the cooling power demand can be satisfied using OAC. Next, we denote the TES recharge and discharge amounts at data center i by R_i(t) and D_i(t) respectively. Also, denote the total power drawn from the grid by P_i(t). Then the following equality must be satisfied for all i, t:

  W_i(t) = P_i(t) − R_i(t) + D_i(t)    (3)

We assume that R_i(t), D_i(t) and P_i(t) are upper bounded by R_max, D_max and P_max respectively. Finally, the recharge and discharge decisions affect the stored energy in the TES as follows. Let Y_i(t) denote the amount of stored energy in the TES of data center i in slot t. Denoting its efficiency by 0 < α ≤ 1, we have:

  Y_i(t + 1) = Y_i(t) + α·R_i(t) − D_i(t)    (4)

We assume that the TES has a maximum capacity of Y_max. Since the stored energy cannot be negative, we have:

  0 ≤ Y_i(t) ≤ Y_max    (5)

Let C_i(t) denote the unit electricity price at data center i in slot t. Also, let C_min and C_max denote the minimum and maximum values taken by C_i(t).
Then the sum total power cost over an interval [0, T] is given by

  ∑_{t=1}^{T} ∑_{i=1}^{N} P_i(t)·C_i(t)    (6)

The LP formulation seeks to minimize the objective in (6) subject to all the constraints discussed so far. This is a linear program, since the objective as well as all the constraints are linear. However, it should be noted that the complexity of this LP increases with both N and T, and it becomes infeasible to solve for large instances. Further, solving this LP requires full knowledge of the entire workloads and prices as well as OAC availability. In the next section, we present an online solution to this problem that overcomes both of these limitations.

IV. ONLINE ALGORITHM

In this section, we present an online control algorithm for the power cost minimization problem presented in Sec. III. This algorithm is based on the technique of Lyapunov optimization [9] and is similar in spirit to the algorithm presented in [13] for the problem of power cost optimization in a single data center using batteries. Here, we extend this algorithm to consider multiple data centers along with the availability of OAC. The online control algorithm operates as follows. First, we define a shifted version X_i(t) of the stored energy level Y_i(t) for each data center as follows:

  X_i(t) = Y_i(t) − V·C_max − θ    (7)

where the parameters V and θ are constants defined as

  V = (Y_max − R_max) / (C_max − C_min/α)    (8)

  θ = V·C_min/α + R_max    (9)

and where we assume that V ≥ 0. Given the collection of X_i(t), the algorithm makes joint decisions about the routing of workloads as well as charging/discharging the TES by solving the following optimization problem every slot:

  max  ∑_{i=1}^{N} D_i(t)·(θ + X_i(t) + V·C_i(t)) − ∑_{i=1}^{N} R_i(t)·(α·(θ + X_i(t)) + V·C_i(t)) − V·∑_{i=1}^{N} W_i(t)·C_i(t)    (10)

This optimization is subject to constraints (1), (2), (3) as well as the upper bounds on R_i(t), D_i(t) and P_i(t). It should be noted that this results in a much simpler LP compared to the offline formulation presented earlier. In fact, we can further simplify this LP by observing the following two structural properties of its optimal solution:

  - If θ + X_i(t) + V·C_i(t) < 0, then D_i(t) = 0, i.e., there is no discharge from the TES of data center i in slot t.
  - If α·(θ + X_i(t)) + V·C_i(t) > 0, then R_i(t) = 0, i.e., there is no recharge of the TES of data center i in slot t.

The above properties follow by noting that their corresponding terms in the objective are maximized by choosing the D_i(t) or R_i(t) values to be 0. After implementing the output of this optimization, the online algorithm updates the values of X_i(t) and repeats this procedure. We make the following observations about this algorithm. First, it is online, requiring no knowledge of future prices, workloads or OAC availability. Second, the algorithm is easy to implement, requiring the solution of a simple (and much smaller) LP in each slot. Further, we show both theoretically and empirically that the cost achieved by our online technique is within a bounded additive term of the solution generated by the offline LP. This additive term can be made arbitrarily small by scaling the TES capacity. This is formally shown by the following theorem.

Theorem 4.1: Suppose the online algorithm given by (10) is implemented over T slots with a control parameter V as defined in (8).
Then, the following hold:

1) Each queue X_i(t) is deterministically upper and lower bounded for all t as follows:

  −V·C_max − θ ≤ X_i(t) ≤ Y_max − V·C_max − θ    (11)

2) The TES energy level Y_i(t) satisfies, for all i, t:

  0 ≤ Y_i(t) ≤ Y_max    (12)

3) Suppose the processes C_i(t), O_i(t) and λ_i(t) are i.i.d. over slots. Then the expected per-slot cost under the online algorithm is within B/V of the optimal offline value, i.e.,

  (1/T) ∑_{t=1}^{T} ∑_{i=1}^{N} E{ P_i^online(t)·C_i(t) } ≤ (1/T) ∑_{t=1}^{T} ∑_{i=1}^{N} E{ P_i^lp(t)·C_i(t) } + B/V    (13)

where B is a constant (independent of V) defined as

  B = ((D_max)² + (R_max)²) / 2    (14)

Proof: The proof is based on the technique of Lyapunov optimization [9] and is presented in the tech report [4].

The performance bound (13) shows that increasing V can reduce the gap between the offline LP cost and the online cost.

V. EXPERIMENTAL EVALUATION

In this section, we first describe our experimental methodology and then present our results.

A. Experimental Methodology

We use a trace-based simulation to analyze the potential benefits of the Lyapunov-based algorithm discussed in Section IV. Our traces comprise a month-long workload trace collected from the Akamai CDN, a year-long weather trace provided by the National Oceanic and Atmospheric Administration (NOAA), and a year-long real-time pricing (RTP) trace, summarized in Table I. The workload trace contains load information, total capacity, the number of servers deployed, and location information such as city, latitude and longitude. The load information is captured at a granularity of 5 minutes. On the whole, the trace contains 39 US locations, with a total of 6345 servers spread across all locations. For the purpose of our evaluation, we selected six US data center locations as our representative sample. Unless stated otherwise, all of our experiments use all six locations. The weather trace contains hourly dew-point, dry-bulb temperature, humidity and location information for the year 2012. Finally, our pricing data contains hourly pricing information for the years 2011 and 2013.
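A minimal sketch of the per-slot TES state update (4) that such a trace-driven simulation tracks is shown below; the capacity and rate limits are hypothetical units, not values taken from the traces:

```python
def tes_step(y, recharge, discharge, alpha=1.0,
             y_max=100.0, r_max=10.0, d_max=10.0):
    """One slot of the TES update Y(t+1) = Y(t) + alpha*R(t) - D(t).

    Recharge and discharge are clipped to their rate limits and to
    the physical bounds 0 <= Y <= Y_max; all capacities here are
    hypothetical placeholders.
    """
    recharge = min(recharge, r_max, (y_max - y) / alpha)  # keep Y <= Y_max
    discharge = min(discharge, d_max, y)                  # keep Y >= 0
    return y + alpha * recharge - discharge


# As in our simulations, start with a full TES and efficiency alpha = 1.
y = 100.0
y = tes_step(y, recharge=0.0, discharge=8.0)  # discharge during a pricey slot
y = tes_step(y, recharge=5.0, discharge=0.0)  # recharge when prices drop
print(y)  # -> 97.0
```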
To determine OAC feasibility for all locations at a given time, we first identify the weather stations closest to these locations. Since the locations of the data centers and the weather stations are known, we can map each data center to its closest weather station. Next, we use the dry-bulb temperature and dew point to determine whether the conditions are within the ASHRAE standards for OAC. Unlike the five-minute granularity of the workload information, the weather data is hourly. Hence, we assume the weather remains the same for a given hour. This assumption is reasonably accurate since the weather doesn't change rapidly over short time intervals. Given that we had only a month-long workload trace, we repeat our workload pattern for each month of the year and use the weather data to create a combined trace containing OAC feasibility information for each location for the entire year.
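The per-hour feasibility test described above reduces to a simple threshold check; in the sketch below, the class A1 limits are illustrative stand-ins for the ASHRAE envelope rather than quoted values:

```python
# Illustrative stand-ins for the ASHRAE class A1 allowable maxima;
# the paper defers to the published ASHRAE specification [12].
A1_MAX_DRY_BULB_C = 32.0    # maximum dry-bulb temperature (deg C)
A1_MAX_DEW_POINT_C = 17.0   # maximum dew-point temperature (deg C)
A1_MAX_HUMIDITY_PCT = 80.0  # maximum relative humidity (%)


def oac_feasible(dry_bulb_c, dew_point_c, humidity_pct):
    """Return True when outside air alone can cool the data center.

    Mirrors the rule above: OAC is usable whenever the dry-bulb
    temperature, dew point and humidity are all at or below the
    recommended maxima for the data center's ASHRAE class.
    """
    return (dry_bulb_c <= A1_MAX_DRY_BULB_C
            and dew_point_c <= A1_MAX_DEW_POINT_C
            and humidity_pct <= A1_MAX_HUMIDITY_PCT)


# Hourly weather readings are held constant within the hour,
# matching the hourly granularity of the NOAA trace.
print(oac_feasible(25.0, 12.0, 55.0))  # mild day -> True
print(oac_feasible(38.0, 21.0, 70.0))  # hot, humid day -> False
```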

TABLE I: Real-time pricing dataset

  ISO               Location       Duration
  California ISO    San Jose       Jan - Dec, 2011
  ERCOT ISO         Houston        Jan - Dec, 2013
  Midcontinent ISO  Chicago        Jan - Dec, 2013
  NEMassBOST        Boston         Jan - Dec, 2013
  New York ISO      New York       Jan - Dec, 2013
  PJM ISO           Philadelphia   Jan - Dec, 2013

Next, we compute the server and cooling energy required in each data center using the combined trace containing the OAC feasibility information. The approach to computing the server and cooling energy is described in Section II. Finally, we compute the reduction in energy cost using our dynamic price history for various TES capacities and redirection radii r. The radius r parameter imposes a load redirection constraint on each data center, restricting the movement of load to data centers that are within radius r. Such restrictions on load movement are useful for requests that are sensitive to latency. For all our simulations, we assume the TES starts at full capacity and its efficiency is α = 1.

B. Empirical Results

1) Cooling energy cost savings using OAC and TES: We analyze the ability of OAC and TES to minimize the total cooling energy cost. We ran our Lyapunov-based algorithm for the entire year with varying TES capacity and radius r. Note that the energy cost reduction comes either from free OAC, from using TES, or from redirecting load to cheaper-price locations. Thus, to evaluate the potential energy cost savings, we compare against a baseline where no OAC or TES technology, nor any mechanism for redirecting load to cheaper electricity-price locations, is available. For each data center i and time slot t, the algorithm decides whether to service the load locally or remotely and whether to charge or discharge the TES, based on the workload, price, OAC feasibility, and TES capacity. Since OAC is scarce during summer months, we plot the energy cost savings for the month of July with r = 500 km (see Figure 1(a)). Note that an increase in TES capacity increases the overall energy cost savings. In particular, the overall energy cost savings increase from 63% to 95%. However, the cost savings see diminishing returns beyond a TES capacity of 30 minutes.
While increasing TES capacity further does not add additional cost savings, a large TES capacity allows peak-demand shaving and is useful in a peak-based pricing scheme. In addition, it can store renewable energy from intermittent sources such as solar or wind, reducing dependency on brown energy. Interestingly, the individual cost savings of some cities may decrease with an increase in TES capacity, even though the overall cost savings increase. Specifically, the cost savings for Chicago decrease from 48% to 47% with an increase in TES capacity from 0 to 25 minutes. Such behavior is observed when servicing a load remotely is cheaper than servicing it locally, decreasing the savings of that data center but increasing the overall savings. Figure 1(b) shows the overall energy cost savings for all six US locations with radius r = 50 and r = 500 km and a TES capacity of 45 minutes. Note that cost savings of at least 98% are achieved during winter months with r = 50 km, and even greater savings with r = 500 km. Since OAC is free, the additional savings for r = 500 km come from load redirected to a distant data center where OAC is feasible and cooling capacity is available.

[Fig. 1: Energy cost savings across six US locations: cost savings in July versus TES capacity, and annual cost savings by month for r = 50 and r = 500 km.]

[Fig. 2: Performance comparison between the offline LP, online Lyapunov, and baseline OAC approaches for the month of July, versus TES capacity: (a) r = 50 km; (b) r = 500 km.]

2) Convergence of our Lyapunov approach: We validate the convergence of our algorithm stated in Theorem 4.1 by comparing it against the offline LP described in Section III. To compare the offline LP with the online Lyapunov algorithm, we compute the cost savings using both approaches.
We also compare the cost savings against a baseline OAC scheme, whose savings come from using only OAC and are calculated with a greedy approach: load is redirected from data centers where OAC is unavailable to data centers where OAC is available, subject to the distance and capacity constraints, i.e., load is redirected only to a data center within radius r and must not exceed that data center's cooling capacity. Figures 2 (a) and (b) show the results for r = 500 and r = 5000 kms respectively. Note that the gap between the offline LP and the online algorithm shrinks as TES capacity increases. In fact, with a TES capacity as little as 30 minutes, our approach gets close to the offline algorithm.

3) Load balancing and its impact on performance: To study our online algorithm's impact on performance due to load redirection, we ran our algorithm on all 39 US locations to better understand its behavior on a more comprehensive dataset. We assign each data center a pricing trace from Table I in a round-robin manner. While the algorithm does not account for the cost of redirecting load to a remote datacenter, we impose a redirection constraint using the radius parameter. Note that we use distance as a proxy for latency, i.e., load sees a higher latency if moved further away from its assigned
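The greedy baseline just described can be sketched as follows. The data-structure shapes, function name, and all inputs are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the greedy baseline: each data center without local OAC
# redirects its cooling load to OAC-feasible data centers within radius
# r that still have spare cooling capacity. Illustrative only.

def greedy_oac_redirect(loads, oac_feasible, capacity, within_r):
    """loads: {dc: cooling load}; oac_feasible: set of DCs with OAC;
    capacity: {dc: spare OAC cooling capacity}; within_r: {dc: eligible DCs}.
    Returns {dc: load left that cannot be cooled via OAC}."""
    spare = dict(capacity)
    residual = {}
    for dc, load in loads.items():
        if dc in oac_feasible:
            residual[dc] = 0.0          # local OAC covers this DC's load
            continue
        remaining = load
        for target in within_r[dc]:     # only DCs within radius r
            if target in oac_feasible and spare.get(target, 0) > 0:
                moved = min(remaining, spare[target])  # capacity constraint
                spare[target] -= moved
                remaining -= moved
                if remaining == 0:
                    break
        residual[dc] = remaining        # load still needing non-OAC cooling
    return residual

residual = greedy_oac_redirect(
    {"A": 10.0, "B": 3.0, "C": 2.0},        # cooling load per DC
    {"B", "C"},                              # DCs where OAC is feasible
    {"B": 5.0, "C": 10.0},                   # spare OAC cooling capacity
    {"A": ["B", "C"], "B": [], "C": []},     # DCs within radius r of each DC
)
```

Here DC "A" has no local OAC, so its 10 units of load are split across "B" and "C" up to their spare capacities, leaving no residual.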

location.

[Fig. 3: Average load moved (%) versus distance (kms) for the Lyapunov and modified Lyapunov algorithms, with r = 5000 kms and a TES capacity of 5 hours: (a) January; (b) July.]

We selected two months, January and July, as representative months to study the effect of load redirection due to OAC. Figures 3 (a) and (b) show the percentage of load redirected with a TES capacity of 5 hours and r = 5000 kms. We observe that 6% and 21% of the load is serviced locally in January and July respectively. We note that although OAC may be abundant locally, our algorithm may redirect load to another OAC-feasible data center within radius r, or to a cheaper-price location, such that the energy cost is minimized. As our approach does not consider routing cost, a higher degree of redirection is expected with r = 5000. Indeed, redirecting load impacts performance and increases latency. However, delay-tolerant workloads such as batch jobs can be redirected to a remote data center to leverage OAC or cheaper electricity prices. To avoid redirection when local OAC is available, we ran a modified Lyapunov algorithm wherein we ensure all OAC cooling energy is provided locally; if OAC is infeasible locally, load is greedily redirected to a data center where OAC is feasible, subject to the radius constraint. Finally, the residual load not satisfied by OAC is provided as input to our Lyapunov approach. We notice a 91% increase in load serviced locally in January, and a 36% increase in July. Thus, the modified algorithm minimally reroutes load, as most of the cooling needs are met locally. We omit results for r = 500 kms, as all the load is serviced within a 500 kms distance and performance is minimally impacted. We note that the energy cost savings (not shown here) for the modified Lyapunov approach are also similar [4].

VI. RELATED WORK

Prior work on energy cost optimization focused on leveraging price differentials across various locations [11].
Other studies involved shutting down CDN servers to reduce server energy costs [7], [5]. Liu et al. investigated the benefits of using renewable energy to reduce electricity cost and minimize the use of brown energy [6]. In the context of thermal energy storage, researchers studied the use of thermal storage to reduce electricity cost in data centers [14]. Our Lyapunov-based technique is inspired by the approach used in Urgaonkar et al. and Guo et al. [3], [13], [9]. While Urgaonkar et al. used Lyapunov optimization to reduce server energy cost for a single data center, Guo et al. focused on reducing energy cost for multiple data centers. Unlike their work, which is in the context of reducing server energy cost in data centers, our work focuses on minimizing cooling energy cost using OAC and TES. To the best of our knowledge, an integrated optimization of OAC with TES using a Lyapunov-based approach has not been studied previously.

VII. CONCLUSIONS

In this paper, we focused on minimizing the cooling energy cost in CDNs by integrating OAC with TES. We provided a Lyapunov optimization-based online approach that optimally distributes load across geographic locations to maximize the use of OAC and make optimal use of TES. We empirically evaluated our approach using extensive traces and showed that our results are optimal. Our results showed that at least 64% and 98% cooling energy savings can be achieved in CDNs during summer and winter respectively. Also, we showed that CDNs can reduce their cooling energy footprint by at least 57% by switching to OAC.

ACKNOWLEDGMENTS

This research is supported in part by National Science Foundation grants 1422245, 122959, and 1413998.

REFERENCES

[1] Designing a Very Efficient Data Center. http://on.fb.me/1jrq3pk.
[2] L. A. Barroso and U. Hölzle. The case for energy-proportional computing. IEEE Computer, 40(12):33–37, 2007.
[3] Y. Guo and Y. Fang. Electricity cost saving strategy in data centers by using energy storage. IEEE TPDS, 24(6):1149–1160, 2013.
[4] S. Lee, R. Urgaonkar, R. Sitaraman, and P. Shenoy.
Cost minimization using renewable cooling and thermal energy storage in CDNs. Technical Report UM-CS-15-11, UMass Amherst, May 2015.
[5] M. Lin, A. Wierman, L. L. Andrew, and E. Thereska. Dynamic right-sizing for power-proportional data centers. IEEE/ACM Transactions on Networking (TON), 21(5):1378–1391, 2013.
[6] Z. Liu, Y. Chen, C. Bash, A. Wierman, D. Gmach, Z. Wang, M. Marwah, and C. Hyser. Renewable and cooling aware workload management for sustainable data centers. In ACM SIGMETRICS Performance Evaluation Review, volume 40, pages 175–186. ACM, 2012.
[7] V. Mathew, R. K. Sitaraman, and P. Shenoy. Energy-aware load balancing in content delivery networks. In INFOCOM, 2012 Proceedings IEEE, pages 954–962. IEEE, 2012.
[8] R. Miller. Uptime Institute: The average PUE is 1.8. Data Center Knowledge, 2011.
[9] M. J. Neely. Stochastic Network Optimization with Application to Communication and Queueing Systems. Morgan and Claypool Publishers, 2010.
[10] S. Pelley, D. Meisner, T. F. Wenisch, and J. W. VanGilder. Understanding and abstracting total data center power. In Workshop on Energy-Efficient Design, 2009.
[11] A. Qureshi, R. Weber, H. Balakrishnan, J. Guttag, and B. Maggs. Cutting the electric bill for internet-scale systems. In ACM SIGCOMM Computer Communication Review, volume 39, pages 123–134. ACM, 2009.
[12] M. Patterson, T. Harvey, and J. Bean. Updated air-side free cooling maps: The impact of ASHRAE 2011 allowable ranges. The Green Grid, 2012. http://bit.ly/lfi14q.
[13] R. Urgaonkar, B. Urgaonkar, M. J. Neely, and A. Sivasubramaniam. Optimal power cost management using stored energy in data centers. In Proc. of the ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems, pages 221–232. ACM, 2011.
[14] Y. Wang, X. Wang, and Y. Zhang. Leveraging thermal storage to cut the electricity bill for datacenter cooling. In Proc. of the 4th Workshop on Power-Aware Computing and Systems, page 8. ACM, 2011.