Data Centers as Demand Response Resources in the Electricity Market: Some Preliminary Results

Rui Wang, ECE Department, Drexel University, Philadelphia, PA 19104, USA, rui.wang@drexel.edu
Nagarajan Kandasamy, ECE Department, Drexel University, Philadelphia, PA 19104, USA, kandasamy@drexel.edu
Chika Nwankpa, ECE Department, Drexel University, Philadelphia, PA 19104, USA, nwankpa@drexel.edu

ABSTRACT
Electric utilities have recently instituted demand response (DR) programs, with economic incentives that encourage consumers to modify their demand level and usage patterns during periods of peak load as well as grid emergencies. Data centers, being major consumers of power, can play an important role in the efficient operation of electrical grids. This paper describes work aimed at developing an optimization framework that allows data centers to be treated as DR resources that can effectively participate in wholesale energy markets by reducing their consumption of electricity in response to signals from utility companies. In return, data centers are paid a reward based on the prevailing market price of electricity. We use a case study involving a set of geographically distributed data centers participating in an economic DR program to validate the framework.

Keywords
Data centers, virtual machine migration, demand response program, wholesale electricity market

1. INTRODUCTION
When electricity demand is close to the peak level supported by the grid, this demand is usually satisfied using less efficient and higher-cost generators; prices in wholesale markets can increase from less than fifty dollars per MWh off peak to hundreds of dollars at peak hours. Under such circumstances, even a small reduction in demand can result in an appreciable reduction in the system's marginal costs of production. In recent years, demand response (DR) programs designed to encourage consumers to modify their electric demand level and usage pattern have become increasingly critical to the reliable operation and transmission efficiency of an (aging) electrical grid.
Moreover, in deregulated electricity markets where the marginal generating unit determines the market price for a load, a drop in wholesale peak prices means that prices for all customers are also held in check, an important social benefit [17]. A specific type of DR program called economic DR is designed to provide monetary incentives to customers to reduce power consumption in response to price signals from the utility when the market price, the so-called locational marginal price (LMP) of energy, is high. (Section 2 describes economic DR programs in more detail.)

Data centers are major consumers of electricity; a typical center consumes as much energy as 25,000 households. In 2007, worldwide, there were about 30 million servers consuming 100 TWh of energy, almost 0.5% of the world's energy production, at a cost of $9 billion; consumption increased further to 200 TWh in 2010 and has been projected to grow at about 9% annually [2, 6]. Therefore, data centers can play an important role in ensuring the reliable and efficient operation of electrical grids worldwide while generating additional revenue from the electricity market.

Feedback Computing '12, September 17, 2012, San Jose, California, USA.

This paper describes preliminary work aimed at developing mathematical models and an optimization framework allowing a data center to be treated as a DR resource that can effectively participate in wholesale energy markets by reducing its consumption of electricity in response to price signals from the utility. The DR resources are paid the prevailing LMP, or some fraction thereof, for such reductions. Prior research has developed operating techniques for data centers that take the prevailing LMPs into account only to reduce electricity cost; Section 5 discusses the related work in more detail. The basic idea is one of self-scheduling: migrate the workload, in the form of virtual machines (VMs), within geographically distributed data centers, thereby reducing the load of data centers situated in areas with high LMPs.
A recent FERC order implemented in April 2012 has, however, eliminated the option for self-scheduling, and DR resources must now be dispatched (or cleared) by the responsible utility; that is, DR refers only to energy-modifying activities performed in response to explicit economic/reliability signals provided by the utility.1 The following key issues must be addressed if data centers are to participate more deeply in the DR market and trade reduced power for economic reward.

Utilities currently issue DR notifications with a lead time of two to three hours, along with requirements on the amount of load to be curtailed, usually 100 kW. So, data centers wishing to participate must coordinate and complete their VM migrations, and curtail the load, within this deadline. The migration time incurred per VM is a function of the available network bandwidth and the geographical distance between data centers, limiting the number of VMs that can be migrated between two centers within the deadline. This constraint must be explicitly considered in the optimization problem.

Once the DR resource is dispatched by the utility, failure to achieve the stated load reduction when participating in certain DR programs incurs stiff financial penalties. So, when making a decision to commit to such DR programs, the optimizer must consider the various risks that may cause the curtailment operation to be ultimately unsuccessful. One such risk factor arises when VMs are in the process of being migrated during the two-hour window but fluctuations in the available network bandwidth cause migration times to increase more

1 In the US, the Federal Energy Regulatory Commission (FERC) has jurisdiction over inter-state and wholesale electric rates.
Figure 1: The LMP values in a wholesale electricity market over a 24-hour period [11]. The Y-axis shows the $/MWh.

Figure 2: The overall operating principle underlying the RTDR and EDR programs (power in kW versus time in hours, with the lead time and down time marked between instants t1, t2, and t3).

than anticipated, and the number of migrated VMs is insufficient to achieve the promised load reduction. Another risk factor arises after VMs are migrated to destination data centers, if the server provisioning needed to accommodate the incoming workload variation is under-estimated, resulting in SLA violations. These uncertainties/risks must be incorporated explicitly within the optimization framework as well. Finally, there is the issue of the financial reward: once the resource is cleared by the utility, the actual reward is only known after the time of the settlement. So, the optimizer must continually learn the outcome of its DR actions and use it to guide future decisions.

The optimization framework presented in this paper is a step towards addressing the above issues, and we present a case study involving a set of data centers participating in an economic DR program, with promising results. To the best of our knowledge, this paper is the first to study how DR programs can be integrated into data center operations, thereby enabling them to trade the reduced power for financial reward.

The paper is organized as follows. Section 2 familiarizes the reader with the DR program. Section 3 develops the optimization problem and Section 4 presents a DR case study using a set of distributed data centers. Section 5 discusses related work and we conclude the paper in Section 6.

2. PRELIMINARIES
This section first familiarizes the reader with the demand response program in the US electricity market. We also discuss issues related to VM migration over wide-area networks between geographically distributed data centers.

The demand response program: As of 2012, the electricity market in North America is managed by ten regional transmission organizations (RTOs) and independent system operators (ISOs).
For example, PJM Interconnection, the world's largest RTO, coordinates the movement of wholesale electricity in 13 US states, serving about 50 million people. The case study presented in this paper uses PJM as an example; other RTOs/ISOs have similar structures. The overall electricity market comprises both wholesale and retail markets [10]. In the wholesale market, the competing suppliers (generators) sell their electricity output to retailers at time-changing locational marginal prices (LMPs).2 Fig. 1 shows an example of LMP values for the city of Philadelphia over a 24-hour period. The wholesale market contains two sub-markets: (1) the day-ahead market, a forward market where hourly prices are calculated for the next operating day based on generation offers and demand bids; and (2) the real-time market, a spot market where current prices are calculated at the granularity of minutes based on prevailing operating conditions within the power grid. In the retail market, the electricity is re-priced and sold to end users (such as data centers), and as such, these customers tend to be unresponsive to the wholesale price.

A demand response (DR) program allows end-user customers (also called DR resources) to participate in wholesale markets by reducing their electricity consumption when LMP values are high or when the reliability of the electric grid is threatened. In return, customers receive a corresponding economic reward (payment) [13]. PJM currently offers three types of DR programs; customers can participate in these programs through agents called curtailment service providers [12, 17]. Economic DR is offered in both the day-ahead and real-time markets when the LMP is higher than the monthly net benefits price (NBP).3 Day-Ahead Demand Response (DADR) is a mandatory program where participants bid their proposed reduction in the day-ahead market and are subject to penalty for non-compliance.

2 LMP reflects the value of the energy at the specific location and time it is delivered. Each location has its own regional LMPs.
Real-Time Demand Response (RTDR) is a voluntary program in which PJM dispatches signals to participants requesting power reduction. Customers participating in this market can choose whether to respond to this signal, and no penalty exists for non-compliance.4 Emergency DR (EDR) comprises both mandatory and voluntary programs to reduce load when PJM needs assistance to maintain the reliability of the electric grid under supply shortage or other emergency conditions. Ancillary service DR includes synchronized reserves, with the ability to reduce consumption within 10 minutes of PJM dispatch; day-ahead scheduling reserves, with the ability to reduce consumption within 30 minutes; and regulation, with the ability to follow PJM's regulation and frequency signals. A customer may participate in multiple DR programs.

In this paper, we focus on integrating RTDR and EDR into data center operations, specifically to trade power reduction for economic reward. Although RTDR and EDR signals are issued by PJM under different situations, they follow essentially the same principle, as shown in Fig. 2. At time t1, assume a participant is consuming a power of

3 The NBP represents a price at which the societal benefits gained from a reduction in LMP will exceed the cost to pay for the economic DR (from a generation viewpoint).
4 Before April 2012, RTDR was self-scheduled by participants. However, since the implementation of FERC Order 745, participants must respond only to notification signals from PJM to earn economic reward [12].
P kW when it receives a signal from PJM requesting a reduction of \Delta P kW by time t2 and holding it until time t3.5 In the RTDR program, the reward R_t during a given hour t is based on the real-time LMP, as (1) shows:

R_t = \begin{cases} \Delta P \cdot \mathrm{LMP}_t & \mathrm{LMP}_t \ge \mathrm{NBP} \\ \Delta P \cdot (\mathrm{LMP}_t - GT) & GT \le \mathrm{LMP}_t < \mathrm{NBP} \\ 0 & \mathrm{LMP}_t < GT \end{cases}   (1)

where GT is the generation and transmission components of the participant's electric bill [12]. In the EDR program, R_t is

R_t = \begin{cases} \Delta P \cdot \max(\$500/\mathrm{MWh}, \mathrm{LMP}_t) & \text{voluntary} \\ \Delta P \cdot \max(\mathrm{StrikePrice}, \mathrm{LMP}_t) & \text{mandatory} \end{cases}   (2)

where the strike price (up to $500/MWh) is the price at which the customer is willing to participate in the market [17]. As we can see from (1) and (2), the reward depends on the prevailing LMP values from t2 to t3 (when the demand is actually curtailed). Since these LMP values are unknown at t1, participants must make curtailment decisions based on predicted LMP values.

VM migration across data centers: Data centers host online services, in the form of enterprise applications, on distributed computing systems comprising heterogeneous networked servers. Virtualization technology is usually used to support multiple services with fewer computing resources. This technology enables a single server to be shared among multiple performance-isolated platforms called virtual machines (VMs), where each VM can support one or more services. Also, virtualization enables on-demand computing where resources such as CPU, memory, and disk space are allocated to VMs as needed, based on the currently prevailing workload demand, rather than statically, based on the peak workload demand. By dynamically provisioning VMs and turning servers on/off as needed, data center operators can consolidate multiple workloads onto fewer servers, achieving higher server utilization and curtailing power consumption while still maintaining SLAs; numerous research studies have focused on this problem in great detail.6
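The tiered structure of (1) and (2) is easy to misread in prose; the following Python sketch restates it. This is purely illustrative: the function and parameter names are ours, prices are in $/MWh, and the curtailment \Delta P is expressed in MW.

```python
def rtdr_reward(delta_p_mw, lmp, nbp, gt):
    """Hourly RTDR reward as per Eq. (1).

    delta_p_mw -- curtailed load Delta-P in MW (e.g., a 100 kW cut is 0.1 MW)
    lmp, nbp, gt -- real-time LMP, net benefits price, and the
    generation-and-transmission component of the bill, all in $/MWh.
    """
    if lmp >= nbp:        # LMP at or above the NBP: the full LMP is paid
        return delta_p_mw * lmp
    if lmp >= gt:         # between GT and NBP: LMP net of GT
        return delta_p_mw * (lmp - gt)
    return 0.0            # below GT: no reward


def edr_reward(delta_p_mw, lmp, strike_price=None):
    """Hourly EDR reward as per Eq. (2); voluntary if no strike price is set."""
    floor = 500.0 if strike_price is None else strike_price
    return delta_p_mw * max(floor, lmp)
```

For example, a 250 kW curtailment (0.25 MW) at an LMP of $120/MWh with NBP = $75/MWh and GT = $40/MWh yields `rtdr_reward(0.25, 120, 75, 40)` = $30 for that hour (these price values are hypothetical).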
The use of dynamic power management schemes within a data center can mean that its real-time power consumption P is already (near) minimal, leaving little margin for further power curtailment (in terms of \Delta P in Fig. 2) and precluding a single data center from responding to DR signals, especially given the minimum requirement of 50-100 kW on \Delta P. However, if a company has a number of geographically distributed data centers, then by migrating some of the workload (in the form of live VM migrations) from source application servers in one data center to destination application servers in other data centers, we can shut down the source servers to trade \Delta P for the reward while still maintaining the SLAs.7 The VMs can migrate back to the source servers after time t3.

5 The minimum \Delta P that the customer must achieve varies by RTO and typically lies in the 50-100 kW range. The deviation between the requested reduction and the actually measured curtailment should be less than 20%, and the notification lead time and downtime are usually a couple of hours [12].
6 For example, Wang and Kandasamy study decentralized control architectures designed to maintain, at any given time, the fewest number of operational servers, each of which is highly consolidated, so that the real-time power consumption P within a data center is substantially reduced [18].
7 We assume that VMs in the application tier process session-based workload. So, to maintain the sessions of redispatched workloads, each VM's state must be copied over to the destination, while its storage (maintained within the database tier) can stay within the source data center. For sessionless workloads, the corresponding VMs need not be migrated to the destination; the necessary VMs can be freshly instantiated, and the design in Section 3 can be easily simplified to fit this situation.

Figure 3: The locations of distributed data centers used in our case study.
In this example, the blue lines indicate communication links shorter than 500 kilometers between data centers, whereas red lines indicate links longer than 500 kilometers (infeasible for live VM migration) or no link existing between data centers. Made with Google Maps.

In terms of VM migration over wide-area networks, VMware's vMotion technology [15], combined with solutions from other vendors such as Cisco [16], F5 [7], and NetEx [5], allows VMs to be migrated between data centers as long as the network satisfies minimum bandwidth (622 Mbps) and round-trip latency requirements (not to exceed 10 milliseconds in vMotion 5.0, corresponding roughly to a one-way distance of 500 kilometers) [3]. The migration time for a VM varies depending on its memory usage [4] as well as the distance (network latency) and the available bandwidth between the data centers; for example, published data shows that it takes about 80 seconds for a VM with 8 GB of memory to migrate between two data centers 100 kilometers apart [16]. During the migration process, the application hosted within the VM maintains all its sessions on the source server except for a transient drop in performance (measured to be around 10%) when the VM switches to execute on the destination server [16]. Finally, as all the VMs within a source server are migrated together exclusively to the same destination server, we must ensure that the CPU and memory capacity available at the destination server exceeds that of the source server, so that the current resource provisioning to the VMs can still be satisfied at the destination to maintain a steady SLA.

We assume the availability of geographically distributed data centers situated in multiple regional electrical markets. Fig. 3 and Table 1 show the data center locations used in our case study. The centers support different services and have inter-connections via dedicated fiber-optic links. Finally, application servers are categorized into multiple types based on CPU and memory capacity, as shown in Fig.
4; servers with more CPU/memory resources are denoted using larger type numbers, and a data center houses a mix of server types. We define i as one of the source data centers that migrates VMs out, whereas j denotes one of the destination centers accommodating those VMs, with i \ne j; k and l denote the server types within data centers i and j, respectively; x^{kl}_{ij} indicates the number of type-k servers in data center i whose VMs are migrated to type-l servers within center j, where l \ge k ensures that the destination server possesses more CPU/memory resources than the source server.

Figure 4: The scheme of live VM migration from source data center i (with Na^k_i active type-k servers) to destination data center j (with No^l_j idle type-l servers), via the decision variables x^{kl}_{ij}.

3. PROBLEM FORMULATION
The VM migration problem among multiple source and destination data centers is posed as a unified mixed-integer linear programming (MILP) problem, as shown below, and solved in real time. Data centers may purchase electricity from retailers in multiple ways: (1) a fixed amount of energy (in MWh), whose value is determined via capacity planning, at a fixed annual monetary cost; (2) a flat rate (in $/MWh) for the power consumed; and (3) the real-time LMP (in $/MWh) for the power consumed. If the electricity is purchased as per case (1) mentioned above, for data centers that accommodate the migrated VMs the corresponding increase in power consumption does not lead to extra electricity bills, due to the fixed annual cost. The cost function is

\min_{x^{kl}_{ij}} \sum_{i,j,k,l} c^k x^{kl}_{ij},   (3)

where c^k is the round-trip migration cost in dollars incurred by a type-k server due to the momentarily reduced SLA, and it is proportional to the server's processing capacity. Equation (3) minimizes the cumulative migration cost incurred by all data centers. For case (2) of electricity purchase, another term must be added to calculate the difference in electricity cost incurred due to the servers (VMs) that are migrated. For example, a hundred operating servers may cost x dollars at the source data center; once migrated to the destination, however, they may cost y dollars to operate because: (a) VMs from a source server may be accommodated by a destination server that consumes more power, (b) the data centers may have different PUE ratings,8 and (c) the flat electricity rates r may vary in different locations. The cost function is then represented as

\min_{x^{kl}_{ij}} \sum_{i,j,k,l} c^k x^{kl}_{ij} + \sum_{i,j,k,l} t_D \left( r_j \, PUE_j \, p^l - r_i \, PUE_i \, p^k \right) x^{kl}_{ij},   (4)

where p^k is the (computing) power consumed by a type-k server, and t_D is the DR downtime. Finally, considering case (3), we can simply estimate the real-time LMPs and use them to replace the rate r in Equation (4).
The cost functions corresponding to all three cases are subject to the following constraints:

x^{kl}_{ij} \in \mathbb{Z}_{\ge 0},   (5)
\sum_{j,l} x^{kl}_{ij} \le Na^k_i,   (6)
\sum_{i,k} x^{kl}_{ij} \le No^l_j,   (7)
\sum_{k,l} t_{ij} \, (m^k x^{kl}_{ij}) \le t_L,   (8)
\sum_{k} PUE_i \, p^k \left( Na^k_i - \sum_{j,l} x^{kl}_{ij} \right) \le P_i - \Delta P_i.   (9)

Here, constraint (6) ensures that the total number of type-k servers migrated out of data center i cannot exceed Na^k_i, the estimated number of type-k servers in data center i that should be operational to satisfy the incoming workload intensity. Similarly, (7) guarantees that the total number of servers migrated into type-l servers in data center j does not exceed No^l_j, the estimated number of type-l servers in data center j that are idle (meaning they can accommodate incoming VMs).9 In (8), m^k is the memory capacity of a type-k server and t_{ij} is the time needed to migrate one GB of memory from data center i to j, calculated as

t_{ij} = t_0 + a \cdot d_{ij},   (10)

where t_0 is the base time, a is a constant factor, and d_{ij} is the distance between the data centers. As per (8), migration of the chosen VMs must be completed within the available lead time t_L, which limits the number of VMs that can be migrated between data centers. Finally, (9) guarantees that the power eventually consumed by the active servers left in data center i is no more than P_i - \Delta P_i, where P_i is the current power consumption and \Delta P_i is the required power curtailment.

The financial reward R_i earned by data center i from DR programs has uncertainties because it is related to future LMPs. We can estimate it based on historical LMPs using well-known forecasting methods such as Holt's linear exponential smoothing [9], and the resulting R_i follows a normal distribution as per

R_i \sim N(\mu_i, \sigma^2_i).   (11)

As the rewards at different locations are independent, for multiple source data centers we have

\sum_i R_i \sim N\left( \sum_i \mu_i, \sum_i \sigma^2_i \right).   (12)

Then we can calculate the probability that the total reward \sum_i R_i exceeds the total participation cost C (as per (3) or (4)) by a desired profit threshold Profit set by the data center operator, as

Prob\left( \sum_i R_i - C \ge Profit \right).   (13)

The data center operator decides whether or not to accept the signal based on the above probability. Once the offer is cleared by the RTO, the actual LMPs obtained, which are only known after the downtime, are fed back to (11) to tune \mu_i and \sigma_i.

9 A data center can estimate, based on the historical workload intensity, how many servers (Na) will stay active during the downtime, and how many servers (No) will be idle across the lead time and downtime. An example is demonstrated in [18].
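Under the normality assumptions (11)-(12), the commit probability (13) reduces to a standard normal tail evaluation. A minimal Python sketch, assuming the operator has already estimated the per-center reward means and standard deviations (the function name is ours):

```python
from math import erf, sqrt

def commit_probability(mus, sigmas, cost, profit):
    """Prob(sum_i R_i - C >= Profit) as per Eq. (13), with each R_i drawn
    independently from N(mu_i, sigma_i^2) as in Eqs. (11)-(12)."""
    mu_total = sum(mus)
    sigma_total = sqrt(sum(s * s for s in sigmas))
    # Standardize: P(sum R_i >= Profit + C) = 1 - Phi(z)
    z = (profit + cost - mu_total) / sigma_total
    phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return 1.0 - phi
```

For a single center with an estimated reward of N(332.5, 36.6^2), a migration cost of $7.4, and a $270 profit threshold (the numbers used in the first scenario of Section 4), this evaluates to roughly 0.93.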
Table 1: Distances (in km) between the seven data centers, measured with Google Maps.

Data Center          1    2    3    4    5    6    7
1 Philadelphia, PA   0    81   128  433  348  444  412
2 New Brunswick, NJ  81   0    44   349  315  449  464
3 New York, NY       128  44   0    305  314  466  496
4 Boston, MA         433  349  305  0    417  634  764
5 Syracuse, NY       348  315  314  417  0    218  428
6 Buffalo, NY        444  449  466  634  218  0    283
7 Pittsburgh, PA     412  464  496  764  428  283  0

Table 2: The PUE ratings of the various data centers.

Data Center  1    2    3    4    5    6    7
PUE          1.7  1.9  1.5  1.6  1.3  1.8  2.0

Table 3: The four types of servers housed within data centers.

Server Type               1    2     3    4
Frequency x Cores (GHz)   8    10.8  16   22.4
Memory (GB)               4    4     8    8
Power Consumption (Watt)  160  220   320  440
Migration Cost (Cent)     0.8  1.1   1.6  2.2

Table 4: Number of servers in each of the seven data centers.

Data Center  Servers by Type   Total Number
1            600, 800, 600     2000
2            800, 400          1200
3            1000, 500         1500
4            500, 500          1000
5            400, 250, 350     1000
6            300, 200          500
7            400, 200, 200     800

4. CASE STUDY
This section uses a case study to demonstrate some preliminary simulation results. We first describe the key simulation parameters, which reflect the situation in the real world, and then test scenarios wherein data centers participate in the DR program. As the goal of this paper is to validate the idea of data centers as DR resources, we only demonstrate the first of the three electricity contract cases discussed in Section 3, meaning the objective function used in our simulations is Equation (3).

Simulation parameters: Table 1 shows the seven data centers assumed in this case study. As discussed in Section 2, VM migration cannot happen between two locations more than 500 kilometers apart; when the distance is less than 500 kilometers, the migration time is captured by (10). After curve-fitting the experimental data reported in [16] on migration times, t_0 in (10) is set to 6.3 and a is set to 0.0525. The data centers also have different PUE ratings, as shown in Table 2, and each location has its own regional time-changing LMP values.
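With these fitted constants, Eq. (10) gives the per-GB migration time directly; multiplying by a VM's memory footprint yields its migration time, which can be checked against the published measurement. A small sketch (helper names are ours; the 500 km feasibility limit is from Section 2, and the second helper assumes migrations on a link proceed sequentially):

```python
def migration_seconds(mem_gb, distance_km, t0=6.3, a=0.0525):
    """Per-VM migration time via Eq. (10): t_ij = t0 + a*d_ij seconds
    per GB of memory, using the constants curve-fitted from [16]."""
    if distance_km > 500:
        raise ValueError("live migration infeasible beyond 500 km")
    return (t0 + a * distance_km) * mem_gb

def max_vms_within_lead_time(mem_gb, distance_km, lead_time_s):
    """How many such VMs fit in the DR lead time, cf. constraint (8)."""
    return int(lead_time_s // migration_seconds(mem_gb, distance_km))
```

An 8 GB VM over 100 km takes (6.3 + 0.0525 * 100) * 8 = 92.4 s, in the same ballpark as the roughly 80 s reported in [16]; a two-hour lead time then admits at most 77 such back-to-back migrations on that link.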
Our simulations use the four different server types assumed in Table 3, where each type possesses different CPU and memory capacity and consumes a different amount of power. The round-trip migration cost of a server is proportional to its processing capacity. Each data center houses hundreds of servers, with some mix involving all four server types, as assumed in Table 4.

One data center signaled at a time: In the first scenario, we test the migration coordination scheme when one data center is signaled by the RTO at a given time instant. Returning to Fig. 1, data center 1 receives an RTDR signal at hour 12 from PJM requesting a power curtailment of 250 kW from hour 15 to the end of hour 20.10 Based on their own historical workload intensity, the seven data centers independently and simultaneously estimate the number of servers that will stay active (Na) as well as idle (No) during this event, as shown in Table 5. Then, the controller in Section 3 finds an optimal migration action, which specifies the number of type-k servers to migrate from data center 1 to type-l servers in center j, as shown in Table 6, so that the projected power curtailment is 250.002 kW. Finally, using historical LMP values in Philadelphia, the controller computes the estimated reward R_1 \sim N(332.5, 36.6^2) and decides that the profit would exceed the desired threshold of $270 with a probability of 93.4% (the threshold is whatever the operator desires). If the operator decides to respond to PJM's signal, then data center 1 and the other data centers involved in the migration plan can specify which servers to migrate out and which servers to accommodate the incoming VMs, respectively.

10 PJM is the RTO responsible for the Philadelphia area, whereas the Boston area, considered in the next scenario, is managed by ISO New England.

Table 5: The estimated number of active and idle servers in the data centers.

Data Center  Estimated Active Servers  Estimated Idle Servers
1            361, 459, 376             239, 341, 224
2            535, 241                  265, 159
3            600, 389                  400, 111
4            375, 368                  125, 132
5            240, 174, 213             160, 76, 137
6            204, 125                  96, 75
7            228, 151, 127             172, 49, 73
Multiple data centers signaled at a time: The proposed method also works when multiple data centers receive RTO signals at the same time. In this scenario, we assume that data centers 1 and 4 both receive signals, requiring load curtailments of 200 and 100 kW, respectively. The controller decides an optimal migration policy, as shown in Table 7, and computes that the chance to earn more than $350 is 89.4%. The two scenarios discussed above are summarized in Table 8. It is possible that n data centers receive their own RTO signals at the same time but a solution is not found because the constraints (5) through (9) are not all satisfied. In this situation, the controller would move on to check whether n - 1 of the data centers can commit to the DR, and so on. Finally, if we assume an average profit of $300 per event and that the seven data centers fulfill a total of three events per day, then the annual profit would be $300 x 3 x 365 = $328,500.

5. RELATED WORK
Qureshi et al. [14] show how to exploit the volatility in the day-ahead electricity markets using a set of distributed data centers to reduce overall energy cost. The replicated data centers with high day-ahead prices are deactivated and the workload is rerouted to active ones with lower LMPs. The method will not work for session-based workload since it does not consider migration cost and time; more importantly, a DR program is not discussed. Abbasi et al. [1] propose a dynamic application hosting management scheme for distributed data centers to minimize the cumulative energy, server switching, and workload migration costs (for stateful workload) by determining the number of active servers and the workload share of each data center. However, the authors assume homogeneous servers in a data center with identical power consumption. Moreover, applications are assumed to be hosted within non-virtualized servers, making it practically infeasible to perform migration for stateful workload.
Also, instead of predicting the LMPs, they assume that electricity prices are known a priori, and they do not consider DR programs. Irwin et al. [8] present a staggered
Table 6: The number of servers migrated out from data center 1 to other data centers.

Destination Data Center  2          3    4      5    6    7
Destination Server Type  1 2 3 4    2 4  2 3 4  1 3  1 3  4
Source Type 1:  24 0 1 0 0 0 0 10 0 0 0 0 0 0 1
Source Type 2:  0 159 0 0 1 0 1 0 0 0 1 0 0 1
Source Type 3:  0 0 114 6 0 51 0 56 0 0 50 0 31 22

Table 7: The number of servers migrated out from data centers 1 and 4 to other data centers.

Destination Data Center  2        3      5    6    7
Destination Server Type  1 2 3 4  2 3 4  1 3  1 3  4
From center 1, type 1:  0 0 0 0 0 0 0 0 0 0 0 0 1
From center 1, type 2:  0 110 1 0 3 11 0 0 2 0 0 1
From center 1, type 3:  0 0 120 0 0 54 0 0 49 0 49
From center 4, type 2:  0 49 0 0 0 0 0 0 0 0 0 0
From center 4, type 4:  0 0 0 66 0 0 52 0 0 0 0 0

Table 8: Summary of the two scenarios discussed in the paper.

Scenario  Data    Real Power        Estimated Reward ($)  Migration  Desired Profit  Success
          Center  Curtailment (kW)  mu       sigma        Cost ($)   Threshold ($)   Probability
1         1       250.002           332.5    36.6         7.4        270             93.4%
2         1       200.016           263.3    28.9         5.8        350             89.4%
          4       100.320           136.3    14.9         3.1

blinking approach for storage systems within a data center where hard disks are turned on/off so that the power consumption is adjusted based on the time-of-use LMPs or power variations from renewable energy sources. However, the power consumption, as controlled by the storage system, is a form of self-scheduling and is not compatible with the latest FERC order, which stipulates that DR resources be dispatched by the utility [12]. Moreover, financial reward, the other benefit of DR, is not considered in [8]. To summarize, the essential difference is that the above-discussed papers aim to cut costs by reducing a chunk of power, while our idea is also to earn reward/profit with this chunk of power by integrating DR programs into data center operations.

6. DISCUSSION
We have described work aimed at developing an optimization framework that allows data centers to be treated as DR resources that can participate in wholesale energy markets and earn financial reward by reducing their consumption of electricity in response to price signals from utility companies.
Some simulation results involving data centers participating in economic DR programs in multiple RTO markets were presented to validate the framework. Our DR discussions have assumed that the minimum load curtailment that must be achieved by a single data center is in the 50-100 kW range. We expect these minimum curtailment levels to decrease as the installation of more sensitive and accurate metering systems becomes widespread in the future. Moreover, the recent FERC order allows for the creation of an aggregate of multiple data-center locations (with no limit on the number of locations) that can participate in DR programs as a single entity, as long as the curtailment level is greater than 100 kW and the locations fall under the same load-serving entity and electric distribution company. We are currently extending the proposed framework to accommodate such aggregations as well as the uncertainty and risk related to VM migrations over wide-area networks.

7. REFERENCES
[1] Z. Abbasi, T. Mukherjee, G. Varsamopoulos, and S. K. S. Gupta. DAHM: A green and dynamic web application hosting manager across geographically distributed data centers. ACM Journal on Emerging Technology, 2012.
[2] D. Bouley. Estimating a data center's electrical carbon footprint, April 2010.
[3] Cisco. Data center interconnect design guide for virtualized workload mobility with Cisco, NetApp and VMware, 2011.
[4] Cisco. Data center interconnect implementation guide for virtualized workload mobility with Cisco, EMC and VMware, 2011.
[5] DeepStorage. HyperIP enables vMotion over WAN, 2009.
[6] EPA. Report to Congress on server and data center energy efficiency, July 2007.
[7] F5. Deploying the BIG-IP v10.2 to enable long distance live migration with VMware vSphere vMotion, 2010.
[8] D. Irwin, N. Sharma, and P. Shenoy. Towards continuous policy-driven demand response in data centers. In ACM Workshop on Green Networking, Aug. 2011.
[9] S. G. Makridakis, S. C. Wheelwright, and R. J. Hyndman. Forecasting: Methods and Applications. Wiley, 1998.
[10] PJM. Markets & operations.
http://www.pjm.com/markets-and-operations.aspx.
[11] PJM. Introduction to PJM demand response, Oct. 2008.
[12] PJM. PJM demand side response overview, May 2012.
[13] PJM. Retail electricity consumer opportunities for demand response in PJM's wholesale markets, 2012.
[14] A. Qureshi. Plugging into energy market diversity. In 7th ACM Workshop on Hot Topics in Networks, Oct. 2008.
[15] VMware. VMware vSphere vMotion: 5.4 times faster than Hyper-V live migration, 2011.
[16] VMware and Cisco. Virtual machine mobility with VMware vMotion and Cisco data center interconnect technologies, 2009.
[17] R. Walawalkar, S. Fernands, N. Thakur, and K. R. Chevva. Evolution and current status of demand response (DR) in electricity markets: Insights from PJM and NYISO. Energy, 35(4):1553-1560, 2010.
[18] R. Wang and N. Kandasamy. On the design of decentralized control architectures for workload consolidation in large-scale server clusters. In Intl. Conf. on Autonomic Computing, 2012.