A Self-Organized, Fault-Tolerant and Scalable Replication Scheme for Cloud Storage


Nicolas Bonvin, Thanasis G. Papaioannou and Karl Aberer
School of Computer and Communication Sciences
École Polytechnique Fédérale de Lausanne (EPFL)
1015 Lausanne, Switzerland

ABSTRACT

Failures of any type are common in current datacenters, partly due to the higher scales of the data stored. As data scales up, its availability becomes more complex, while different availability levels per application or per data item may be required. In this paper, we propose a self-managed key-value store that dynamically allocates the resources of a data cloud to several applications in a cost-efficient and fair way. Our approach offers and dynamically maintains multiple differentiated availability guarantees to each different application despite failures. We employ a virtual economy, where each data partition (i.e. a key range in a consistent-hashing space) acts as an individual optimizer and chooses whether to migrate, replicate or remove itself based on net benefit maximization regarding the utility offered by the partition and its storage and maintenance cost. As proved by a game-theoretical model, no migrations or replications occur in the system at equilibrium, which is soon reached when the query load and the used storage are stable. Moreover, by means of extensive simulation experiments, we have proved that our approach dynamically finds the optimal resource allocation that balances the query processing overhead and satisfies the availability objectives in a cost-efficient way for different query rates and storage requirements. Finally, we have implemented a fully working prototype of our approach that clearly demonstrates its applicability in real settings.

Categories and Subject Descriptors

H.3.2 [Information storage and retrieval]: Information Storage; H.3.4 [Information storage and retrieval]: Systems and Software - Distributed systems; H.2.4 [Database Management]: Systems - Distributed databases; E.1 [Data Structures]: Distributed data structures; E.2 [Data Storage Representations]: Hash-table representations

General Terms

Reliability, Economics

Keywords

decentralized optimization, net benefit maximization, equilibrium, rational strategies

1. INTRODUCTION

Cloud storage is becoming a popular business paradigm, e.g. Amazon S3, ElephantDrive, Gigaspaces, etc. Small companies that offer large Web applications can avoid large capital expenditures in infrastructure by renting distributed storage and paying per use. The storage capacity employed may be large and it should be able to further scale up. However, as data scales up, hardware failures in current datacenters become more frequent [22], e.g. overheating, power (PDU) failures, rack failures, network failures, hard drive failures, network re-wiring and maintenance. Also, geographic proximity significantly affects data availability; e.g., in case of a PDU failure 500-1000 machines suddenly disappear, or in case of a rack failure 40-80 machines instantly go down. Furthermore, data may be lost due to natural disasters, such as tornadoes destroying a complete data center, or various attacks (DDoS, terrorism, etc.). On the other hand, as [7] suggests, Internet availability varies from 95% to 99.6%.
Also, the query rates for Web application data are highly irregular, e.g. the Slashdot effect, and an application may become temporarily unavailable. To this end, the support of service level agreements (SLAs) with data availability guarantees in cloud storage is very important. Moreover, in reality, different applications may have different availability requirements. Fault-tolerance is commonly dealt with by replication. Existing works usually rely on randomness to diversify the physical servers that host the data; e.g. in [23], [17] node IDs are randomly chosen, so that peers that are adjacent in the node ID space are geographically diverse with a high probability. To the best of our knowledge, no system explicitly addresses the geographical diversity of the replicas. Also, from the application perspective, geographically distributed cloud resources have to be efficiently utilized to minimize the renting costs associated to storage and communication. Clearly, geographical diversity of replica locations and communication cost are contradictory objectives. From the cloud provider perspective, efficient utilization of cloud resources is necessary both for cost-effectiveness and for accommodating load spikes. Moreover, resource utilization has to be adaptive to resource failures, addition of new resources, load variations and the distribution of client locations. The distributed key-value store is a widely employed service case of cloud storage. Many Web applications (e.g. Amazon) and many large-scale social applications (e.g. LinkedIn, Last.fm, etc.) use distributed key-value stores. Also, several research communities (e.g. peer-to-peer, scalable distributed data structures, databases) study key-value stores, even as complete database solutions (e.g. BerkeleyDB).

In this paper, we propose a scattered key-value store (referred to as "Skute"), which is designed to provide high and differentiated data availability statistical guarantees to multiple applications in a cost-efficient way in terms of rent price and query response times. A short four-page overview of this work has been described in [5]. Our approach combines the following innovative characteristics:

- It enables a computational economy for cloud storage resources.
- It provides differentiated availability statistical guarantees to different applications despite failures by geographical diversification of replicas.
- It applies a distributed economic model for the cost-efficient self-organization of data replicas in the cloud storage that is adaptive to adding new storage, to node failures and to client locations.
- It efficiently and fairly utilizes cloud resources by performing load balancing in the cloud adaptively to the query load.

Optimal replica placement is based on distributed net benefit maximization of query response throughput minus storage as well as communication costs, under the availability constraints. The optimality of the approach is proved by comparing simulation results to those expected by numerically solving an analytical form of the global optimization problem. Also, a game-theoretic model is employed to observe the properties of the approach at equilibrium. A series of simulation experiments prove the aforementioned characteristics of the approach. Finally, employing a fully working prototype of Skute, we experimentally demonstrate its applicability in real settings.

The rest of the paper is organized as follows: In Section 2, the scattered key-value data store is presented. In Section 3, the global optimization problem that we address is formulated. In Section 4, we describe the individual optimization algorithm that we employ to solve the problem in a decentralized way. In Section 5, we define a game-theoretical model of the proposed mechanism and study its equilibrium properties. In Section 6, we discuss the applicability of our approach in an untrustworthy environment. In Section 7, we present our simulation results on the effectiveness of the proposed approach. In Section 8, we describe the implementation of Skute and discuss our experimental results in a real testbed. In Section 9, we outline some related work. Finally, in Section 10, we conclude our work.

2. SKUTE: SCATTERED KEY-VALUE STORE

Skute is designed to provide low response time on read and write operations, to ensure replicas' geographical dispersion in a cost-efficient way and to offer differentiated availability guarantees per data item to multiple applications, while minimizing bandwidth and storage consumption. The application data owner rents resources from a cloud of federated servers to store its data. The cloud could be a single business, i.e. a company that owns/manages data server resources ("private clouds"), or a broker that represents servers that do not belong to the same businesses ("public clouds"). The number of data replicas and their placement are handled by a distributed optimization algorithm autonomously executed at the servers. Also, data replication is highly adaptive to the distribution of the query load among partitions and to failures of any kind, so as to maintain high data availability. Defining a new approach for maintaining data consistency among replicas is not among the objectives of this work. A potential solution could be to maintain eventual data consistency among replicas by vector-clock versioning, quorum consistency mechanisms and read-repair, as in [9].

2.1 Physical node

We assume that a physical node (i.e. a server) belongs to a rack, a room, a data center, a country and a continent. Note that finer geographical granularity could also be considered. Each physical node has a label of the form continent-country-datacenter-room-rack-server in order to precisely identify its geographical location. For example, a possible label for a server located in a data center in Berlin could be EU-DE-BE1-C12-R7-S.
2.2 Virtual node

Based on the findings of [12], Skute is built using a ring topology and a variant of consistent hashing [16]. Data is identified by a key (produced by a one-way cryptographic hash function, e.g. MD5) and its location is given by the hash function of this key. The key space is split into partitions. A physical node (i.e. a server) gets assigned to multiple points in the ring, called tokens. A virtual node (alternatively a partition) holds the data for the range of keys in (previous token, token], as in [9]. A virtual node may replicate or migrate its data to another server, or suicide (i.e. delete its data replica), according to a decision making process described in Section 4.4. A physical node hosts a varying amount of virtual nodes depending on the query load, the size of the data managed by the virtual nodes and its own capacity (i.e. CPU, RAM, disk space, etc.).

2.3 Virtual ring

Our approach employs the concept of multiple virtual rings on a single cloud in an innovative way (cf. Section 9 for a comparison with [28]). Thus, as subsequently explained, we allow multiple applications to share the same cloud infrastructure for offering differentiated per data item and per application availability guarantees without performance conflicts. The single-application case with one uniform availability guarantee has been presented in [4]. In the present work, each application uses its own virtual rings, while one ring per availability level is needed, as depicted in Figure 1. Each virtual ring consists of multiple virtual nodes that are responsible for different data partitions of the same application that demand a specific availability level. This approach provides the following advantages over existing key-value stores:

1. Multiple data availability levels per application. Within the same application, some data may be crucial and some may be less important. In other words, an application provider may want to store data with different availability guarantees. Other approaches, such as [9], also argue that they can support several applications by deploying a key-value store per application. However, as data placement for each data store would be independent in [9], an application could severely impact the performance of others that utilize the same resources. Unlike existing approaches, Skute allows a fine-grained control of the resources of each server, as every virtual node of each virtual ring acts as an individual optimizer (as described in Section 4.4), thus minimizing the impact among applications.

2. Geographical data placement per application. Data that is mostly accessed from a given geographical region should be moved close to that region. Without the concept of virtual rings, if multiple applications were using the same data store, data of different applications would have to be stored in the same partition, thus removing the ability to move data close to the clients. However, by employing multiple virtual rings, Skute is able to provide one virtual store per application, allowing the geographical optimization of data placement.
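To make the lookup described in Sections 2.2 and 2.3 concrete, the following minimal sketch (illustrative names and token values, not the actual Skute implementation) shows how a key could be mapped to a virtual node: the ring of the (application, availability level) pair is selected first, and the MD5 hash of the key is then matched against the token ranges of that ring.

```python
import bisect
import hashlib

class VirtualRing:
    """One ring per (application, availability level), as in Section 2.3."""

    def __init__(self, tokens):
        # Sorted ring positions; virtual node i holds the keys in (tokens[i-1], tokens[i]].
        self.tokens = sorted(tokens)

    def lookup(self, key):
        """Return the index of the virtual node (partition) responsible for key."""
        pos = int(hashlib.md5(key.encode()).hexdigest(), 16)   # position on the 128-bit ring
        i = bisect.bisect_left(self.tokens, pos)
        return i % len(self.tokens)                            # wrap around past the last token

# One virtual store per application and availability level (illustrative tokens):
rings = {("app-A", 1): VirtualRing([2**120, 2**125, 2**127])}
partition = rings[("app-A", 1)].lookup("user:42")
```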

Figure 1: Three applications with different availability levels.

2.4 Routing

As Skute is intended to be used with real-time applications, a query should not be routed through multiple servers before reaching its destination. Routing has to be efficient, therefore every server should have enough information in its routing table to route a query directly to its final destination. Skute could be seen as an O(1) DHT, similarly to [9]. Each virtual ring has its own routing entries, resulting in potentially large routing tables. Hence, the number of entries in the routing table is:

$entries = \sum_{j}^{apps} \sum_{i}^{levels} partition(i, j)$   (1)

where partition(i, j) returns the number of partitions (i.e. virtual nodes) of the virtual ring of availability level i belonging to application j. However, the memory space requirement of the routing table is quite reasonable; e.g. for 10 applications, each with 3 availability levels and 50K data partitions, the necessary memory space would be 31.5MB, assuming that each entry consists of 22 bytes (2 bytes for the application id, 1 byte for the availability level, 3 bytes for the partition id and 16 bytes for the sequence of server ids that host the partition). A physical node is responsible for managing the routing table of all virtual rings hosted on it, in order to minimize the update costs. Upon migration, replication and suicide events, a hierarchical broadcast that leverages the geographical topology of servers is employed. This approach costs O(N), but it uses the minimum network spanning tree. The position of a moving virtual node (i.e. during the migration process) is tracked by forwarding pointers (e.g. SSP chains [25]). Also, the routing table is periodically updated using a gossiping protocol for shortening/repairing chains or updating stale entries (e.g. due to failures). According to this protocol, a server exchanges with log(N) random other servers the routing entries of the virtual nodes that they are responsible for. Moreover, as explained in Section 5 and experimentally proved in Section 7, no routing table updates are expected at equilibrium with stable system conditions regarding the query load and the number of servers. Even if a routing table contains a large number of entries, its practical maintenance is not costly thanks to the stability of the system. The scalability of this approach is experimentally assessed in a real testbed, as described in Section 8.
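As a quick sanity check of the memory estimate above, assuming the 22-byte entry layout described in the text:

```python
apps, levels, partitions = 10, 3, 50_000
entry_bytes = 2 + 1 + 3 + 16        # application id, availability level, partition id, server ids
total = apps * levels * partitions * entry_bytes
print(total / 2**20)                # ~31.5 MB
```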
3. THE PROBLEM DEFINITION

The data belonging to an application is split into M partitions, where each partition has r distributed replicas. We assume that N servers are present in the data cloud.

3.1 Maximize data availability

The first objective of a data owner d (i.e. application provider) is to provide the highest availability for a partition i, by placing all of its replicas in a set $S_i^d$ of different servers. Data availability generally increases with the geographical diversity of the selected servers. Obviously, the worst solution in terms of data availability would be to put all replicas at a server with equal or worse probability of failure than others. We denote as $F_j$ a failure event at server $j \in S_i^d$. These events may be independent from each other or correlated. If we assume without loss of generality that events $F_1 \ldots F_k$ are independent and that events $F_{k+1} \ldots F_{|S_i^d|}$ are correlated, then the probability of a partition i to be unavailable is given as follows:

$\Pr(i \text{ unavailable}) = \Pr(F_1 \cap F_2 \cap \ldots \cap F_{|S_i^d|}) = \prod_{j=1}^{k} \Pr(F_j) \cdot \Pr(F_{k+1} \mid F_{k+2} \cap \ldots \cap F_{|S_i^d|}) \cdots \Pr(F_{|S_i^d|})$   (2)

if $F_{k+1} \cap F_{k+2} \cap \ldots \cap F_{|S_i^d|} \neq \emptyset$.

3.2 Minimize communication cost

While geographical diversity increases availability, it is also important to take into account the communication cost among servers that host different replicas, in order to save bandwidth during replication or migration, and to reduce latency in data accesses and during conflict resolution for maintaining data consistency. Let $\tilde{L}^d$ be a M x N location matrix with its element $L_{ij}^d = 1$ if a replica of partition i of application d is stored at server j and $L_{ij}^d = 0$ otherwise. Then, we maximize data proximity by minimizing network costs for each partition, e.g. the total communication cost for conflict resolution of replicas for the mesh network of servers where the replicas of the partition are stored. In this case, the network cost $c_n$ for conflict resolution of the replicas of a partition i of application d can be given by

$c_n(\tilde{L}_i^d) = \mathrm{sum}(\tilde{L}_i^d \cdot \widetilde{NC} \cdot \tilde{L}_i^{dT})$ ,   (3)

where $\widetilde{NC}$ is a strictly upper triangular N x N matrix whose element $NC_{jk}$ is the communication cost between servers j and k, and sum denotes the sum of the matrix elements.

3.3 Maximize net benefit

Every application provider has to periodically pay the operational cost of each server where he stores replicas of his data partitions. The operational cost of a server is mainly influenced by the quality of the hardware, its physical hosting, the access bandwidth allocated to the server, its storage, and its query processing and communication overhead. The data owner wants to minimize his expenses by replacing expensive servers with cheaper ones, while maintaining a certain minimum data availability promised by SLAs to his clients. He also obtains some utility u(.) from the queries answered by his data replicas that depends on the popularity (i.e. query load) $pop_i$ of the data contained in the replica of the partition i and the response time (i.e. processing and network latency) associated to the replies. The network latency depends on the distance of the clients from the server that hosts the data, i.e. the geographical distribution $G_i$ of query clients.
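For the fully independent case, the unavailability formula (2) reduces to a product of failure probabilities. A small numeric illustration (the failure probability is illustrative, cf. the 0.3 used in Section 7.2):

```python
from math import prod

def unavailability(fail_probs):
    """Pr(partition unavailable) when all hosting servers fail independently."""
    return prod(fail_probs)

# Three replicas on independent servers, each failing with probability 0.3:
print(1 - unavailability([0.3, 0.3, 0.3]))   # availability = 0.973
```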

Overall, he seeks to maximize his net benefit, and the global optimization problem can be formulated as follows:

$\max \{ u(pop_i, G_i) - (\tilde{L}_i^d \cdot \tilde{c}^T + c_n(\tilde{L}_i^d)) \}$, $\forall i, \forall d$, s.t. $1 - \Pr(F_1^{L_{i1}^d} \cap F_2^{L_{i2}^d} \cap \ldots \cap F_N^{L_{iN}^d}) \geq th^d$   (4)

where $\tilde{c}$ is the vector of operational costs of the servers, with its element $c_j$ being an increasing function of the data replicas of the various applications located at server j. This also accounts for the fact that the processing latency of a server is expected to increase with the occupancy of its resources. $F_j^0$ for a particular partition i denotes that the partition is not present at server j and thus the corresponding failure event at this server is excluded from the intersection, and $th^d$ is a certain minimum availability threshold promised by the application provider d to his clients. This constrained global optimization problem takes $2^{M \cdot N}$ possible solutions for every application and it is feasible only for small sets of servers and partitions.

4. THE INDIVIDUAL OPTIMIZATION

The data owner rents storage space located in several data centers around the world and pays a monthly usage-based real rent. Each virtual node is responsible for the data in its key range and should always try to keep data availability above a certain minimum level required by the application while minimizing the associated costs (i.e. for data hosting and maintenance). To this end, a virtual node can be assumed to act as an autonomous agent on behalf of the data owner to achieve these goals. Time is assumed to be split into epochs. A virtual node may replicate or migrate its data to another server, or suicide (i.e. delete its data replica), at each epoch and pay a virtual rent (i.e. an approximation of the possible real rent, defined later in this section) to the servers where its data are stored. These decisions are made based on the query rate for the data of the virtual node, the renting costs and the maintenance of high availability upon failures. There is no global coordination and each virtual node behaves independently. Only one virtual node of the same partition is allowed to suicide at the same epoch, by employing the Paxos [18] distributed consensus algorithm among virtual nodes of the same partition. The virtual rent of each server is announced at a board and is updated at the beginning of a new epoch.

4.1 Board

At each epoch, the virtual nodes need to know the virtual rent price of the servers. One server in the network is elected (i.e. by a distributed leader election protocol) to store the current virtual rent per epoch of each server. The election is performed at startup and repeated whenever the elected server is not reachable by the majority. Servers communicate to the central board only their updated virtual prices. This centralized approach achieves common knowledge for all virtual nodes in decision making (cf. the algorithm of Section 4.4), but: i) it assumes trustworthiness of the elected server, ii) the elected server may become a bottleneck. An alternative approach would be that each server maintains its own local board and periodically updates the virtual prices of a random subset (log(N)) of servers by contacting them directly (i.e. gossiping), having as N the total number of servers. This approach does not have the aforementioned problems, but decision making of virtual nodes is based on potentially outdated information on the virtual rents. This fully decentralized architecture has been experimentally verified in a real testbed to be very efficient without creating high communication overhead (cf. Section 8).
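A minimal sketch of the decentralized board is given below (the structure is hypothetical; the real protocol also gossips routing entries, cf. Section 8). Each server keeps a local price table and refreshes log(N) randomly chosen entries per epoch:

```python
import math
import random

def gossip_round(local_board, servers, fetch_price):
    """Refresh the local view of the virtual rents by contacting log(N)
    random servers; fetch_price(server) stands in for the remote call."""
    k = max(1, int(math.log(len(servers))))
    for server in random.sample(servers, k):
        local_board[server] = fetch_price(server)
    return local_board
```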
When a new server is added to the network, the data owner estimates its confidence based on its hardware components and its location. This estimation depends on technical factors (e.g. redundancy, security, etc.) as well as non-technical ones (e.g. political and economical stability of the country, etc.) and it is rather subjective. Confidence values of servers are stored at the board(s) in a trustworthy setting, while they can be stored at each virtual node in case trustworthiness is an issue. Note that asking for detailed information on the server location is already done by large companies that rent dedicated servers, e.g. theplanet.com. The potential insincerity of the server about its location could be conveyed in its confidence value based on its offered availability and performance.

4.2 Physical node

The virtual rent price c of a physical node for the next epoch is an increasing function of its query load and its storage usage at the current epoch and it can be given by:

$c = up \cdot (storage\_usage + query\_load)$ ,   (5)

where up is the marginal usage price of the server, which can be calculated by the total monthly real rent paid by virtual nodes and the mean usage of the server in the previous month. We consider that the real rent price per server takes into account the network cost for communicating with the server, i.e. its access link. To this end, it is assumed that access links are going to be the bottleneck ones along the path that connects any pair of servers, and thus we do not explicitly take into account the distance between servers. Multiplying the real rent price with the query load satisfies the network proximity objective. The query load and the storage usage at the current epoch are considered to be good approximations of the ones at the next epoch, as they are not expected to change very often at very small time scales, such as a time epoch. The virtual rent price per epoch is an approximation of the real monthly price that is paid by the application provider for storing the data of a virtual node. Thus, an expensive server tends to be also expensive in the virtual economy. A server agent residing at the server calculates its virtual rent price per epoch and updates the board.

4.3 Maintaining availability

A virtual node always tries to keep the data availability above a minimum level th (i.e. the availability level of the corresponding virtual ring), as specified in Section 3. As estimating the probabilities of each server to fail necessitates access to an enormous set of historical data and private information of the server, we approximate the potential availability of a partition (i.e. virtual node) by means of the geographical diversity of the servers that host its replicas. Therefore, the availability of a partition is defined as the sum of the diversity of each distinct pair of servers, i.e.:

$avail = \sum_{i=1}^{|S|} \sum_{j=i+1}^{|S|} conf_i \cdot conf_j \cdot diversity(s_i, s_j)$   (6)

where $S = (s_1, s_2, \ldots, s_n)$ is the set of servers hosting replicas of the virtual node and $conf_i, conf_j \in [0, 1]$ are the confidence levels of servers i, j. The diversity function returns a number calculated based on the geographical distance among each server pair. This distance is represented as a 6-bit number, each bit corresponding to the location parts of a server, namely continent, country, data center, room, rack and server. Note that more bits would be required to represent additional geographical location parts than those considered. The most significant bit (leftmost) represents the continent, while the least significant bit (rightmost) represents the server. Starting with the most significant bit, the location parts of both servers are compared one by one to compute their similarity: if the location parts are equivalent, the corresponding bit is set to 1, otherwise to 0. Once a bit has been set to 0, all less significant bits are also set to 0. For example, two servers belonging to the same data center but located in different rooms cannot be in the same rack, thereby all bits after the third bit (data center) have to be 0. The similarity number would then be 111000 (continent, country, data center, room, rack, server). A binary NOT operation is then applied to the similarity to get the diversity value: NOT 111000 = 000111 = 7 (decimal). The diversity values of server pairs are summed up, because having more replicas in distinct servers located even in the same location always results in increased availability.
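The similarity/diversity computation described above can be sketched as follows (location labels as in Section 2.1; the function name is illustrative):

```python
def diversity(loc_a, loc_b):
    """6-bit diversity of two 'continent-country-datacenter-room-rack-server' labels.

    Example from the text: same data center but different rooms gives
    similarity 111000, hence diversity NOT 111000 = 000111 = 7.
    """
    similarity, matching = 0, True
    for a, b in zip(loc_a.split("-"), loc_b.split("-")):
        matching = matching and (a == b)       # once a part differs, all finer bits are 0
        similarity = (similarity << 1) | int(matching)
    return ~similarity & 0b111111              # binary NOT on 6 bits

assert diversity("EU-DE-BE1-C12-R7-S1", "EU-DE-BE1-C13-R2-S5") == 7
```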

When the availability of a virtual node falls below th, it replicates its data to a new server. Note that a virtual node can know the locations of the replicas of its partition from the routing table of its hosting server, and can thus calculate its availability according to (6). The best candidate server is selected so as to maximize the net benefit between the diversity of the resulting set of replica locations for the virtual node and the virtual rent of the new server. Also, a preference weight is associated to servers according to their location proximity to the geographical distribution G of query clients for the partition. G is approximated by the virtual node by storing the number of client queries per location. Thus, the availability is increased as much as possible at the minimum cost, while the network latency for the query reply is decreased. Specifically, a virtual node with current replica locations in S maximizes:

$\arg\max_j \left( \sum_{k=1}^{|S|} g_j \cdot conf_j \cdot diversity(s_k, s_j) - c_j \right)$ ,   (7)

where $c_j$ is the virtual rent price of candidate server j and $g_j$ is a weight related to the proximity (i.e. inverse average diversity) of the server location to the geographical distribution of query clients for the partition of a virtual node, given by:

$g_j = \frac{\sum_l q_l}{1 + \sum_l q_l \cdot diversity(l, s_j)}$ ,   (8)

where $q_l$ is the number of queries for the partition of the virtual node per client location l. To this end, we assume that the client locations are encoded similarly to those of servers. In fact, if client requests reach the cloud through the geographically nearest cloud node to the client (e.g. by employing GeoDNS), we can take the location of this cloud node as the client location. However, having virtual nodes choose the destination server j for replication according to (7) would render j a bottleneck for the next epoch. Instead, the destination server is randomly chosen among the top-k ones that are ranked according to the maximized quantity in (7). The minimum availability level th allows a fine-grained control over the replication process. A low value means that a partition will be replicated on few servers, potentially geographically close, whereas a higher value enforces many replicas to be located at dispersed locations. However, setting a high value for the minimum level of availability in a network with a few servers can result in an undesirable situation, where all partitions are replicated everywhere. To circumvent this, a maximum number of replicas per virtual node is allowed.
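A sketch of the candidate ranking of formula (7) follows; the names are hypothetical, diversity() refers to the sketch above, and conf, rent and g would be obtained from the board and the routing table:

```python
import random

def choose_replica_target(hosts, candidates, conf, rent, g, top_k=3):
    """Rank candidate servers with formula (7) and pick one of the top-k at
    random, so that the single best server does not become a hotspot.

    hosts and candidates are location labels; conf, rent and g are dicts
    keyed by label; diversity() is the sketch above.
    """
    def score(j):
        return sum(g[j] * conf[j] * diversity(s, j) for s in hosts) - rent[j]

    ranked = sorted(candidates, key=score, reverse=True)
    return random.choice(ranked[:top_k])
```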
4.4 Virtual node decision tree

As already mentioned, a virtual node agent may decide to replicate, migrate, suicide or do nothing with its data at the end of an epoch. Note that decision making of virtual nodes does not need to be synchronized. Upon replication, a new virtual node is associated with the replicated data. The decision tree of a virtual node is depicted in Figure 2.

Figure 2: Decision tree of the virtual node agent.

First, it verifies that the current availability of its partition is greater than th. If the minimum acceptable availability is not reached, the virtual node replicates its data to the server that maximizes availability at the minimum cost, as described in Subsection 4.3. If the availability is satisfactory, the virtual node agent tries to minimize costs. During an epoch, virtual nodes receive queries, process them and send the replies back to the client. Each query creates a utility value for the virtual node, which can be assumed to be proportional to the size of the query reply and inversely proportional to the average distance of the client locations from the server of the virtual node. For this purpose, the balance (i.e. net benefit) b for a virtual node is defined as follows:

$b = u(pop, g) - c$ ,   (9)

where u(pop, g) is assumed to be the epoch query load of the partition with a certain popularity pop, divided by the proximity g (as defined in (8)) of the virtual node to the client locations and normalized to monetary units, and c is the virtual rent price. To this end, a virtual node decides to:

migrate or suicide: if it has had a negative balance for the last f epochs. First, the virtual node calculates the availability of its partition without its own replica. If the availability is satisfactory, the virtual node suicides, i.e. deletes its replica. Otherwise, the virtual node tries to find a less expensive (i.e. less busy) server that is closer to the client locations (according to maximization formula (7)). To avoid a data replica oscillation among servers, the migration is only allowed if the following migration conditions apply: the minimum availability is still satisfied using the new server, the absolute price difference between the current and the new server is greater than a threshold, the current server storage usage is above a storage soft limit, typically 70% of the hard drive capacity, and the new server is below that limit.

replicate: if it has had a positive balance for the last f epochs, it may replicate. For replication, a virtual node also has to verify that:

- It can afford the replication by having a positive balance b for f consecutive epochs:

$b = u(pop, g) - c_n - 1.2 \cdot c > 0$ ,

where $c_n$ is a term representing the consistency (i.e. network) cost, which can be approximated as the number of replicas of the partition times a fixed average communication cost parameter for conflict resolution and routing table maintenance, and c is the current virtual rent of the candidate server for replication (randomly selected among the top-k ones ranked according to formula (7)), while the factor 1.2 accounts for an upper-bounded 20% increase of this rent price at the next epoch due to the potentially increased occupied storage and query load of the candidate server. This action aims to distribute the load of the current server towards one located closer to the clients. Thus, it tends to decrease the processing and network latency of the queries for the partition.

- The average bandwidth consumption for answering queries per replica after replication (left term of the left side of inequality (10)) plus the size $p_s$ of the partition is less than the respective bandwidth per replica without replication (right side of inequality (10)) for a fixed number win of epochs, to compensate for steep changes of the query rate. A large win value should be used for bursty query load. Specifically, the virtual node replicates if:

$win \cdot \frac{\bar{q} \cdot \bar{q_s}}{|S| + 1} + p_s < win \cdot \frac{\bar{q} \cdot \bar{q_s}}{|S|}$ ,   (10)

where $\bar{q}$ is the average number of queries for the last win epochs, $\bar{q_s}$ is the average size of the replies, |S| is the number of servers currently hosting replicas of the partition and $p_s$ is the size of the partition.
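Putting the conditions above together, the per-epoch decision of Figure 2 can be summarized by the following simplified sketch (the virtual node interface is hypothetical; f, the minimum availability and the migration/replication conditions are as defined in the text):

```python
def epoch_decision(vnode):
    """Simplified per-epoch decision of a virtual node agent (cf. Figure 2)."""
    if vnode.availability() < vnode.min_availability:
        return "replicate"                      # restore availability first (Section 4.3)
    recent = vnode.balances[-vnode.f:]          # balances b of the last f epochs
    if all(b < 0 for b in recent):
        if vnode.availability(without_self=True) >= vnode.min_availability:
            return "suicide"                    # the replica is no longer needed
        if vnode.migration_conditions_hold():
            return "migrate"                    # move towards a cheaper, closer server
        return "stay"
    if all(b > 0 for b in recent) and vnode.replication_pays_off():
        return "replicate"                      # spread a popular partition
    return "stay"
```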

At the end of a time epoch, the virtual node agent sets the lowest utility value u(pop, g) to the current lowest virtual rent price of the servers, to prevent unpopular nodes from migrating indefinitely. A virtual node that either gets a large number of queries or has to provide large query replies becomes wealthier. At the same time, the load of its hosting server will increase, as well as its virtual rent price for the next epoch. Popular virtual nodes on the server will have enough "money" to pay the growing rent price, as opposed to unpopular ones that will have to move to a cheaper server. The transfer of unpopular virtual nodes will in turn decrease the virtual rent price, hence stabilizing the rent price of the server. This approach is self-adaptive and balances the query load by replicating popular virtual nodes.

5. EQUILIBRIUM ANALYSIS

First, we find the expected payoffs of the actions of a virtual node, as described in our mechanism. Assume that M is the original number of partitions (i.e. virtual nodes) in the system. These virtual nodes may belong to the same or to different applications and compete with each other. Time is assumed to be slotted in rounds. At each round, a virtual node i (which is either responsible for an original partition or a replica of the partition) is able to migrate, replicate, suicide (i.e. delete itself) or do nothing (i.e. stay), competitively to other virtual nodes, at a repeated game. The expected single-round strategy payoffs at round t + 1 by the various actions made at round t of the game for a virtual node i are given by:

Migrate: $EV_M = \frac{u_i^{(t)}}{r_i^{(t)}} - f_c - f_d \cdot r_i^{(t)} - C_c^{(t+1)}$

Replicate: $EV_R = \frac{u_i^{(t)} + a_i^{(t)}}{r_i^{(t)} + 1} - f_c - f_d \cdot (r_i^{(t)} + 1) - \frac{1}{2}\left(C_c^{(t+1)} + C_e^{(t+1)}\right)$

Suicide: $EV_D = 0$

Stay: $EV_S = \frac{u_i^{(t)}}{r_i^{(t)}} - f_d \cdot r_i^{(t)} - C_e^{(t+1)}$

$u_i^{(t)}$ is the utility gained by the queries served by the partition for which virtual node i is responsible and only depends on its popularity at round t; for simplicity and without loss of generality, we assume that clients are geographically uniformly distributed. To this end, a virtual node expects that this popularity will be maintained at the next round of the game. $r_i^{(t)}$ is the number of replicas of virtual node i at round t. $C_c^{(t+1)}$ is the expected price at round t + 1 of the cheapest server at round t. Note that it is a dominant strategy for each virtual node to select the cheapest server to migrate or replicate to, as any other choice could be exploited by competitive rational virtual nodes. $f_d$ is the mean communication cost per replica of virtual node i for data consistency, $f_c$ is the communication cost for migrating virtual node i, $a_i^{(t)}$ is the utility gain due to the increased availability of virtual node i when a new replica is created, and $C_e^{(t+1)}$ is the expected price at round t + 1 of the current hosting server at round t; therefore $C_e^{(t)} > C_c^{(t)}$.
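A direct transcription of the four payoffs (an illustrative sketch; the parameter names follow the notation above):

```python
def payoffs(u, r, a, f_c, f_d, C_c, C_e):
    """Single-round expected payoffs of the four actions of a virtual node."""
    return {
        "migrate":   u / r - f_c - f_d * r - C_c,
        "replicate": (u + a) / (r + 1) - f_c - f_d * (r + 1) - 0.5 * (C_c + C_e),
        "suicide":   0.0,
        "stay":      u / r - f_d * r - C_e,
    }
```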
In case of replication, two virtual nodes will henceforth exist in the system, having equal expected utilities, but with the old one paying $C_e^{(t+1)}$ and the new one paying $C_c^{(t+1)}$. In the aforementioned formula of $EV_R$, we calculate the expected payoff per copy of the virtual node after replication. Notice that $EV_R$ is expected to be initially significantly above 0, as the initial utility gain a from availability should be large in order for the necessary replicas to be created. Also, if the virtual price difference among servers is initially significant, then $EV_M - EV_S$ will frequently be positive and virtual nodes will migrate towards cheaper servers. As the number of replicas increases, a decreases (and eventually becomes a small constant close to 0 after the required availability is reached) and thus $EV_R$ decreases. Also, as the price difference in the system is gradually balanced, the difference $EV_M - EV_S$ becomes more frequently negative, so fewer migrations happen. On the other hand, if the popularity (i.e. query load) of a virtual node is significantly deteriorated (i.e. u decreases), while its minimum availability is satisfied (i.e. a is close to 0), then it may become preferable for a virtual node to commit suicide.

Next, we consider the system at equilibrium, assuming that the system conditions, namely the popularity of virtual nodes and the number of servers, remain stable. If we assume that each virtual node plays a mixed strategy among its pure strategies, specifically that it plays migrate, replicate, suicide and stay with probabilities x, y, z and 1 - x - y - z respectively, then we calculate $C_c^{(t+1)}$, $C_e^{(t+1)}$ as follows:

$C_c^{(t+1)} = C_c^{(t)} \left[ 1 + \alpha (x + y) \sum_{i=1}^{M} r_i^{(t)} \right]$   (11)

$C_e^{(t+1)} = C_e^{(t)} \left[ 1 - \beta (x + z + \gamma y) \right]$   (12)

In equation (11), we assume that the price of the cheapest server at the next time slot increases linearly with the number of replicas that are expected to have migrated or replicated to that server until the next time slot. Also, in equation (12), we assume that the expected price of the current server at the next time slot decreases linearly with the fraction of replicas that are expected to abandon this server or replicate until the next time slot. $0 < \gamma \ll 1$ is explained as follows: Recall that the total number of queries for a partition is divided by the total number of replicas of that partition, and thus replication also reduces the rent price of the current server. However, the storage cost for hosting the virtual node remains and, as the replicas of the virtual node in the system increase, it becomes the dominant cost factor of the rent price of the current server. Therefore, replication only contributes to $C_e^{(t+1)}$ in a limited way, as shown in equation (12). Note that any cost function (e.g. a convex one, as storage is a constrained resource) could be used in our equilibrium analysis, as long as it is increasing with the number of replicas, which is a safe assumption. Henceforth, for simplicity, we drop indices as we deal only with one virtual node. Recall that the term a becomes close to 0 at equilibrium. Then, the replicate strategy is dominated by the migrate one, and thus y = 0. Also, the suicide strategy has to be eventually dominated by the migrate and stay strategies, because otherwise every virtual node would have the incentive to leave the system; thus z = 0. Therefore, the number r of replicas of a virtual node becomes fixed at equilibrium and the total sum $N_r$ of the replicas of all virtual nodes in the cloud is also fixed. As y = z = 0 at equilibrium, the virtual node plays a mixed strategy among migrate and stay with probabilities x and 1 - x respectively. The expected payoffs of these strategies should be equal at equilibrium, as the virtual node should be indifferent between them:

$EV_M = EV_S \Leftrightarrow \frac{u}{r} - f_c - f_d r - C_c (1 + \alpha x N_r) = \frac{u}{r} - f_d r - C_e (1 - \beta x) \Leftrightarrow x = \frac{C_e - C_c - f_c}{\beta C_e + \alpha C_c N_r}$   (13)

The numerator of x says that, in order for any migrations to happen in the system at equilibrium, the rent of the current server used by a virtual node should exceed the rent of the cheapest server by more than the cost of migration for this virtual node. Also, the probability to migrate decreases with the total number of replicas in the system. Considering that each migration decreases the average virtual price difference in the system, the number of migrations at equilibrium will be almost 0.
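The equilibrium migration probability of equation (13) can be evaluated directly; an illustrative sketch with made-up values shows that no migrations occur when the price gap does not exceed the migration cost:

```python
def migration_probability(C_e, C_c, f_c, alpha, beta, N_r):
    """x from equation (13); a non-positive numerator means no migrations."""
    x = (C_e - C_c - f_c) / (beta * C_e + alpha * C_c * N_r)
    return max(0.0, x)

# The price gap (100 - 98) does not exceed the migration cost 5, so x = 0:
print(migration_probability(C_e=100, C_c=98, f_c=5, alpha=0.01, beta=0.1, N_r=1000))
```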
6. RATIONAL STRATEGIES

We have already accounted for the case that virtual nodes are rational, as we have considered them to be individual optimizers. In this section, we consider the rational strategies that could be employed by servers in an untrustworthy environment. No malicious strategies are considered, such as tampering with data, deliberate data destruction or theft, because standard cryptographic methods (e.g. digital signatures, digital hashes, symmetric encryption keys) could easily alleviate them (at a performance cost) and the servers would face legal consequences if discovered employing them. Such cryptographic methods should be employed in a real untrustworthy environment, but we refrain from further dealing with them in this paper. However, rational servers could have the incentive to lie about their virtual prices, so that they do not reflect the actual usage of their storage and bandwidth resources. For example, a server may overutilize its bandwidth resources by advertising a lower virtual price (or equivalently a lower bandwidth utilization) than the true one and increase its profits by being paid by more virtual nodes. At this point, recall that the application provider pays a monthly rent per virtual node to each server that hosts its virtual nodes. In case of server overutilization, some queries to the virtual nodes of the server would have to be buffered or even dropped by the server. Also, one may argue that a server can increase its marginal usage price at will in this environment, which is then used to calculate the monthly rent of a virtual node. This is partly true, despite competition among servers, as the total actual resource usage of a server per month cannot be easily estimated by individual application providers. The aforementioned rational strategies could be tackled as follows: In Section 4, we assumed that virtual nodes assign to servers a subjective confidence value based on the quality of the resources of the servers and their location. In an untrustworthy environment, the confidence value of a server could also reflect its trustworthiness in reporting its utilization correctly. This trustworthiness value could be effectively approximated by the application provider by means of reputation, based on periodical monitoring of the performance of servers to own queries. The aforementioned rational strategies are common in everyday transactions among sellers and buyers, but in a competitive environment, comparing servers based on their prices and their offered performance provides them with the right incentives for truthful reporting [10]. Therefore, in a cloud with rational servers, application providers should divide $c_j$ by the confidence $conf_j$ of the server j in the maximization formula (7), in order to provide incentives to servers to refrain from employing the aforementioned strategies.

7. SIMULATION RESULTS

7.1 The simulation model

We assume a simulated cloud storage environment consisting of N servers geographically distributed according to different scenarios that are explained on a per case basis. Data per application is assumed to be split into M partitions, each represented by a virtual node. Each server has fixed bandwidth capacities for replication and migration per epoch. They also have a fixed bandwidth capacity for serving queries and a fixed storage capacity. All servers are assumed to be assigned the same confidence. The popularity of the virtual nodes (i.e. the query rate) is distributed according to Pareto(1, 5). The number of queries per epoch is Poisson distributed with a mean rate, which is different per experiment. For facilitating the comparison of the simulation results with those of the analytical model of Section 3, the geographical distribution of query clients is assumed to be uniform and thus $g_j$ is 1 for any server j. The size of every data partition is assumed to be fixed and equal to 256MB. Time is considered to be slotted into epochs. At each epoch, virtual nodes employ the decision making algorithm of Subsection 4.4.

Note that decision making of virtual nodes is not synchronized. Each server updates its available bandwidth for migration, replication or answering queries, and its available storage, after every data transfer that is decided to happen within one epoch. Previous data migrations and replications are taken into account in the next epoch. The virtual price per server is determined according to formula (5) at the beginning of each epoch.

7.2 Convergence to equilibrium and optimal solution

We first consider a small scale scenario to validate our results by numerically solving the optimization problem of Section 3. Specifically, we consider a data cloud consisting of N = 5 servers dispersed in Europe: two servers are hosted in Switzerland in separate data centers, one in France, and two servers are hosted in Germany in the same rack of the same data center. Data belongs to two applications and it is split into M = 5 partitions per application that are randomly shared among servers at startup. The mean query rate is λ = 3000 queries per epoch. The minimum availability level in the simulation model is configured so as to ensure that each partition of the first (resp. second) application is hosted by at least 2 (resp. 4) servers located at different data centers. In the analytical model of Section 3, we assume that each server has probability 0.3 to fail and that the failure probabilities of the first 3 servers are independent, while those of the Germany data centers are correlated, so that $\Pr[F_4 | F_5] = \Pr[F_5 | F_4] = 0.5$. We set $th_1 = 0.9$ for the first application and $th_2 = 0.985$ for the second application. Only network-related operational costs (i.e. access links) are considered the dominant factor for the communication cost and thus the distance of servers is not taken into account in decision making; therefore we assume $c_n = 0$, in both the simulation and the analytical model. The same confidence is assigned to all servers in the simulation model. The monthly operational cost c of each server is assumed to be 100$. Also, as the geographical distribution of query clients is assumed to be uniform, the utility function in the analytical model only depends on the popularity pop of the virtual node and is taken equal to 1 · pop. The detailed parameters of this experiment are shown in the left column (small scale) of Table 1.

Table 1: Parameters of small-scale and large-scale experiments.

Parameter | Small scale | Large scale
Servers | 5 | 200
Server storage | 10 GB | 10 GB
Server price | 100$ | 100$ (70%), 125$ (30%)
Total data | 10 GB | 1000 GB
Average size of an item | 5 KB | 5 KB
Partitions | 5 | 1000
Queries per epoch | Poisson (λ = 3000) | Poisson (λ = 3000)
Query key distribution | Pareto (1, 5) | Pareto (1, 5)
Storage soft limit | 0.7 | 0.7
win | 2 | 10
Replication bandwidth | 300 MB/epoch | 300 MB/epoch
Migration bandwidth | 100 MB/epoch | 100 MB/epoch

As depicted in Figure 3, the virtual nodes start replicating and migrating to other servers and the system soon reaches equilibrium, as predicted in Section 5. The convergence process actually takes only about 8 epochs, which is very close to the communication bound for replication (i.e. total data size / replication bandwidth = 10GB / 1.5GB per epoch ≈ 6.6 epochs). Also, as revealed by comparing the numerical solution of the optimization problem of Section 3 with the one that is given by the simulation experiments, the proposed distributed economic approach solves the optimization problem rather accurately.
Specifically, the average number of virtual nodes of either application per server was the same and the distributions of virtual nodes of either application per server were similar.

Figure 3: Small-scale scenario: replication process at startup.

7.3 Server arrival and failure

Henceforth, we consider a more realistic large-scale scenario of M = 1000 partitions and N = 200 servers of different real rents. Now, data belongs to three different applications. The desired availability levels for applications 1, 2, 3 pose a requirement for a minimum number of 2, 3, 4 replicas respectively in the data store. One virtual ring is employed per application. Servers are shared among 10 countries with 2 datacenters per country, 1 room per datacenter, 2 racks per room, and 5 servers per rack. The other parameters of this experiment are shown in the right column (large scale) of Table 1. At epoch 100, we assume that 30 new servers are added to the data cloud, while 30 random servers are removed at epoch 200. As depicted in Figure 4, our approach is very robust to resource upgrading or failures: the total number of virtual nodes remains constant after adding resources to the data cloud, and increases upon failures to maintain high availability. Note that the average number of virtual nodes per server decreases after resource upgrading, as the same total number of virtual nodes is shared among a larger number of servers.

Figure 4: Large-scale scenario: robustness against upgrades and failures.

7.4 Adaptation to the query load

Next, in order to show the adaptability of the store to the query load, we simulate a load peak similar to what would result from the "Slashdot effect": in a short period the query rate gets multiplied by 60. Hence, at epoch 100 the mean rate of queries per epoch increases from 3,000 to 183,000 in 25 epochs and then slowly decreases for 25 epochs until it reaches the normal rate of 3,000 queries per epoch.

Figure 5: Large-scale scenario: total amount of virtual nodes in the system over time.

Figure 6: Large-scale scenario: average query load per virtual ring per server over time when the queries are evenly distributed among applications.

Figure 7: Large-scale scenario: average query load per virtual ring per server over time when 4/7, 2/7, 1/7 of the queries are attracted by applications 1, 2, 3 respectively.

Figure 8: Storage saturation: insert failures.

The other parameters of this experiment are those of the large-scale scenario of Table 1. Following the Pareto distribution properties, a small amount of virtual nodes is responsible for a large amount of queries. These virtual nodes become wealthier thanks to their high popularity, and they are able to replicate to one or several servers in order to handle the increasing load. Therefore, the total amount of virtual nodes adjusts to the query load, as depicted in Figure 5. The number of virtual nodes remains almost constant during the high query load period. This is explained as follows: For robustness, replication is only initiated by a high query load. However, a replicated virtual node can survive even with a small number of requests before committing suicide. Therefore, the number of virtual nodes decreases when the query load is significantly reduced. Finally, at epoch 375, the balance of the additional replicated virtual nodes becomes negative and they commit suicide. More importantly, the query load per server remains quite balanced despite the variations in the total query load. This is true both for the case that the query load is evenly distributed among applications (see Figure 6) and for the case that 4/7, 2/7 and 1/7 fractions of the total query load are attracted by application 1 (virtual ring 0), 2 (virtual ring 1) and 3 (virtual ring 2) respectively (see Figure 7).

7.5 Scalability of the approach

Initially, we investigate the scalability of the approach regarding the storage capacity. For this purpose, we assume the arrival of insert queries that store new data into the cloud. The insert queries are again distributed according to Pareto(1, 5). We allow a maximum partition capacity of 256MB, after which the data of the partition is split into two new ones, so that each virtual node is always responsible for up to 256MB of data. The insert query rate is fixed and equal to 2,000 queries per epoch, while each query inserts 5KB of data. We employ the large-scale scenario parameters, but with the number of servers N = 100 and 2 racks per room in this case. The initial number of partitions is M = 200. We fill the cloud up to its total storage capacity. As depicted in Figure 8, our approach manages to balance the used storage efficiently and fast enough so that there are no data losses for used capacity up to 96% of the total storage. At that point, virtual nodes start not fitting into the available storage of the individual servers, and thus they cannot migrate to accommodate their data.
Next, we consider that the query rate to the cloud is not distributed according to Poisson, but that it increases at a rate of 2,000 queries per epoch until the total bandwidth capacity of the cloud is saturated. In this experiment, the real rents of servers are uniformly distributed in [100, 1000]$.

Now, our approach for selecting the destination server of a new replica is compared against two other rather basic approaches:

Random: a random server is selected for replication and migration, as long as it has the available bandwidth capacity for migration and replication, and enough storage space.

Greedy: the cheapest server is selected for replication and migration, as long as it has the available bandwidth capacity for migration and replication, and enough storage space.

Figure 9: Network saturation: query failures.

As depicted in Figure 9, our approach (referred to as "economic") outperforms the simple approaches regarding the amount of dropped queries when the bandwidth of the cloud is completely saturated. Specifically, only 5% of the total queries are dropped in this worst case scenario. Therefore, our approach multiplexes the resources of the cloud very efficiently.

8. IMPLEMENTATION AND EXPERIMENTAL RESULTS IN A REAL TESTBED

We have implemented a fully working prototype of Skute on top of Project Voldemort (project-voldemort.com), which is an open source implementation of Dynamo [9] written in Java. Servers are not synchronized and no centralized component is required. The epoch is considered to be equal to 30 seconds. We have implemented a fully decentralized board based on a gossiping protocol, where each server exchanges its virtual rent price periodically with a small (log(N), where N is the total number of servers) random subset of servers. Routing tables are maintained using a similar gossiping protocol for routing entries. The periods of these gossiping protocols are assumed to be 1 epoch. In case of migration, replication or suicide of a virtual node, the hosting server broadcasts the routing table update using a distribution tree leveraging the geographical topology of the servers. Our testbed consists of N = 40 Skute servers, hosted by 8 machines (OS: Debian 5.0.3, Kernel: amd64, CPU: 8-core Intel Xeon 2.66GHz, RAM: 16GB) with Sun Java 64-Bit VMs (build 1.6.0_12-b04) and connected in a 1000 Mbps LAN. According to our scenario, we assume a Skute data cloud spanning 4 European countries with 2 datacenters per country. Each datacenter is hosted by a separate machine and contains 5 Skute servers, which are considered to be in the same rack. We consider 3 applications, each of M = 5 partitions, with a minimum required availability of 2, 3 and 4 replicas respectively. 250,000 data items of 10KB have been evenly inserted in the 3 applications. We generate 1,000 data requests per second using a Pareto(1, 5) key distribution, denoted as application traffic. We refer as control traffic to the data volume transferred for migrations, replications and the maintenance of the boards as well as the routing tables.

Figure 10: Top: Application and control traffic in case of a load peak. Bottom: Average virtual rent in case of a load peak.

We first evaluate the behavior of the system in case of a load peak. At second 198, an additional 1,000 requests per second are generated for a unique key. After 10 seconds, at second 208, the popular virtual node hosting this unique key is replicated, as shown by the peak in the control traffic in Figure 10 (top). Moreover, as depicted in Figure 10 (bottom), the average virtual rent price increases during the load peak, as more physical resources are required to serve the increased number of requests.
It further increases after the replication of the popular virtual node, because more storage is used at a server for hosting the new replica of the popular partition. Next, the behavior of the system in case of a server crash is assessed. At second 280, a Skute server collapses. As soon as the virtual nodes detect the failure (by means of the gossiping protocols), they start replicating the partitions hosted on the failed Skute server to satisfy again the minimum availability guarantees. Figure 11 (top) shows that the replication process (as revealed by the increased control traffic) starts directly after the crash. Moreover, as depicted in Figure 11 (bottom), the average virtual rent increases during the replication process, because the same storage and processing requirements as before the crash have to be satisfied by fewer servers. Finally, note that in every case, and especially when the system is at equilibrium, the control traffic is minimal as compared to the application traffic.

Figure 11: Top: Application and control traffic in case of a server crash. Bottom: Average virtual rent in case of a server crash.

9. RELATED WORK

Dealing with network failures, strong consistency (which databases care about) and high data availability cannot be achieved at the same time [3]. High data availability by means of replication has been investigated in various contexts, such as P2P systems [23, 17], data clouds, distributed databases [21, 9] and distributed file systems [13, 24, 1, 11]. In the P2P storage systems PAST [23] and Oceanstore [17], the geographical diversity of the replicas is based on random hashing of data keys.

Oceanstore deals with consistency by serializing updates on replicas and then applying them atomically. In the distributed databases and systems context, Coda [24], Bayou [21] and Ficus [13] allow disconnected operations and are resilient to issues such as network partitions and outages. Conflicts among replicas are dealt with by different approaches that guarantee event causality. In distributed data clouds, Amazon Dynamo [9] replicates each data item at a fixed number of physically distinct nodes. Dynamo deals with load balancing by assuming the uniform distribution of popular data items among nodes through partitioning. However, load balancing based on dynamic changes of the query load is not considered. Data consistency is handled based on vector clocks and a quorum system approach with a coordinator for each data key. In all the aforementioned systems, replication is employed in a static way, i.e. the number of replicas and their locations are predetermined. Also, no replication cost considerations are taken into account and no geographical diversity of replicas is employed. In [28], data replicas are organized in multiple rings to achieve query load-balancing. However, only one ring is materialized (i.e. has a routing table) and the other rings are accessible by iteratively applying a static hash function. This static approach for mapping replicas to servers does not allow performing advanced optimizations, such as moving data close to the end user or ensuring the geographical diversity between replicas. Moreover, as opposed to our approach, the system in [28] does not support a different availability level per application or per data item, while the data belonging to different applications is not separately stored. Some economic-aware approaches deal with the optimal locations of replicas. Mariposa [27] aims at latency minimization in executing complex queries over relational distributed databases, i.e. not the primary-key access queries on which we focus. Sites in Mariposa exchange data items (i.e. migrate or replicate them) based on their expected query rate and their processing cost. The data items are exchanged based on their expected values using combinatorial auctions, where winner determination is tricky and synchronization is required. In our approach, asynchronous individual decisions are taken by data items regarding replication, migration or deletion, so that high availability is preserved and dynamic load balancing is performed. Also, in [26], a cost model is defined for the factors that affect data and application migration for minimizing the latency in replying to queries. Data is migrated towards the application, or the application towards the data, based on their respective costs that depend on various aspects, such as query load, replica placement, and network and storage availability. On the other hand, in the Mungi operating system [14], a commodity market of storage space has been proposed. Specifically, storage space is lent by storage servers to users and the rental prices increase as the available storage runs low, forcing users to release unneeded storage. This model is equivalent to that of dynamic pricing per volume in telecommunication networks, according to which prices increase with the level of congestion, i.e. congestion pricing.
In [15], an approach is proposed for reorganizing replicas evenly when new storage is added to the cloud, while minimizing data movement. Relocated data and new replicas are assigned with higher probability to newer servers. The replication process randomly determines the locations of replicas, while ensuring that no two replicas are placed on the same server. However, this approach does not consider the geographical distribution of replicas or differentiated availability levels for multiple applications, and it does not take the popularity of data items into account in the replication process. In [19], an approach has been proposed for optimally selecting the query plan to be executed in the cloud in a cost-efficient way, considering the load of remote servers, the latency among servers and the availability of servers. This approach has objectives similar to ours, but the focus of our paper is solely on primary-key queries. In [2] and [8], efficient data management for the consistency of replicated data in distributed databases is addressed by approaches guaranteeing one-copy serializability in the former and snapshot isolation in lazy replicated databases (i.e. where replicas are synchronized by separate transactions) in the latter. In our case, we do not expect high update rates in a key-value store, and therefore concurrently copying changes to all replicas can be an acceptable approach. However, regarding fault tolerance against failures during updates, the approach of [2] could be employed, so that the replicas of a partition are organized in a tree.

10. CONCLUSION
In this paper, we described Skute, a robust, scalable and highly available key-value store that dynamically adapts to varying query load or disasters by determining the most cost-efficient locations of data replicas with respect to their popularity and their client locations. We experimentally showed that our approach converges fast to an equilibrium where, as predicted by a game-theoretical model, no migrations happen under steady system conditions. Our approach achieves net benefit maximization for application providers and therefore it is highly applicable to real business cases. We have built a fully working prototype in a distributed setting that clearly demonstrates the feasibility, the effectiveness and the low communication overhead of our approach. As future work, we plan to investigate employing our approach for more complex data models, such as the one in Bigtable [6].

11. ACKNOWLEDGMENTS
This work was partly supported by the EU projects HYDROSYS (224416, DG-INFSO) and OKKAM (215032, ICT).

12. REFERENCES
[1] A. Adya, W. J. Bolosky, M. Castro, G. Cermak, R. Chaiken, J. R. Douceur, J. Howell, J. R. Lorch, M. Theimer, and R. P. Wattenhofer. FARSITE: federated, available, and reliable storage for an incompletely trusted environment. ACM SIGOPS Operating Systems Review, 36(SI):1-14, 2002.
[2] D. Agrawal and A. El Abbadi. The tree quorum protocol: An efficient approach for managing replicated data. In VLDB '90: Proc. of the 16th International Conference on Very Large Data Bases, Brisbane, Queensland, Australia, 1990.
[3] P. A. Bernstein and N. Goodman. An algorithm for concurrency control and recovery in replicated distributed databases. ACM Transactions on Database Systems, 9(4), 1984.
[4] N. Bonvin, T. G. Papaioannou, and K. Aberer. Dynamic cost-efficient replication in data clouds. In Proc. of the Workshop on Automated Control for Datacenters and Clouds, Barcelona, Spain, June 2009.
[5] N. Bonvin, T. G. Papaioannou, and K. Aberer. Cost-efficient and differentiated data availability guarantees in data clouds. In ICDE '10: Proc. of the 26th IEEE International Conference on Data Engineering, Long Beach, CA, USA, March 2010.
[6] F. Chang, J. Dean, S. Ghemawat, W. C. Hsieh, D. A. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. E. Gruber. Bigtable: a distributed storage system for structured data. In Proc. of the Symposium on Operating Systems Design and Implementation, Seattle, Washington, 2006.
[7] M. Dahlin, B. B. V. Chandra, L. Gao, and A. Nayate. End-to-end WAN service availability. IEEE/ACM Transactions on Networking, 11(2):300-313, 2003.
[8] K. Daudjee and K. Salem. Lazy database replication with snapshot isolation. In VLDB '06: Proc. of the 32nd International Conference on Very Large Data Bases, Seoul, Korea, 2006.
[9] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S. Sivasubramanian, P. Vosshall, and W. Vogels. Dynamo: Amazon's highly available key-value store. In Proc. of the ACM Symposium on Operating Systems Principles, New York, NY, USA, 2007.
[10] C. Dellarocas. Goodwill hunting: An economically efficient online feedback mechanism for environments with variable product quality. In Proc. of the Workshop on Agent-Mediated Electronic Commerce, July 2002.
[11] S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google file system. In Proc. of the Symposium on Operating Systems Principles, pages 29-43, Bolton Landing, NY, USA, 2003.
[12] R. Gummadi, S. Gribble, S. Ratnasamy, S. Shenker, and I. Stoica. The impact of DHT routing geometry on resilience and proximity, 2003.
[13] R. G. Guy, J. S. Heidemann, and J. T. W. Page. The Ficus replicated file system. ACM SIGOPS Operating Systems Review, 26(2):26, 1992.
[14] G. Heiser, F. Lam, and S. Russell. Resource management in the Mungi single-address-space operating system. In Proc. of the Australasian Computer Science Conference, Perth, Australia, February 1998.
[15] R. Honicky and E. L. Miller. A fast algorithm for online placement and reorganization of replicated data. In Proc. of the Int. Symposium on Parallel and Distributed Processing, Nice, France, April 2003.
[16] D. Karger, E. Lehman, T. Leighton, M. Levine, D. Lewin, and R. Panigrahy. Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web. In Proc. of the ACM Symposium on Theory of Computing, May 1997.
[17] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, and B. Zhao. OceanStore: an architecture for global-scale persistent storage. SIGPLAN Notices, 35(11):190-201, 2000.
[18] L. Lamport. The part-time parliament. ACM Transactions on Computer Systems, 16(2):133-169, 1998.
[19] W.-S. Li, V. S. Batra, V. Raman, W. Han, and I. Narang.
QoS-based data access and placement for federated systems. In VLDB '05: Proc. of the 31st International Conference on Very Large Data Bases, Trondheim, Norway, 2005.
[20] W. Litwin and T. Schwarz. LH*RS: a high-availability scalable distributed data structure using Reed-Solomon codes. ACM SIGMOD Record, 29(2), 2000.
[21] K. Petersen, M. Spreitzer, D. Terry, and M. Theimer. Bayou: replicated database services for world-wide applications. In Proc. of the 7th ACM SIGOPS European Workshop, Connemara, Ireland, 1996.
[22] E. Pinheiro, W.-D. Weber, and L. A. Barroso. Failure trends in a large disk drive population. In Proc. of the 5th USENIX Conference on File and Storage Technologies (FAST '07), San Jose, CA, USA, February 2007.
[23] A. Rowstron and P. Druschel. Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. In Proc. of the ACM Symposium on Operating Systems Principles, Banff, Alberta, Canada, 2001.
[24] M. Satyanarayanan, J. J. Kistler, P. Kumar, M. E. Okasaki, E. H. Siegel, and D. C. Steere. Coda: a highly available file system for a distributed workstation environment. IEEE Transactions on Computers, 39(4):447-459, 1990.
[25] M. Shapiro, P. Dickman, and D. Plainfossé. Robust, distributed references and acyclic garbage collection. In Proc. of the Symposium on Principles of Distributed Computing, Vancouver, Canada, August 1992.
[26] H. Stockinger, K. Stockinger, E. Schikuta, and I. Willers. Towards a cost model for distributed and replicated data stores. In Proc. of the Euromicro Workshop on Parallel and Distributed Processing, Italy, February 2001.
[27] M. Stonebraker, R. Devine, M. Kornacker, W. Litwin, A. Pfeffer, A. Sah, and C. Staelin. An economic paradigm for query processing and data migration in Mariposa. In Proc. of Parallel and Distributed Information Systems, Austin, TX, USA, September 1994.
[28] T. Pitoura, N. Ntarmos, and P. Triantafillou. Replication, load balancing and efficient range query processing in DHTs. In Proc. of the Int. Conference on Extending Database Technology, Munich, Germany, March 2006.
