Orchestrating Bulk Data Transfers across Geo-Distributed Datacenters


Yu Wu, Zhizhong Zhang, Chuan Wu, Chuanxiong Guo, Zongpeng Li, Francis C.M. Lau
Department of Computer Science, The University of Hong Kong, {ywu,zzzhang,cwu,fcmlau}@cs.hku.hk
Microsoft Research Asia, Beijing, China, chguo@microsoft.com
Department of Computer Science, University of Calgary, Canada, zongpeng@ucalgary.ca

Abstract—As it has become the norm for cloud providers to host multiple datacenters around the globe, significant demands exist for inter-datacenter data transfers in large volumes, e.g., migration of big data. A challenge arises on how to schedule the bulk data transfers at different urgency levels, in order to fully utilize the available inter-datacenter bandwidth. The Software Defined Networking (SDN) paradigm has emerged recently, which decouples the control plane from the data paths, enabling potential global optimization of data routing in a network. This paper aims to design a dynamic, highly efficient bulk data transfer service in a geo-distributed datacenter system, and to engineer its design and solution algorithms closely within an SDN architecture. We model data transfer demands as delay-tolerant migration requests with different finishing deadlines. Thanks to the flexibility provided by SDN, we enable dynamic, optimal routing of distinct chunks within each bulk data transfer (instead of treating each transfer as an infinite flow), which can be temporarily stored at intermediate datacenters to mitigate bandwidth contention with more urgent transfers. An optimal chunk routing model is formulated to solve for the best chunk transfer schedules over time. To derive the optimal schedules in an online fashion, three algorithms are discussed, namely a bandwidth-reserving algorithm, a dynamically-adjusting algorithm, and a future-demand-friendly algorithm, targeting different levels of optimality and scalability. We build an SDN system based on the Beacon platform and OpenFlow APIs, and carefully engineer our bulk data transfer algorithms in the system. Extensive real-world experiments are carried out to compare the three algorithms as well as those from the existing literature, in terms of routing optimality, computational delay and overhead.

Index Terms—Bulk data transfers, geo-distributed datacenters, software-defined networking

1 INTRODUCTION

Cloud datacenter systems that span multiple geographic locations are common nowadays, aiming to bring services close to users, exploit lower power cost, and enable service robustness in the face of network/power failures. Amazon, Google, Microsoft and Facebook have invested significantly in constructing large-scale datacenters around the globe to host their services [1]. A basic demand in such a geo-distributed datacenter system is to transfer bulk volumes of data from one datacenter to another, e.g., migration of virtual machines [2], replication of contents like videos [3], and aggregation of big data such as genomic data from multiple datacenters to one for processing using a MapReduce-like framework [4]. Despite dedicated broadband network connections being typically deployed among datacenters of the same cloud provider, the bulk data volumes involved in the inter-site transmissions are often high enough to overwhelm the backbone optical network, leading to bandwidth contention among disparate transmission tasks. The situation is exacerbated over long-distance, cross-continent submarine fiber links.
A critical challenge is how to efficiently schedule the dynamically-arising inter-datacenter transfer requests, such that transmission tasks of different urgency levels, reflected by different data transfer finishing deadlines, can be optimally and dynamically arranged to fully exploit the available bandwidth at any time. Though a theoretical, online optimization problem in nature, the challenge cannot be resolved without addressing the practical applicability of the optimization solution. That is: can an algorithm which solves the online optimization problem, if any, be practically realized in a real-world datacenter-to-datacenter network? It is not easy (if not impossible) to program a global optimization algorithm into a traditional distributed routing network like the Internet, given the lack of general programmability of switches/routers for running extra routing algorithms [5] (limited network programmability is only feasible through proprietary vendor-specific primitives) and the lack of a global view of the underlying network.

The recent Software Defined Networking (SDN) paradigm has shed light on easy realization of a centralized optimization algorithm, like one that solves the bulk data transfer scheduling problem, using standard programming interfaces. With a logically central controller in place, the transient global network states, e.g., topology, link capacity, etc., can be acquired more easily by periodic inquiry messages between the controller and the switches, which are fundamental in practical SDN protocols. For example, a common solution for topology discovery is that the controller generates both Link Layer Discovery Protocol (LLDP) and Broadcast Domain Discovery Protocol (BDDP) messages and forwards them to all the switches; by identifying the received message types, the controller can recognize the active connections and derive the network topology.
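As a side illustration of the kind of bookkeeping such periodic discovery enables, the following minimal Python sketch (purely hypothetical names; it is not the Beacon or OpenFlow API) shows how a controller-side module could accumulate discovered links into a global topology map:

# Minimal sketch (not the Beacon/OpenFlow API): assembling a global topology
# map from discovery events, in the spirit of LLDP/BDDP-based discovery.
from collections import defaultdict

class TopologyTracker:
    def __init__(self):
        # adjacency[src_switch] -> set of (src_port, dst_switch, dst_port)
        self.adjacency = defaultdict(set)

    def on_link_discovered(self, src_sw, src_port, dst_sw, dst_port):
        """Record that a probe sent out of (src_sw, src_port) came back to the
        controller from (dst_sw, dst_port), i.e., a live directed link."""
        self.adjacency[src_sw].add((src_port, dst_sw, dst_port))

    def links(self):
        """Return all directed links currently known to the controller."""
        return [(s, p, d, q) for s, nbrs in self.adjacency.items()
                for (p, d, q) in nbrs]

if __name__ == "__main__":
    topo = TopologyTracker()
    topo.on_link_discovered("sw-A", 1, "sw-B", 3)   # hypothetical switches
    topo.on_link_discovered("sw-B", 3, "sw-A", 1)
    print(topo.links())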

Furthermore, with this global knowledge, a centralized optimal scheduling algorithm can be realized in the controller, which would otherwise be impossible in its traditional distributed-routing counterpart.

Software defined networking advocates a clean decoupling of the control path from the data path in a routing system [6]. By allowing per-flow routing decisions at the switches/routers, it empowers network operators with more flexible traffic management capabilities, which are potentially QoS-oriented and globally optimal. To realize the SDN paradigm, standards like OpenFlow have been actively developed [7], which define standard communication interfaces between the control and data layers of an SDN architecture. IT giants including Google and Facebook have advocated the OpenFlow-based SDN architecture in their datacenter systems [8], [9], while switch vendors including Broadcom, HP and NEC have begun production of OpenFlow-enabled switches/routers in the past 2-3 years [10], aiming towards a new era of easy network programmability.

This paper proposes a novel optimization model for dynamic, highly efficient scheduling of bulk data transfers in a geo-distributed datacenter system, and engineers its design and solution algorithms practically within an OpenFlow-based SDN architecture. We model data transfer requests as delay-tolerant data migration tasks with different finishing deadlines. Thanks to the flexibility of transmission scheduling provided by SDN, we enable dynamic, optimal routing of distinct chunks within each bulk data transfer (instead of treating each transfer as an infinite flow), which can be temporarily stored at intermediate datacenters and transmitted only at carefully scheduled times, to mitigate bandwidth contention among tasks of different urgency levels.

Our contributions are summarized as follows. First, we formulate the bulk data transfer problem into a novel, optimal chunk routing problem, which maximizes the aggregate utility gain due to timely transfer completions before the specified deadlines. Such an optimization model enables flexible, dynamic adjustment of chunk transfer schedules in a system with dynamically-arriving data transfer requests, which is impossible with the popularly-modeled flow-based optimal routing approach. Second, we discuss three dynamic algorithms to solve the optimal chunk routing problem, namely a bandwidth-reserving algorithm, a dynamically-adjusting algorithm, and a future-demand-friendly algorithm; these solutions target different levels of optimality and computational complexity. Third, we build an SDN system based on the OpenFlow APIs and the Beacon platform [11], and carefully engineer our bulk data transfer algorithms in the system. Extensive real-world experiments with real network traffic are carried out to compare the three algorithms as well as those in the existing literature, in terms of routing optimality, computational delay and overhead.

In the rest of the paper, we discuss related work in Sec. 2, illustrate our system architecture and the optimization framework in Sec. 3, and present the dynamic algorithms in Sec. 4. Details of our SDN system implementation follow in Sec. 5. Experiment settings and results are reported in Sec. 6. Sec. 7 concludes the paper.

2 RELATED WORK

In the network inside a data center, TCP congestion control and FIFO flow scheduling are currently used for data flow transport, which are unaware of flow deadlines.
A number of proposals have appeared for deadline-aware congestion and rate control. D3 [12] exploits deadline information to control the rate at which each source host introduces traffic into the network, and apportions bandwidth at the routers along the paths greedily to satisfy as many deadlines as possible. D2TCP [13] is a Deadline-Aware Datacenter TCP protocol to handle bursty flows with deadlines; a congestion avoidance algorithm is employed, which uses ECN feedback from the routers and flow deadlines to modify the congestion window at the sender. In pFabric [14], switches implement simple priority-based scheduling/dropping mechanisms, based on a priority number carried in the packets of each flow, and each flow starts at the line rate and throttles back only when high and persistent packet loss occurs. Differently, our work focuses on transportation of bulk flows among datacenters in a geo-distributed cloud. Instead of end-to-end congestion control, we enable store-and-forward at intermediate datacenters, such that a source datacenter can send data out as soon as the first-hop connection bandwidth allows, whereas intermediate datacenters can temporarily store the data if more urgent/important flows need the next-hop link bandwidths.

Inter-datacenter data transfer is also common today. Chen et al. [15] conducted a measurement study of a cloud with five distributed datacenters and revealed that more than 45% of the total traffic is attributed to inter-datacenter transmissions, and the percentage is expected to grow further. Laoutaris et al. propose NetStitcher [16], a mechanism exploiting a priori knowledge of the traffic patterns across datacenters over time and utilizing the leftover bandwidth and intermediate storage between datacenters for bulk data transfer, to minimize the transmission time of a given volume. In contrast, our study focuses on flows with stringent deadlines and does not assume any traffic patterns; we apply optimization algorithms to dynamically adjust flow transfer schedules under any traffic arrival pattern. Postcard [17] models a cost minimization problem for inter-datacenter traffic scheduling, based on the classic time-expanded graph [18] which was first used in NetStitcher. Though relatively easy to formulate, the state explosion of the optimization model, when replicating nodes and links along the time axis, results in a prohibitive growth rate of the computation complexity. Our work seeks to present a novel optimization model which enables efficient dynamic algorithms for practical deployment in an SDN network. Chen et al. [19] study deadline-constrained bulk data transfer in grid networks. Our work differs from theirs by concentrating on a per-chunk routing scheme instead of treating each transfer as flows, which renders a more realistic model with higher complexity in algorithm design. In addition, we assume dedicated links between datacenters owned by the cloud provider, and aim to maximize the overall transfer utility instead of minimizing the network congestion.

In an SDN-based datacenter, Heller et al. [20] design ElasticTree, a power manager which dynamically adjusts the set of active links and switches to serve the changing traffic loads, such that the power consumption in the datacenter network is minimized. For SDN-based inter-datacenter networking, Jain et al. [9] present their experience with B4, Google's globally deployed software defined WAN connecting Google's datacenters across the planet. They focus on the architecture and system design, and show that with a greedy centralized traffic engineering algorithm, all WAN links can achieve an average 70% utilization. Hong et al. [21] propose SWAN, a system that centrally controls the traffic flows from different services in an inter-datacenter network. Their work focuses more on coordinating routing policy updates among the switches to avoid transient congestion. Falling into the same line of research for boosting inter-datacenter bandwidth utilization, our research focuses more on scheduling bulk data transfers to meet deadlines, and complements this existing work by proposing efficient dynamic optimization algorithms to guarantee long-term optimal operation of the network. Deadline-aware resource scheduling in clouds has attracted growing research interest. A recent work from Maria et al. [22] presents a meta-heuristic-optimization-based algorithm to address the resource provisioning (VM) and scheduling strategy in IaaS clouds to meet QoS requirements. We believe that our work complements those in this category well. In addition, our work focuses on bulk data flows instead of small flows, as we echo the argument of Curtis et al. [23] that in reality only significant flows (e.g., high-throughput elephant flows) should be managed by a centralized controller, in order to reduce the amount of switch-controller communication.

3 SYSTEM ARCHITECTURE AND PROBLEM MODEL

3.1 SDN-based Architecture

Fig. 1. The architecture of the system (datacenters with gateway servers and core switches, interconnected by dedicated links; the controller talks to the gateway servers via JSON-RPC and to the core switches via the OpenFlow API).

We consider a cloud spanning multiple datacenters located in different geographic locations (Fig. 1). Each datacenter is connected via a core switch to the other datacenters. The connections among the datacenters are dedicated, full-duplex links, either through leading tier-1 ISPs or private fiber networks of the cloud provider, allowing independent and simultaneous two-way data transmissions. Data transfer requests may arise from each datacenter to move bulk volumes of data to another datacenter. A gateway server is connected to the core switch in each datacenter, responsible for aggregating cross-datacenter data transfer requests from the same datacenter, as well as for temporarily storing data from other datacenters and forwarding them via the switch. It also tracks network topology and bandwidth availability among the datacenters with the help of the switches. Combined closely with the SDN paradigm, a central controller is deployed to implement the optimal data transfer algorithms, dynamically configure the flow table on each switch, and instruct the gateway servers to store or to forward each data chunk.

The layered architecture we present realistically resembles B4 [9], which was designed and deployed by Google for their G-scale inter-datacenter network: the gateway server plays a similar role to the site controller layer, the controller corresponds well to the global layer, and the core switch at each location can be deemed as the per-site switch clusters in B4.

The fundamental core services enabling bulk data transfer in a geo-distributed cloud include:

Task admission control. Once a data transfer task is admitted, we seek to ensure its timely completion within the specified deadline. On the other hand, if completing a transfer task within the specified deadline is not possible according to the network availability when the request arrives, the task should be rejected.
Data routing. The optimal transmission paths of the data in an accepted task from the source to the destination should be decided, potentially through multiple intermediate datacenters.

Store-and-forward. Intermediate datacenters may store the data temporarily and forward them later. It should be carefully computed when data should be temporarily stored in which datacenter, as well as when and at which rate it should be forwarded at a later time.

The goal of judiciously making the above decisions is to maximize the overall utility of tasks, by best utilizing the available bandwidth along the inter-datacenter links at any given time.

3.2 Problem Model

Let N represent the set of all datacenters in the system. A data transfer request (or task, equivalently) J can be described by a five-tuple (S_J, D_J, t_J, T_J, U_J), where S_J is the source datacenter where the data originates, D_J is the destination datacenter to which the data is to be moved, t_J is the earliest time J can be transmitted, and T_J denotes the maximum amount of time allowed for the transfer of task J to complete, i.e., all data of task J should arrive at the destination no later than t_J + T_J. U_J is a weight modeling the benefit of completing job J; jobs with higher importance are associated with larger weights. The system runs in a time-slotted fashion. The data of each job J is segmented into equal-sized chunks at the source datacenter before transmission, and W_J denotes the corresponding chunk set. Consider the system lifespan [0, Γ].
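To make the request model concrete, the following minimal Python sketch mirrors the five-tuple and the chunking step; the field names and the 100 MB chunk size are illustrative assumptions of this sketch, not part of the paper's implementation:

# Minimal sketch of the request model described above; names and the chunk
# size are illustrative assumptions only.
from dataclasses import dataclass
import math

CHUNK_SIZE_BYTES = 100 * 1024 * 1024   # one chunk per unit-bandwidth time slot

@dataclass
class TransferRequest:
    job_id: int
    src: str         # S_J: source datacenter
    dst: str         # D_J: destination datacenter
    t_start: int     # t_J: earliest time slot for transmission
    max_slots: int   # T_J: slots allowed; deadline is t_start + max_slots
    weight: float    # U_J: utility gained if completed on time
    size_bytes: int

    def num_chunks(self) -> int:
        """|W_J|: number of equal-sized chunks the data is segmented into."""
        return math.ceil(self.size_bytes / CHUNK_SIZE_BYTES)

job = TransferRequest(1, "dc-tokyo", "dc-oregon", t_start=5, max_slots=60,
                      weight=40.0, size_bytes=50 * 1024**3)
print(job.num_chunks(), "chunks; deadline slot =", job.t_start + job.max_slots)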

Let 𝒥 denote the set of all jobs which arrive and are to be completed in this span. The binary variable I_J denotes whether task J is accepted or not, and x^w_{m,n}(t) indicates whether chunk w is transmitted from datacenter m to datacenter n at time t. The available bandwidth of the connection from datacenter m to datacenter n is described by B_{m,n}, as a multiple of a unit bandwidth. The length of a scheduling interval (i.e., each time slot) in the system equals the transmission time of one chunk using a unit bandwidth. Hence, B_{m,n} is equivalent to the maximum number of chunks that can be delivered from m to n in a time slot. We consider chunk sizes at tens of megabytes in such bulk data transfer, and a unit bandwidth at the magnitude of tens of Mbps, since the dedicated links between datacenters have typical bandwidths up to 100 Gbps [24]. In this case, the scheduling interval length is at tens of seconds, which is reasonable since it may not be feasible in practice to adjust flow tables at the switches more frequently than that. Propagation delays and potential chunk queuing delays, at the magnitude of hundreds of milliseconds, are ignored as they are dominated by the transmission times in bulk data transfer. Table 1 summarizes important notation for ease of reference.

TABLE 1
Table of notations

𝒥 — the set of jobs to be completed within the time interval [0, Γ]
𝒥(τ) — the set of jobs arrived at time τ
𝒥̃(τ) — the set of unfinished, previously accepted jobs by time τ
U_J — the weight of job J
t_J — the earliest time slot for transmission of J
T_J — the maximum number of time slots allowed for transmission of J
S_J — the source datacenter where J originates
D_J — the destination datacenter to which J is to be migrated
W_J — the chunk set of job J
N — the set of all datacenters
B_{m,n} — the available bandwidth from datacenter m to n
I_J — whether job J is accepted
x^w_{m,n}(t) — whether chunk w is transmitted from datacenter m to n at time t

3.3 The Optimal Chunk Routing Problem

We formulate the problem as an optimization program to derive the job acceptance decisions I_J, ∀J ∈ 𝒥, and the chunk routing decisions x^w_{m,n}(t), ∀w ∈ W_J, ∀t ∈ [t_J, t_J + T_J − 1], ∀J ∈ 𝒥, ∀m, n ∈ N, m ≠ n:

max Σ_{J∈𝒥} U_J I_J    (1)

subject to:

(a) Σ_{t=t_J}^{t_J+T_J−1} Σ_{m∈N, m≠S_J} ( x^w_{S_J,m}(t) − x^w_{m,S_J}(t) ) = I_J,  ∀w ∈ W_J, ∀J ∈ 𝒥;

(b) Σ_{t=t_J}^{t_J+T_J−1} Σ_{m∈N, m≠D_J} ( x^w_{m,D_J}(t) − x^w_{D_J,m}(t) ) = I_J,  ∀w ∈ W_J, ∀J ∈ 𝒥;

(c) Σ_{t=t_J}^{t_J+T_J−1} Σ_{m∈N, m≠n} ( x^w_{m,n}(t) − x^w_{n,m}(t) ) = 0,  ∀n ∈ N\{S_J, D_J}, ∀w ∈ W_J, ∀J ∈ 𝒥;

(d) Σ_{t=T_0+1}^{t_J+T_J−1} Σ_{k∈N, k≠n} x^w_{n,k}(t) ≤ Σ_{t=t_J}^{T_0} Σ_{m∈N, m≠n} x^w_{m,n}(t),  ∀w ∈ W_J, ∀n ∈ N\{S_J}, ∀T_0 ∈ [t_J, t_J + T_J − 2], ∀J ∈ 𝒥;

(e) Σ_{J∈𝒥} Σ_{w∈W_J} x^w_{m,n}(t) ≤ B_{m,n},  ∀m, n ∈ N, m ≠ n, ∀t ∈ [0, Γ];

(f) x^w_{m,n}(t) ∈ {0, 1},  ∀m, n ∈ N, m ≠ n, ∀t ∈ [t_J, t_J + T_J − 1], ∀w ∈ W_J, ∀J ∈ 𝒥;

(g) x^w_{m,n}(t) = 0,  ∀m, n ∈ N, m ≠ n, ∀t ∈ [0, t_J) ∪ (t_J + T_J − 1, Γ], ∀w ∈ W_J, ∀J ∈ 𝒥.

The objective function maximizes the overall weight of all the accepted jobs. The special case where U_J = 1 (∀J ∈ 𝒥) implies the maximization of the total number of accepted jobs. Constraint (a) states that each chunk w in each job J should be sent out from the source datacenter S_J in one time slot within [t_J, t_J + T_J − 1] (i.e., the valid transmission interval of the job), if the job is accepted for transfer at all (i.e., if I_J = 1); on the other hand, the chunk should arrive at the destination datacenter D_J via one of D_J's neighboring datacenters within [t_J, t_J + T_J − 1] as well, as specified by Constraint (b).
Constraint (c) enforces that at any intermediate datacenter n other than the source and destination of chunk w, if the datacenter receives the chunk at all in one time slot within the valid transmission interval of the job, it should also send the chunk out within the interval. With constraint (d), we ensure that a chunk must arrive at a datacenter before it can be forwarded from that datacenter: considering any time slot T_0 within the valid transmission interval [t_J, t_J + T_J − 1] of job J, a datacenter n may send out chunk w in a time slot after T_0 (i.e., Σ_{t=T_0+1}^{t_J+T_J−1} Σ_{k∈N, k≠n} x^w_{n,k}(t) = 1) only if it has received it by T_0 (i.e., Σ_{t=t_J}^{T_0} Σ_{m∈N, m≠n} x^w_{m,n}(t) = 1). Constraint (e) specifies that the total number of chunks from all jobs delivered from datacenter m to n in any time slot t should not exceed the bandwidth capacity of the connection between m and n. Routing decisions for a chunk w, the x^w_{m,n}(t)'s, are binary, i.e., the chunk is either sent along the connection from m to n in slot t (x^w_{m,n}(t) = 1) or not (x^w_{m,n}(t) = 0), and valid only within the valid transmission interval of the corresponding job, as given by constraints (f) and (g).

The solutions of our optimization framework translate to reliable routing decisions, in the sense that any accepted job will be delivered to the destination within the corresponding deadline.
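For readers who prefer code to notation, the following sketch expresses a reduced version of problem (1) on a toy three-datacenter instance using the PuLP modeling library (an assumption of this sketch; it is not the solver setup used in the paper). The precedence constraint (d) is omitted for brevity, so this illustrates the modeling style rather than the complete model:

# Sketch of a reduced problem (1) on a toy instance; constraint (d) omitted.
import itertools
import pulp

N = ["A", "B", "C"]                                   # datacenters
B = {(m, n): 2 for m in N for n in N if m != n}       # B_{m,n}: chunks per slot
T_END = 6                                             # end of the lifespan
# toy jobs: (id, S_J, D_J, t_J, T_J, U_J, number of chunks)
jobs = [("J1", "A", "C", 0, 4, 10, 3), ("J2", "B", "C", 1, 3, 25, 2)]

prob = pulp.LpProblem("bulk_transfer", pulp.LpMaximize)
I = {j[0]: pulp.LpVariable(f"I_{j[0]}", cat="Binary") for j in jobs}
x = {}  # x[(job, chunk, m, n, t)] = 1 if the chunk goes m -> n in slot t
pairs = [(a, b) for a in N for b in N if a != b]
for (jid, s, d, tj, Tj, U, W) in jobs:
    for w, (m, n), t in itertools.product(range(W), pairs, range(tj, tj + Tj)):
        x[(jid, w, m, n, t)] = pulp.LpVariable(f"x_{jid}_{w}_{m}_{n}_{t}", cat="Binary")

# objective: total weight of accepted jobs
prob += pulp.lpSum(U * I[jid] for (jid, s, d, tj, Tj, U, W) in jobs)

for (jid, s, d, tj, Tj, U, W) in jobs:
    slots = range(tj, tj + Tj)
    for w in range(W):
        out_src = pulp.lpSum(x[(jid, w, s, n, t)] for n in N if n != s for t in slots)
        in_src = pulp.lpSum(x[(jid, w, n, s, t)] for n in N if n != s for t in slots)
        prob += out_src - in_src == I[jid]                          # constraint (a)
        in_dst = pulp.lpSum(x[(jid, w, m, d, t)] for m in N if m != d for t in slots)
        out_dst = pulp.lpSum(x[(jid, w, d, m, t)] for m in N if m != d for t in slots)
        prob += in_dst - out_dst == I[jid]                          # constraint (b)
        for n in N:
            if n in (s, d):
                continue
            prob += (pulp.lpSum(x[(jid, w, m, n, t)] for m in N if m != n for t in slots)
                     - pulp.lpSum(x[(jid, w, n, k, t)] for k in N if k != n for t in slots)) == 0  # (c)

for (m, n) in B:                                                    # constraint (e)
    for t in range(T_END):
        terms = [v for (jid, w, a, b, tt), v in x.items() if (a, b) == (m, n) and tt == t]
        if terms:
            prob += pulp.lpSum(terms) <= B[(m, n)]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({jid: int(I[jid].value()) for jid in I})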

Rejected jobs are dropped immediately at the beginning to save bandwidth, but users may resubmit them at later times.

The structure of the optimization problem is similar to that of a max-flow or min-cost flow problem [25], but the difference is apparent as well: we model the routing of distinct chunks which can be stored at intermediate datacenters and forwarded in a later time slot, instead of continuous end-to-end flows; therefore, the time dimension is carefully involved to specify the transfer times of the chunks, which does not appear in a max-flow/min-cost flow problem.

The optimization model in (1) is an offline optimization problem in nature. Given any job arrival pattern within [0, Γ], it decides whether each job should be accepted for transfer under bandwidth constraints, and derives the best paths for chunks in accepted jobs, along which the chunks can reach their destinations within the respective deadlines. In practice, where transfer jobs arrive one after another, an online algorithm that makes timely decisions on job admission control and routing scheduling is desirable, which we investigate in the next section.

4 DYNAMIC ALGORITHMS

We present three practical algorithms which make job acceptance and chunk routing decisions in each time slot, and achieve different levels of optimality and scalability.

4.1 The Bandwidth-Reserving Algorithm

The first algorithm honors decisions made in previous time slots, and reserves bandwidth along the network links for scheduled chunk transmissions of previously accepted jobs in its routing computation for newly arrived jobs. Let 𝒥(τ) be the set consisting of only the latest data transfer requests arrived in time slot τ. Define B_{m,n}(t) as the residual bandwidth on each connection (m, n) in time slot t ∈ [τ+1, Γ], excluding the bandwidth needed for the remaining chunk transfers of accepted jobs arrived before τ. In each time slot τ, the algorithm solves optimization (1) with job set 𝒥(τ) and bandwidths B_{m,n}(t) for the duration [τ+1, Γ], and derives admission control decisions for jobs arrived in this time slot, as well as their chunk transfer schedules before their respective deadlines.

Theorem 1 states the NP-hardness of the optimization problem in (1) (with detailed proof in Appendix A). Nevertheless, such a linear integer program may still be solved in reasonable time at a typical scale of the problem (e.g., tens of datacenters in the system), using an optimization tool such as CPLEX [26]. To cater for larger-scale problems, we also propose a highly efficient heuristic in Sec. 4.3. More detailed discussions of the solution time follow in Sec. 6.3.

Theorem 1: The optimal chunk routing problem in (1) is NP-hard.
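The bookkeeping behind this bandwidth reservation can be illustrated with a small sketch (assumed data structures, not the paper's code): capacities are expressed in chunks per slot, chunk transmissions committed in earlier runs are recorded, and the optimization over newly arrived jobs only sees what is left:

# Sketch (assumed data structures) of the residual-bandwidth bookkeeping
# behind the bandwidth-reserving algorithm.
from collections import defaultdict

class ResidualBandwidth:
    def __init__(self, capacity):
        # capacity[(m, n)] = B_{m,n}, in chunks per time slot
        self.capacity = dict(capacity)
        # reserved[(m, n, t)] = chunks already scheduled on (m, n) at slot t
        self.reserved = defaultdict(int)

    def reserve(self, m, n, t, chunks=1):
        """Commit a scheduled chunk transfer produced by an earlier run."""
        assert self.residual(m, n, t) >= chunks, "over-committed link"
        self.reserved[(m, n, t)] += chunks

    def residual(self, m, n, t):
        """Bandwidth left for the optimization over newly arrived jobs."""
        return self.capacity[(m, n)] - self.reserved[(m, n, t)]

rb = ResidualBandwidth({("A", "B"): 4, ("B", "C"): 4})
rb.reserve("A", "B", t=7)                        # chunk of a previously accepted job
print(rb.residual("A", "B", 7), rb.residual("B", "C", 7))   # prints: 3 4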
4.2 The Dynamically-Adjusting Algorithm

The second algorithm retains job acceptance decisions made in previous time slots, but adjusts the routing schedules for chunks of accepted jobs which have not yet reached their respective destinations, together with the admission control and routing computation for newly arrived jobs. Let 𝒥(τ) be the set of data transfer requests arrived in time slot τ, and 𝒥̃(τ) the set of unfinished, previously accepted jobs by time slot τ. In each time slot τ, the algorithm solves a modified version of optimization (1), as follows.

The set of jobs involved in the computation is 𝒥(τ) ∪ 𝒥̃(τ). The optimization decisions to make include: (i) acceptance of newly arrived jobs, i.e., I_J, ∀J ∈ 𝒥(τ); (ii) routing schedules for chunks in newly arrived jobs, i.e., x^w_{m,n}(t), ∀m, n ∈ N, m ≠ n, ∀t ∈ [τ+1, Γ], ∀w ∈ W_J where J ∈ 𝒥(τ); (iii) routing schedules for chunks in previously accepted jobs that have not reached their destinations, i.e., x^w_{m,n}(t), ∀m, n ∈ N, m ≠ n, ∀t ∈ [τ+1, Γ], ∀w ∈ W'_J, where J ∈ 𝒥̃(τ) and W'_J denotes the set of chunks in J which have not reached the destination D_J.

For each previously accepted job J ∈ 𝒥̃(τ), we set I_J = 1 wherever it appears in the constraints, such that the remaining chunks in these jobs are guaranteed to arrive at their destinations before the deadlines, even after their routing adjustments. Constraints related to chunks in previously accepted jobs which have already reached their destinations are removed from the optimization problem. For each remaining job J ∈ 𝒥̃(τ), its corresponding t_J and T_J used in the constraints are replaced by t'_J = τ + 1 and T'_J = t_J + T_J − τ − 1, given that the decision interval of the optimization problem has been shifted to [τ+1, Γ].

A chunk w in W'_J may have been transferred to an intermediate datacenter by time slot τ, and hence multiple datacenters may have cached a copy of this chunk. Let Φ(w) be the set of datacenters which retain a copy of chunk w. The subsequent optimal routing path of this chunk can originate from any of the copies. Therefore, in constraints (a), (c) and (d) on chunk w, we replace S_J by Φ(w); e.g., constraint (a) on chunk w is modified to

Σ_{t=t'_J}^{t'_J+T'_J−1} Σ_{n∈Φ(w)} Σ_{m∈N, m≠n} ( x^w_{n,m}(t) − x^w_{m,n}(t) ) = I_J,  ∀w ∈ W'_J, ∀J ∈ 𝒥(τ) ∪ 𝒥̃(τ).    (2)

The detailed formulation of the modified optimization problem is given in (3):

max Σ_{J ∈ 𝒥(τ) ∪ 𝒥̃(τ)} U_J I_J    (3)

subject to:

(a) Σ_{t=t'_J}^{t'_J+T'_J−1} Σ_{n∈Φ(w)} Σ_{m∈N, m≠n} ( x^w_{n,m}(t) − x^w_{m,n}(t) ) = I_J,  ∀w ∈ W'_J, ∀J ∈ 𝒥(τ) ∪ 𝒥̃(τ);

(b) Σ_{t=t'_J}^{t'_J+T'_J−1} Σ_{m∈N, m≠D_J} ( x^w_{m,D_J}(t) − x^w_{D_J,m}(t) ) = I_J,  ∀w ∈ W'_J, ∀J ∈ 𝒥(τ) ∪ 𝒥̃(τ);

(c) Σ_{t=t'_J}^{t'_J+T'_J−1} Σ_{m∈N, m≠n} ( x^w_{m,n}(t) − x^w_{n,m}(t) ) = 0,  ∀n ∈ N\(Φ(w) ∪ {D_J}), ∀w ∈ W'_J, ∀J ∈ 𝒥(τ) ∪ 𝒥̃(τ);

(d) Σ_{t=T_0+1}^{t'_J+T'_J−1} Σ_{k∈N, k≠n} x^w_{n,k}(t) ≤ Σ_{t=t'_J}^{T_0} Σ_{m∈N, m≠n} x^w_{m,n}(t),  ∀w ∈ W'_J, ∀n ∈ N\Φ(w), ∀T_0 ∈ [t'_J, t'_J + T'_J − 2], ∀J ∈ 𝒥(τ) ∪ 𝒥̃(τ);

(e) Σ_{J ∈ 𝒥(τ) ∪ 𝒥̃(τ)} Σ_{w∈W'_J} x^w_{m,n}(t) ≤ B_{m,n},  ∀m, n ∈ N, m ≠ n, ∀t ∈ [τ+1, Γ];

(f) x^w_{m,n}(t) ∈ {0, 1},  ∀m, n ∈ N, m ≠ n, ∀t ∈ [t'_J, t'_J + T'_J − 1], ∀w ∈ W'_J, ∀J ∈ 𝒥(τ) ∪ 𝒥̃(τ);

(g) x^w_{m,n}(t) = 0,  ∀m, n ∈ N, m ≠ n, ∀t ∈ [τ+1, t'_J) ∪ (t'_J + T'_J − 1, Γ], ∀w ∈ W'_J, ∀J ∈ 𝒥(τ) ∪ 𝒥̃(τ);

(h) I_J = 1,  ∀J ∈ 𝒥̃(τ).

Here, in the constraints, for newly arrived jobs in 𝒥(τ) we set t'_J = t_J and T'_J = T_J.

This second algorithm is more aggressive than the first in computing the best routing paths for all remaining chunks in the system, both from the newly arrived jobs and from old, unfinished jobs, and more computation is involved. Nonetheless, we will show in Sec. 6.3 that the solution can still be derived in a reasonable amount of time under practical settings with heavy data transfer traffic. It is worth noting that the optimal solution of either of the two optimization problems (1) and (3) may not be unique, and we randomly select one for the routing decision. It could be interesting to study which optimal solution is better in an online fashion, if the job request pattern is known beforehand or can be predicted; this will be part of our future work.

4.3 The Future-Demand-Friendly Heuristic

We further propose a simple but efficient heuristic to make job acceptance and chunk routing decisions in each time slot, with polynomial-time computational complexity, suitable for systems of larger scales. Similar to the first algorithm, the heuristic retains routing decisions computed earlier for chunks of already accepted jobs, and only makes decisions for jobs received in this time slot using the remaining bandwidth. On the other hand, it is more future-demand friendly than the first algorithm, by postponing the transmission of accepted jobs as much as possible, to save bandwidth available in the immediate future in case more urgent transmission jobs arrive.

Let 𝒥(τ) be the set of the latest data transfer requests arrived in time slot τ. The heuristic is given in Alg. 1. At the job level, the algorithm preferably handles data transfer requests with higher weights and smaller sizes (line 1), i.e., larger weight per unit of bandwidth consumption. For each chunk in job J, the algorithm chooses a transfer path with the fewest number of hops that has available bandwidth to forward the chunk from the source to the destination before the deadline (line 4).

Algorithm 1 The Future-Demand-Friendly Heuristic at Time Slot τ
1: Sort requests in 𝒥(τ) by U_J / |W_J| in descending order
2: for each job J in the sorted list 𝒥(τ) do
3:   for each chunk w ∈ W_J do
4:     Find a shortest path from S_J to D_J that satisfies the following (suppose the path includes h hops): there is one unit of bandwidth available at the i-th hop link (1 ≤ i ≤ h) in at least one time slot within the time frame [t_J + (i−1)·T_J/h, t_J + i·T_J/h − 1]. List all the time slots in the frame when there is one unit of available bandwidth along the i-th hop link as t^i_1, t^i_2, ..., t^i_{L_i}
5:     if such a path does not exist then
6:       Reject J, i.e., set I_J = 0, and clear the transmission schedules made for other chunks in J;
7:       break;
8:     end if
9:     for each hop (m, n) along the shortest path do
10:      suppose it is the i-th hop; choose the r-th time slot in the list t^i_1, t^i_2, ..., t^i_{L_i} with probability r / Σ_{p=1}^{L_i} p; set x^w_{m,n}(t^i_r) = 1 and x^w_{m,n}(t) = 0, ∀t ≠ t^i_r (i.e., transfer chunk w from m to n in time slot t^i_r)
11:     end for
12:   end for
13:   Accept J, i.e., set I_J = 1
14: end for
transmission scedules made for oter cunks in J; 7: break; 8: end if 9: for eac op (m, n) along te sortest pat do 10: suppose it is te i-t op; coose te r-t time slot in te list 1, i 2,, i L i wit probability ; set Lp=1 p xw m,n( r)=1 i and x w m,n(t) =0, 8t 6= i r (ie, transfer cunk w from m to n at time slot i r) 11: end for 12: end for 13: Accept J, ie, set I J =1 14: end for 4) Te rationale is tat a pat wit fewer ops consumes less overall bandwidt to transfer a cunk and wit iger probability to meet te deadline requirement We compute te allowed time window for te transfer of a cunk to appen at te i-t op along its potential pat, by dividing te allowed overall transfer time T J evenly among te ops of te pat As sown in Fig 2, te transfer of te cunk from te source to te first-op intermediate datacenter sould appen witin time window [t J,t J + T J 1], and te transfer at te second op sould appen witin time window [t J + T J,t J + 2T J 1], and so on, in order to guarantee te cunk s arrival at te destination before te deadline of t J +T J Te pat sould be one suc tat tere is at least one unit available bandwidt at eac op witin at least one time slot in te allowed transmission window of tis op (line 4) If suc a pat does not exist, te job sould be rejected (line 6) Oterwise, te algoritm computes te cunk transfer scedule at eac op along te selected pat, by coosing one time slot wen available bandwidt exists on te link, witin te allowed transmission window of tat op Tere can be multiple time slots satisfying tese requirements, and te latest time slot among tem is selected wit te igest probability (line 10) If te transfer scedules for all cunks in te job can be successfully made, te job is accepted (line 13) 0 Source T J [t J,t J + T J time window at 1st op T J 1 2 x 1] [t J + T J,t J +2 T J r time window at 2nd op 1] Destination Fig 2 Assigning allowed time window for a cunk s transfer at eac op along its transmission pat Alg 1 tends to be conservative in cunk forwarding by (c) 2015 IEEE ersonal use is permitted, but republication/redistribution requires IEEE permission See ttp://wwwieeeorg/publications_standards/publications/rigts/indextml for more information

The downside is that it may leave links idle at earlier times. We therefore design a supplementary algorithm to expedite chunk transfers in such scenarios, as given in Alg. 2. The idea is simple: the algorithm transfers chunks with pending future transmission schedules along a link, so as to fully utilize the available link bandwidth in each time slot. Such opportunistic forward-shifts of scheduled transmissions reduce future bandwidth consumption in the system, such that potentially more job transfer requests can be accommodated.

Algorithm 2 Opportunistic Chunk Transfer in Each Time Slot
1: for each link (m, n) in the system do
2:   while there is available bandwidth on (m, n) do
3:     if there exists a chunk w which has been received at datacenter m and is scheduled to be forwarded to datacenter n at a later time then
4:       Send w from m to n
5:     else
6:       break
7:     end if
8:   end while
9: end for

Theorem 2 guarantees that the proposed practical heuristic has polynomial-time complexity (with detailed proof in Appendix B). We also study the performance of this heuristic in Sec. 6, and compare it with the other, optimization-based solutions.

Theorem 2: The practical heuristic described in Alg. 1 and Alg. 2 has polynomial-time complexity.

5 SYSTEM IMPLEMENTATION

We have implemented a prototype bulk data transfer (BDT) system based on the OpenFlow framework. The BDT system resides above the transport layer in the network stack, with no special requirements on the lower transport-layer and network-layer protocols, where the standard TCP/IP stack is adopted. Thanks to the clean design introduced in this section, the BDT system is implemented with only 7K lines of Java code and around 2K lines of Python code, apart from the configuration files, the GUI control interface, etc.

5.1 The Key Components

Fig. 3 depicts the main modules of the system, consisting of both the central Controller and the distributed Gateway Servers. As in all SDN-based systems, the centralized controller could become the performance bottleneck of the system: to realize fine-grained dynamic traffic scheduling, it needs to keep track of global state information, which translates into stressful resource consumption hindering horizontal scalability, in terms of both memory footprint and processing cycles. We therefore follow the common design philosophy of splitting part of the sophistication out to the end system, i.e., the gateway server, while keeping only the necessary tasks in the core system, i.e., the controller.

Fig. 3. The main components in the BDT system (gateway server: chunk buffer and chunk management, job aggregator, signal handler, transmission/forward module, eth0/eth1 interfaces; controller: request handler, scheduler, command queues and command dispatcher; core switch with flow table, reached via the OpenFlow interface; separate control path and data path).

- Controller: The controller is built on the Beacon framework [11], capitalizing on its rich features including cross-platform portability and multi-thread support. The Request Handler listens to transmission requests from the gateway servers at the different datacenters, each of which is responsible for collecting data transfer requests from the datacenter it resides in. For each incoming request, a unique global job id is assigned and returned to the corresponding gateway server.
The job id is reclaimed later, once either the request is rejected or the data transfer is completed. The Scheduler periodically retrieves the received requests and computes the chunk routing schedules, following one of the dynamic algorithms we design. Each accepted chunk is labelled with a timestamp indicating when its transmission should begin, and the transmission command is pushed into a command queue. The Command Dispatcher then periodically retrieves the commands with timestamps no later than the current time from the command queue, and sends them to the corresponding gateway servers and core switches. We implement the Command Dispatcher following the Adaptor design pattern, so that it can easily be customized to be compliant with different versions of the OpenFlow specification. Currently, our system is compliant with OpenFlow version 1.0.0, supported by most off-the-shelf commodity switch products.

- Gateway Server: Each gateway server is attached to two separate networks: one (eth0) is for control signaling with the controller, and the other (eth1) is for data transmission. Traffic along the control path is significantly less than that along the data path, since all signaling is via plain lightweight JSON messages. The data path corresponds exactly to the dedicated links between the datacenters. The Job Aggregator collects all the incoming data transfer requests from within the same datacenter, and forwards them to the controller for scheduling. Buffer Management manages all received chunks at the gateway server, including those destined to this datacenter and those temporarily stored while waiting to be forwarded by the Transmission/Forward module. The Transmission/Forward module follows the transmission instructions issued by the controller. For each transmission instruction, a new thread is created to transmit the corresponding chunk; therefore, multiple transmissions can happen concurrently, and the maximal number of transmission threads allowed is constrained by the link capacity. Actual data transmissions happen directly between two gateway servers, and the sender gateway server first notifies the recipient gateway server of the identity (job id, chunk id) of the chunk transmitted over the same TCP connection, to facilitate the management at the Buffer Management module of the latter.
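The timestamped command queue described above can be sketched as follows; the message format and method names are illustrative assumptions of this sketch, not the BDT system's actual interfaces:

# Sketch of a timestamped command queue: the scheduler pushes commands with a
# release time; the dispatcher periodically pops those whose time has come.
import heapq
import itertools
import time

class CommandQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # tie-breaker for equal timestamps

    def push(self, release_time, command):
        heapq.heappush(self._heap, (release_time, next(self._counter), command))

    def pop_due(self, now):
        """Return all commands whose timestamps are no later than `now`."""
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[2])
        return due

q = CommandQueue()
now = time.time()
q.push(now + 20, {"job": 1, "chunk": 7, "next_hop": "dc-B"})   # future slot
q.push(now - 1,  {"job": 2, "chunk": 0, "next_hop": "dc-C"})   # already due
print(q.pop_due(time.time()))   # only the second command is dispatched now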

The Signal Handler module handles the signals exchanged with the controller via the control path. Multiple kinds of signals are exchanged between the controller and a gateway server: for instance, the controller instructs the gateway server which chunks to transmit to which datacenters as the next hop at the current moment, and the gateway server may notify the controller of any transmission failure due to network failures.

- Core Switch: The core switch wires the co-located gateway server to the other gateway servers through the dedicated links. The controller reconfigures the flow tables inside each core switch via standard OpenFlow APIs, according to the calculated scheduling decisions.

Fig. 4 shows a sequence diagram describing the primary control signals between the different modules at the controller and a gateway server. Two important background threads are constantly running (marked with loop in Fig. 4) at the controller: (1) the scheduler thread periodically collects all the requests received during the last time slot to calculate the scheduling decisions, and inserts the results into the command queue; (2) the command dispatcher thread periodically retrieves the generated routing decisions and forwards them to the corresponding gateway servers and core switches.

5.2 Other Design Highlights

Dynamic component model. The modules in our system are integrated into the Java OSGi framework [27] as independent bundles, which can be deployed and upgraded at runtime without shutting down the controller. More specifically, we select Equinox [28] as both our development and deployment platform, which is a certified implementation of the OSGi R4.x core framework specification.

Feasibility in the production network. By integrating FlowVisor [29] as a transparent proxy between the controller and the core switches, we can logically slice dedicated bandwidths out of the physical links for our transmissions, achieving rigorous performance isolation from the ordinary traffic in the production network. By carefully designing the flow spaces, users can specify whether their traffic will go through the BDT network or the traditional best-effort one.

Easy deployment of new scheduling algorithms. Our controller implementation features a simple interface, BDT Schedulable, for incorporating different scheduling algorithms; for instance, our future-demand-friendly heuristic is implemented with less than 600 lines of code. Due to the space limit, readers are referred to Appendix C for more highlights of our system design.

Efficient handling of overwhelming numbers of TCP connections. A standard GNU/Linux distribution (e.g., Gentoo in our case) aims to optimize TCP performance for a wide range of environments, which may not be optimal for our system deployment. Therefore, we have carefully configured the TCP buffers and tuned the kernel performance parameters to optimize the bulk transfer performance in our system. Some key parameters are listed in Table 2.

TABLE 2
Key TCP parameters to configure

net.ipv4.tcp_congestion_control = cubic
net.core.somaxconn = 8192
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fack = 1

6 PERFORMANCE EVALUATION

6.1 Experimental Setting

Referring to a series of surveys (e.g., [30]) on the scale of commercial cloud systems, we emulate a geo-distributed cloud with 10 datacenters, each of which is emulated with a high-end IBM BladeCenter HS23 cluster [31].
Each gateway server is implemented on a mounted blade server with a 16-core Intel Xeon processor and 80 GB of RAM, of which only 4 cores and 4 GB of RAM are dedicatedly reserved for the gateway server functionality. Hooked to an OpenFlow-enabled HP 3500-series core switch [32] (equipped with a crossbar switching fabric of up to 153.6 Gbps), each gateway server is connected to 4 other gateway servers via CAT6 Ethernet cables (10 Gbps throughput). To emulate limited bandwidth on the dedicated links between datacenters, we configure the rate limits of each core switch on a per-port basis. To make the experimental environment more challenging, the controller module is deployed on a laptop with a 4-core Intel Core i7 and 16 GB of RAM, and a maximum of 10 GB of memory allocated to the Java virtual machine.

A number of user robots are emulated on the blade servers to submit data transfer requests to the respective gateway servers. In each second, the number of requests generated in each datacenter is randomly selected from the range [0, 10], with the size of the respective bulk data ranging from 100 MB to 100 GB (real random files are generated and transferred in our system). The destination of each data transfer is randomly selected among the other datacenters. The data transfer requests can be categorized into two types according to their deadlines: (1) urgent transmissions with hard deadlines (i.e., a deadline randomly generated between 10 seconds and 80 seconds), and (2) less urgent transmissions with relatively loose deadlines (i.e., 80 seconds to 640 seconds). We define α as the percentage of less urgent transmission tasks generated in the system at each time. The chunk size is 100 MB, reasonable for bulk data transfers with sizes up to tens of GB. The length of each time slot is 10 seconds, and the unit bandwidth is hence 80 Mbps. Without loss of generality, the weights of jobs U_J are assigned values between 10 and 100, indicating the importance of the jobs. Unless stated otherwise, the evaluation results presented in this section are based on collected logs (around 12 GB) after a continuous 40-minute run of the system, during which around 12.7 TB of transfer traffic is incurred when α = 10%.

We have implemented four chunk scheduling algorithms and compared their performance under the above general setup:

- RES: the bandwidth-reserving algorithm given in Sec. 4.1.
- INC: the dynamically-adjusting algorithm given in Sec. 4.2.
- HEU: our future-demand-friendly heuristic given in Alg. 1 and Alg. 2 in Sec. 4.3.
- RSF: a random store-and-forward algorithm, in which when and where a chunk is to be forwarded are randomly decided.
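For concreteness, the following sketch generates one second of request arrivals under the workload parameters stated above; the uniform distributions are our reading of the setup and are assumptions of this sketch:

# Sketch of the request arrival process described in Sec. 6.1.
import random

DCS = [f"dc{i}" for i in range(10)]

def generate_requests(alpha, now, rng=random):
    """Requests arriving in one second across all datacenters."""
    requests = []
    for src in DCS:
        for _ in range(rng.randint(0, 10)):          # 0-10 requests per DC
            dst = rng.choice([d for d in DCS if d != src])
            size_mb = rng.randint(100, 100_000)      # 100 MB to 100 GB
            if rng.random() < alpha:                 # less urgent task
                deadline = now + rng.randint(80, 640)
            else:                                    # urgent task
                deadline = now + rng.randint(10, 80)
            weight = rng.randint(10, 100)
            requests.append((src, dst, size_mb, deadline, weight))
    return requests

print(len(generate_requests(alpha=0.10, now=0)), "requests in this second")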

Fig. 4. The control logic between the controller and a gateway server (job request and job-id assignment via the Request Handler; periodic request fetching, schedule computation and command-queue updates at the Scheduler; transmission commands and OFMod commands dispatched by the Command Dispatcher to the gateway server and the core switch).

Most existing work on inter-datacenter data transfer either assumes known or predictable request arrival patterns [16] (e.g., diurnal patterns) or enforces routing path reservation for continuous network flows [12], while our algorithms require neither. Hence it is difficult to make direct comparisons with those works. Nevertheless, we establish special experiment setups in Sec. 6.3 and Sec. 6.5 (to make the algorithms comparable), and compare our algorithms with Postcard [17] and NetStitcher [16], respectively.

6.2 Aggregate Weight and Job Acceptance

Fig. 5 plots the aggregate weight of accepted jobs in the entire system over time. For RSF, we treat the jobs that finish their transfers before their deadlines as accepted. We can clearly see that INC performs the best, benefiting from its dynamic chunk routing adjustment. RSF performs the worst, necessitating a more efficient deadline-aware scheduling algorithm. An interesting observation is that HEU outperforms RES, though the former is a heuristic and the latter is the optimal solution of the optimal chunk scheduling problem. We believe this is credited to the future-demand friendliness implemented in HEU: RES optimally schedules transfer requests received in the latest time slot but does not consider any subsequently arriving requests; HEU is conservative in its bandwidth occupancy during its chunk routing scheduling, leaving more available bandwidth for potential, subsequent urgent jobs.

Fig. 7 plots the number of accepted jobs in the system over time. INC, RES and HEU accept a similar number of jobs over time, while RSF accepts the least. Compared with Fig. 5, this result reveals that INC is able to admit more important jobs than the other algorithms do, and performs the best in supporting service differentiation among jobs of different levels of practical importance. We further verify whether similar observations hold when the percentage of less urgent jobs, α, varies: Fig. 6 and Fig. 8 show similar trends at different values of α.

6.3 Computation Complexity of the Algorithms

We examine the computation complexity of the different algorithms, in terms of the time the controller spends on calculating the chunk routing schedules using each of the algorithms, referred to as the scheduling delay. Note that an important requirement is that the scheduling delay of an algorithm should be less than the length of a time slot, i.e., 10 seconds. In Fig. 9, the y axis represents the scheduling delay of the algorithms at the central controller, on a logarithmic scale. We can see that INC and RES consume more computation time, as expected, due to solving an integer optimization problem. However, as claimed earlier, both algorithms are still efficient enough under our practical settings, with scheduling delays much less than the scheduling interval length (10 seconds). HEU incurs computation overhead similar to RSF, implying the applicability of HEU to systems at larger scales. Although Postcard [17] targets a scenario different from ours (i.e., minimizing the operational cost), we still implement it to investigate the time the controller needs to run its scheduling algorithm, under a setting similar to that of the other algorithms: we assign an identical cost per traffic unit transferred on each link (a_{i,j} in [17]), and use the same job arrival series with α = 10%.
We can see in Fig. 9 that the scheduling delay of Postcard is much larger (sometimes more than 15 minutes), due to the much more complicated optimization problem formulated.

Fig. 10 plots the average scheduling delays of the algorithms over time, at different values of α. The scheduling delays of INC and RES increase as α grows, since both the number of chunks and the lifetime of a job are larger with larger α, which contributes to the complexity of the integer optimization problems. On the other hand, the scheduling delays of HEU remain at similar values as the system scale increases, which reveals the good scalability of our practical heuristic.

Similarly, we also evaluate the maximum number of concurrent job requests that can be handled within the scheduling delay constraint, i.e., 10 seconds.

Fig. 5. Aggregate weight of accepted jobs: α = 10%. Fig. 6. Aggregate weight of accepted jobs at different percentages of less urgent jobs. Fig. 7. Total number of accepted jobs: α = 10%. Fig. 8. Total number of accepted jobs at different percentages of less urgent jobs. Fig. 9. Scheduling delay (ms): α = 10%. Fig. 10. Scheduling delay at different percentages of less urgent jobs.

Fig. 17 plots the average maximum job rates for the three different algorithms. We can see that INC and RES can only accommodate rather limited job rates, i.e., 4.08 and 18.24 requests per second respectively, as opposed to the considerably higher job rate achieved by HEU. In our experimental setting, each transfer job has an average of 500 chunks (50 GB) to transmit, and thus the job rate can be greatly increased if the job size is reasonably constrained, since the complexity of solving the integer programs mainly depends on the number of chunks and the lifetime of the jobs (specified by the deadline).

6.4 Resource Consumption at Gateway Servers

Next we investigate the resource consumption at the gateway servers, shown in Fig. 11 to Fig. 16, due to handling data transmissions and control signaling. The results shown are averages of the corresponding measurements on the 10 gateway servers. The average CPU consumption per gateway server, given in Fig. 11, mostly represents the CPU usage for handling control signaling with the controller. We can see that the CPU consumption is similar when each of the four algorithms is deployed, and increases slowly as more transmission tasks accumulate in the system to be scheduled over time. The memory consumption in Fig. 13 follows a similar trend. The bandwidth consumption in Fig. 15 represents the average data volumes transmitted on each dedicated link. We see that the peak bandwidth consumption of RSF is much higher than that of the other three algorithms, which shows that the random behavior of RSF leads to high bandwidth consumption with a low aggregate weight achieved, while the other three, especially INC, incur less bandwidth consumption while achieving a better aggregate weight (see Fig. 5). Fig. 12, Fig. 14, and Fig. 16 further verify the better performance of our proposed online algorithms (i.e., INC, RES, HEU) as compared to RSF, with much slower growth rates of resource consumption as the system load increases. Compared to the CPU consumption, the memory and bandwidth consumption are relatively dominant on the gateway servers, where a large number of TCP connections are established.

Fig. 17. Maximum job request rates (requests per second) accommodated by the three algorithms within the 10-second scheduling delay constraint. Fig. 18. Size of data transferred (TB) over time (hours) under INC and NetStitcher.

6.5 Link Utilization

As mentioned earlier, the only real-world inter-datacenter bulk data transfer system we are aware of is NetStitcher [16]. Different from ours, NetStitcher relies on a priori knowledge of the traffic pattern on the network links over time, with the scheduling goal of minimizing the transfer time of a given volume of data. It applies the time-expanded graph technique, later adopted by Postcard [17], whose computation overhead prevents the scheduling from happening as frequently as ours. Therefore, to make a fair comparison, we set the scheduling interval in this set of experiments to 30 minutes, and the chunk size is configured to 18 GB accordingly. We consider data transfer requests arising from the same source datacenter and destined to the same destination datacenter, with the remaining 8 datacenters as potential intermediate store-and-forward nodes. An average of 100 transfer requests (180 GB, identical weights) are issued at the source datacenter every 30 minutes over a 12-hour span, with the deadlines configured to the end of the timespan (used only when running our algorithm). The link bandwidths are configured similarly to our previous experiments.

Fig. 18 presents the size of data arriving at the destination under our INC algorithm and under NetStitcher during the 12-hour span. We can see that the size of data transmitted by NetStitcher decreases gradually as the links become saturated, whereas INC performs better, with a stable throughput over time. We believe the reason is as follows: although NetStitcher allows inactive replicas of chunks cached at the intermediate nodes to tackle extreme situations (e.g., a node can go offline, or its uplink can be substantially reduced), only the active replica can be scheduled when the routing paths of a data transfer job need to be recomputed. Differently, INC can better exploit the available bandwidth in the system, since any intermediate datacenter that has received a specific chunk can serve as its source afterwards, achieving higher link utilization.

7 CONCLUSION

This paper presents our efforts to tackle an arising challenge in geo-distributed datacenters, i.e., deadline-aware bulk data transfers. Inspired by the emerging Software Defined Networking (SDN) initiative, which is well suited to the deployment of an efficient scheduling algorithm with a global view of the network, we propose a reliable and efficient underlying bulk data transfer service in an inter-datacenter network, featuring optimal routing of distinct chunks over time, which can be temporarily stored at intermediate datacenters and forwarded at carefully computed times. For practical application of the optimization framework, we derive three dynamic algorithms, targeting different levels of optimality and scalability. We also present the design and implementation of our Bulk Data Transfer (BDT) system, based on the Beacon platform and OpenFlow APIs. Experiments with realistic settings verify the practicality of the design and the efficiency of the three algorithms, based on extensive comparisons with schemes in the literature.
7 CONCLUSION
This paper presents our efforts to tackle an arising challenge in geo-distributed datacenters, i.e., deadline-aware bulk data transfers. Inspired by the emerging Software Defined Networking (SDN) initiative, which is well suited to the deployment of an efficient scheduling algorithm with a global view of the network, we propose a reliable and efficient underlying bulk data transfer service in an inter-datacenter network, featuring optimal routing for distinct chunks over time, which can be temporarily stored at intermediate datacenters and forwarded at carefully computed times. For practical application of the optimization framework, we derive three dynamic algorithms, targeting different levels of optimality and scalability. We also present the design and implementation of our Bulk Data Transfer (BDT) system, based on the Beacon platform and OpenFlow APIs. Experiments with realistic settings verify the practicality of the design and the efficiency of the three algorithms, based on extensive comparisons with schemes in the literature.

APPENDIX A
PROOF OF THEOREM 1
Proof: First, the inequality sign ($\le$) in constraint (d) can be more rigorously specified as the equality sign ($=$), enforced by constraints (a), (b) and (c). For ease of reference, we denote the revised problem as $P_{orig}$. Next, we construct a new graph (Fig. 19). Let $t_{min}$ be the minimal value among the earliest time slots at which the job transmissions can happen, i.e., $t_{min} = \min_{J \in \mathcal{J}} t_J$. Similarly, we define $t_{max}$ as the latest deadline of all the job transmissions, i.e., $t_{max} = \max_{J \in \mathcal{J}} \{t_J + T_J\}$. Then, the detailed construction of the new graph is carried out as follows:
- The topology contains $N \times (t_{max} - t_{min} + 1)$ nodes ($N = |\mathcal{N}|$), with the nodes in each row $i$ represented as $n_{i,t_{min}}, n_{i,t_{min}+1}, \ldots, n_{i,t_{max}}$.
- Between each pair of nodes $n_{r_1,j}$, $n_{r_2,j+1}$ ($r_1, r_2 \in [1,N]$, $j \in [t_{min}, t_{max}-1]$) in two neighbouring columns, add a directed link $n_{r_1,j} \rightarrow n_{r_2,j+1}$, with bandwidth $B_{r_1,r_2}$ if $r_1 \neq r_2$, or $+\infty$ if $r_1 = r_2$.
- The source and destination nodes of each chunk $w \in W_J$ belonging to job $J \in \mathcal{J}$ correspond to nodes $n_{S_J,t_J}$ and $n_{D_J,t_J+T_J}$ in Fig. 19, respectively.
We consider a new optimization problem, denoted as $P_{new}$, which computes the job acceptance and the routing paths of each chunk of each job in the newly constructed graph, such that the aggregate weight of all accepted jobs is maximized.
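The construction above can be summarized with a short sketch. The fragment below is illustrative only: the class and method names (TimeExpandedGraphSketch, build) and the array-based representation are our own assumptions, not part of the BDT code; it simply materializes the link capacities $B_{r_1,r_2}$ (or $+\infty$ on storage links) between neighbouring columns.

// Illustrative sketch of the time-expanded graph used in the proof of Theorem 1.
// capacity[j][r1][r2] is the capacity of link n_{r1, tMin+j} -> n_{r2, tMin+j+1}.
final class TimeExpandedGraphSketch {
    static double[][][] build(double[][] bandwidth, int tMin, int tMax) {
        int n = bandwidth.length;                 // N datacenters
        int cols = tMax - tMin + 1;               // one column per time slot
        double[][][] capacity = new double[Math.max(cols - 1, 0)][n][n];
        for (int j = 0; j < cols - 1; j++) {
            for (int r1 = 0; r1 < n; r1++) {
                for (int r2 = 0; r2 < n; r2++) {
                    capacity[j][r1][r2] = (r1 == r2)
                            ? Double.POSITIVE_INFINITY  // storing the chunk at the same datacenter
                            : bandwidth[r1][r2];        // transmitting it to another datacenter in one slot
                }
            }
        }
        return capacity;
    }
}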

In particular, if any transmission occurs from $r_1$ to $r_2$ at time $j$, it corresponds to a link $n_{r_1,j} \rightarrow n_{r_2,j+1}$ in Fig. 19. If each job is considered as an individual commodity, problem $P_{new}$ is a maximum multi-commodity flow problem, which has been proven to be NP-complete [33]. It is easy to see that any job $J \in \mathcal{J}$ is accepted in $P_{new}$ if and only if $J$ is accepted in $P_{orig}$. On the other hand, it takes only polynomial time to reduce problem $P_{new}$ to problem $P_{orig}$, by consolidating all the nodes in a single row as well as the associated links, with the following detailed steps:
- For all the nodes in the $r$-th row, i.e., $n_{r,t_{min}}, n_{r,t_{min}+1}, \ldots, n_{r,t_{max}}$, create a single node $n_r$.
- For each pair of links between any two neighbouring columns at different rows, e.g., $n_{r_1,j} \rightarrow n_{r_2,j+1}$, $j \in [t_{min}, t_{max}-1]$, create a new link between the two newly created nodes $n_{r_1}$ and $n_{r_2}$ with bandwidth $B_{r_1,r_2}$.
- Remove all the links between any two neighbouring columns at the same rows.
- Remove all the original nodes and links.
Hence $P_{new}$ is polynomial-time reducible to $P_{orig}$, which implies that $P_{new}$ is no harder than $P_{orig}$. Based on the reduction theorem (Lemma 34.8 in [34]), we can derive that $P_{orig}$ is NP-hard. The original problem in (1) is hence also NP-hard.
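For concreteness, the row-consolidation step of this reduction can be sketched as follows, operating on the capacity array produced by the previous sketch. Again the names are our own assumptions; this is not the paper's implementation.

// Illustrative sketch of the polynomial-time reduction from P_new to P_orig:
// collapse all time-expanded nodes of row r into a single node n_r, keep one link
// of bandwidth B_{r1,r2} between different rows, and drop the intra-row (storage) links.
final class RowConsolidationSketch {
    static double[][] consolidate(double[][][] timeExpandedCapacity) {
        int n = timeExpandedCapacity[0].length;   // assumes at least one column pair
        double[][] consolidated = new double[n][n];
        for (int r1 = 0; r1 < n; r1++) {
            for (int r2 = 0; r2 < n; r2++) {
                if (r1 != r2) {
                    // every neighbouring column pair carries the same bandwidth B_{r1,r2}
                    consolidated[r1][r2] = timeExpandedCapacity[0][r1][r2];
                }
                // r1 == r2: intra-row links are removed (capacity left at 0)
            }
        }
        return consolidated;
    }
}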

[Fig. 19: The newly-constructed network for $P_{new}$: nodes $n_{i,t}$ arranged in $N$ rows and $(t_{max}-t_{min}+1)$ columns, a possible transmission of a job $J_0$, and its source and destination nodes $n_{S_{J_0},t_{J_0}}$ and $n_{D_{J_0},t_{J_0}+T_{J_0}}$.]

APPENDIX B
PROOF OF THEOREM 2
Proof: Let us first check the complexity of Alg. 1. Assume $N_J$ represents the number of jobs to be scheduled, $w_{max}$ represents the maximal number of chunks in a single job, $T_{max}$ represents the maximally allowed number of time slots for the transmission of a single job, and $N$ represents the number of datacenters. A detailed complexity analysis of the individual statements in the algorithm is as follows:
1) The complexity of sorting the requests (line 1) is $O(N_J \log N_J)$.
2) The complexity of searching for a shortest path is $O(N^2 \log N)$ [35]. Once the path is fixed, it takes at most $O(N \cdot T_{max})$ steps to verify whether the path is valid, as the path has at most $N$ hops and at most $T_{max}$ time slots need checking for each hop.
3) In line 6, the transmission schedules $x^w_{m,n}(t)$ of at most $w_{max}$ chunks need to be determined, and the complexity is therefore $O(N^2 T_{max} w_{max})$.
4) The complexities of lines 9 and 10 are $O(N \cdot T_{max})$.
As a result, the overall complexity of Alg. 1 is $O(N_J \log N_J + N_J w_{max}(N^2 \log N + N T_{max} + N^2 T_{max} w_{max} + N T_{max}))$, which is polynomial. Similarly, Alg. 2 can be shown to have polynomial-time complexity with an even simpler argument. Theorem 2 then readily follows.

APPENDIX C
THE PROGRAMMING INTERFACE OF BDT
To-be-scheduled jobs are automatically updated by the framework via the class BDT_Job_Set, and the singleton object CommandQueue is used to obtain the command queue that stores the scheduling commands to be triggered at the specified times. Example code snippets are listed as follows:

//BDT_Schedulable.java
public interface BDT_Schedulable {
    public void schedule();
}

//BDT_Job_Set.java
/**
 * clear: whether to remove the previously accepted jobs.
 * The set contains only the latest jobs if clear is true;
 * the set contains both the latest jobs and the previously accepted ones if clear is false.
 */
public static void initialize(boolean clear) {}

//Some concrete scheduler
CommandQueue cmd_queue = CommandQueue.getQueue();
cmd_queue.add_command(cmd.pre_fetch_time, cmd.time_to_invoke, cmd.cmd_to_switch,
                      cmd.src_ip, cmd.dest_ip, cmd.job_id, cmd.chunk_index);
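To show how these pieces fit together, the sketch below implements BDT_Schedulable and enqueues the commands computed for the next interval. Only BDT_Schedulable, CommandQueue.getQueue() and add_command(...) come from the snippets above; the Command holder class, its field types and the computeCommandsForNextInterval() helper are hypothetical placeholders, not part of the BDT code.

// Illustrative scheduler sketch; everything beyond the documented interface is assumed.
public class ExampleChunkScheduler implements BDT_Schedulable {

    // Hypothetical value object mirroring the arguments of add_command(...).
    static class Command {
        long pre_fetch_time, time_to_invoke;
        String cmd_to_switch, src_ip, dest_ip;
        int job_id, chunk_index;
    }

    @Override
    public void schedule() {
        CommandQueue cmd_queue = CommandQueue.getQueue();
        for (Command cmd : computeCommandsForNextInterval()) {
            cmd_queue.add_command(cmd.pre_fetch_time, cmd.time_to_invoke, cmd.cmd_to_switch,
                                  cmd.src_ip, cmd.dest_ip, cmd.job_id, cmd.chunk_index);
        }
    }

    // Placeholder for the chunk routing computation (e.g., one of the three algorithms).
    private java.util.List<Command> computeCommandsForNextInterval() {
        return java.util.Collections.emptyList();
    }
}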
ACKNOWLEDGEMENT
The research was supported in part by Wedge Network Inc., and by grants from RGC under the contracts HKU and HKU.

REFERENCES
[1] Data Center Map, http://www.datacentermap.com/datacenters.html.
[2] K. K. Ramakrishnan, P. Shenoy, and J. Van der Merwe, "Live Data Center Migration across WANs: A Robust Cooperative Context Aware Approach," in Proceedings of the 2007 SIGCOMM Workshop on Internet Network Management (INM '07), New York, NY, USA, 2007.
[3] Y. Wu, C. Wu, B. Li, L. Zhang, Z. Li, and F. C. M. Lau, "Scaling Social Media Applications into Geo-Distributed Clouds," in INFOCOM, 2012.
[4] J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," Commun. ACM, vol. 51, no. 1, Jan. 2008.
[5] A. Greenberg, G. Hjalmtysson, D. A. Maltz, A. Myers, J. Rexford, G. Xie, H. Yan, J. Zhan, and H. Zhang, "A Clean Slate 4D Approach to Network Control and Management," ACM SIGCOMM Computer Communication Review, vol. 35, no. 5, pp. 41-54, 2005.
[6] SDN, https://www.opennetworking.org/sdn-resources/sdn-definition.
[7] N. McKeown, T. Anderson, H. Balakrishnan, G. M. Parulkar, L. L. Peterson, J. Rexford, S. Shenker, and J. S. Turner, "OpenFlow: Enabling Innovation in Campus Networks," Computer Communication Review, vol. 38, no. 2, pp. 69-74, 2008.
[8] U. Hoelzle, "OpenFlow @ Google," Open Networking Summit, 2012.
[9] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu et al., "B4: Experience with a Globally-deployed Software Defined WAN," in Proceedings of the ACM SIGCOMM 2013 Conference, ACM, 2013, pp. 3-14.
[10] S. J. Vaughan-Nichols, "OpenFlow: The Next Generation of the Network?" Computer, vol. 44, no. 8, pp. 13-15, 2011.
[11] Beacon Home, https://openflow.stanford.edu/display/Beacon/Home.
[12] C. Wilson, H. Ballani, T. Karagiannis, and A. Rowstron, "Better Never than Late: Meeting Deadlines in Datacenter Networks," in Proceedings of the ACM SIGCOMM, New York, NY, USA, 2011.
[13] B. Vamanan, J. Hasan, and T. Vijaykumar, "Deadline-aware Datacenter TCP (D2TCP)," in Proceedings of the ACM SIGCOMM 2012 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, ACM, 2012.
[14] M. Alizadeh, S. Yang, M. Sharif, S. Katti, N. McKeown, B. Prabhakar, and S. Shenker, "pFabric: Minimal Near-optimal Datacenter Transport," in Proceedings of the ACM SIGCOMM 2013 Conference (SIGCOMM '13), New York, NY, USA: ACM, 2013.
[15] Y. Chen, S. Jain, V. K. Adhikari, Z.-L. Zhang, and K. Xu, "A First Look at Inter-data Center Traffic Characteristics via Yahoo! Datasets," in INFOCOM, 2011.
[16] N. Laoutaris, M. Sirivianos, X. Yang, and P. Rodriguez, "Inter-datacenter Bulk Transfers with NetStitcher," in Proceedings of the ACM SIGCOMM 2011 Conference, New York, NY, USA, 2011.
[17] Y. Feng, B. Li, and B. Li, "Postcard: Minimizing Costs on Inter-Datacenter Traffic with Store-and-Forward," in Proceedings of the 32nd International Conference on Distributed Computing Systems Workshops (ICDCSW '12), Washington, DC, USA: IEEE Computer Society, 2012.
[18] E. Köhler, K. Langkau, and M. Skutella, "Time-Expanded Graphs for Flow-Dependent Transit Times," in Proc. 10th Annual European Symposium on Algorithms, Springer, 2002.

[19] B. B. Chen and P. V.-B. Primet, "Scheduling Deadline-constrained Bulk Data Transfers to Minimize Network Congestion," in Proceedings of the Seventh IEEE International Symposium on Cluster Computing and the Grid, 2007.
[20] B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown, "ElasticTree: Saving Energy in Data Center Networks," in NSDI, vol. 3, 2010.
[21] C.-Y. Hong, S. Kandula, R. Mahajan, M. Zhang, V. Gill, M. Nanduri, and R. Wattenhofer, "Achieving High Utilization with Software-Driven WAN," in Proceedings of the ACM SIGCOMM, Hong Kong, China, 2013.
[22] M. A. Rodriguez and R. Buyya, "Deadline Based Resource Provisioning and Scheduling Algorithm for Scientific Workflows on Clouds," IEEE Transactions on Cloud Computing, vol. 2, no. 2, April 2014.
[23] A. R. Curtis, J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee, "DevoFlow: Scaling Flow Management for High-performance Networks," SIGCOMM Comput. Commun. Rev., vol. 41, no. 4, Aug. 2011.
[24] H. Liu, C. F. Lam, and C. Johnson, "Scaling Optical Interconnects in Datacenter Networks: Opportunities and Challenges for WDM," in High Performance Interconnects (HOTI), 2010 IEEE 18th Annual Symposium on, IEEE, 2010.
[25] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin, Network Flows: Theory, Algorithms, and Applications. Englewood Cliffs, NJ: Prentice Hall, 1993.
[26] CPLEX Optimizer, http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/.
[27] OSGi Alliance, http://www.osgi.org/Main/HomePage.
[28] Equinox, http://eclipse.org/equinox/.
[29] R. Sherwood, G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, N. McKeown, and G. Parulkar, "FlowVisor: A Network Virtualization Layer," OpenFlow Switch Consortium, Tech. Rep., 2009.
[30] Google Data Center FAQ, http://www.datacenterknowledge.com/archives/2012/05/15/google-data-center-faq/.
[31] IBM BladeCenter HS23, http://www-03.ibm.com/systems/bladecenter/hardware/servers/hs23/specs.html.
[32] HP 3500 and 3500 yl Switch Series, http://h17007.www1.hp.com/us/en/networking/products/switches/HP_3500_and_3500_yl_Switch_Series/index.aspx.
[33] S. Even, A. Itai, and A. Shamir, "On the Complexity of Time Table and Multi-commodity Flow Problems," in Proceedings of the 16th Annual Symposium on Foundations of Computer Science, Washington, DC, USA: IEEE Computer Society, 1975.
[34] T. H. Cormen, C. Stein, R. L. Rivest, and C. E. Leiserson, Introduction to Algorithms, 2nd ed. McGraw-Hill Higher Education, 2001.
[35] R. Bellman, "On a Routing Problem," Quarterly of Applied Mathematics, vol. 16, pp. 87-90, 1958.

Yu Wu received the BE and ME degrees in computer science and technology from Tsinghua University, Beijing, China, in 2006 and 2009, respectively, and the PhD degree in computer science from the University of Hong Kong, Hong Kong, in 2013. He is currently a Postdoctoral Scholar with the Department of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ, USA. His research interests include cloud computing, mobile computing, network virtualization and content-centric networking.

Zhizhong Zhang received his BSc degree in 2011 from the Department of Computer Science, Sun Yat-sen University, China. He is currently a PhD student in the Department of Computer Science, The University of Hong Kong, Hong Kong. His research interests include networks and systems.

Chuan Wu received her BE and ME degrees in 2000 and 2002 from the Department of Computer Science and Technology, Tsinghua University,
China, and her PhD degree in 2008 from the Department of Electrical and Computer Engineering, University of Toronto, Canada. She is currently an associate professor in the Department of Computer Science, The University of Hong Kong, China. Her research interests include cloud computing, peer-to-peer networks and online/mobile social networks. She is a member of the IEEE and the ACM.

Chuanxiong Guo (M'03) received the PhD degree in communications and information systems from the Nanjing Institute of Communications Engineering, Nanjing, China, in 2000. He is a Senior Researcher with the Wireless and Networking Group, Microsoft Research Asia, Beijing, China. His research interests include network systems design and analysis, datacenter networking, data-centric networking, network security, networking support for operating systems, and cloud computing.

Zongpeng Li received his BE degree in Computer Science and Technology from Tsinghua University (Beijing) in 1999, his MS degree in Computer Science from the University of Toronto in 2001, and his PhD degree in Electrical and Computer Engineering from the University of Toronto in 2005. Since August 2005, he has been with the Department of Computer Science at the University of Calgary. Zongpeng's research interests are in computer networks and network coding.

Francis C.M. Lau (SM, IEEE) received his PhD in Computer Science from the University of Waterloo, Canada. He has been a faculty member in the Department of Computer Science, The University of Hong Kong, since 1987, where he served as the department head from 2000 to 2006. His research interests include networking, parallel and distributed computing, algorithms, and the application of computing to art. He is the editor-in-chief of the Journal of Interconnection Networks.
