Network Functions Virtualization with Soft Real-Time Guarantees


Yang Li, Linh Thi Xuan Phan and Boon Thau Loo
University of Pennsylvania

Abstract: Network functions are increasingly being commoditized as software appliances on off-the-shelf machines, popularly known as Network Functions Virtualization (NFV). While this trend provides economies of scale, a key challenge is to ensure that the performance of virtual appliances matches that of hardware boxes. We present the design and implementation of NFV-RT, a system that dynamically provisions resources in an NFV environment to provide timing guarantees. Specifically, given a set of service chains that each consist of some network functions, NFV-RT aims at maximizing the total number of requests that can be assigned to the cloud for each service chain, while ensuring that the assigned requests meet their deadlines. Our approach uses a linear programming model with randomized rounding to efficiently and proactively obtain a near-optimal solution. Our simulation shows that, given a cloud with thousands of machines and service chains, NFV-RT requires only a few seconds to compute the solution, while accepting three times as many requests as baseline heuristics. In addition, under some special settings, NFV-RT can provide significant performance improvement. Our evaluation on a local testbed shows that 94% of the packets of the submitted requests meet their deadlines, which is three times the rate of previous reactive-based solutions.

I. INTRODUCTION

Network functions are being commoditized as software appliances on off-the-shelf machines, popularly known as Network Functions Virtualization (NFV) [5]. NFV enables elastic scaling and pooling of resources in a cost-effective manner and can be deployed in a more agile fashion than traditional hardware-based appliances. Example network services include functionalities for routing, load balancing, deep-packet inspection, and firewalls [4], [16]. While this approach provides economies of scale, a key challenge is to ensure that the performance of these virtual appliances matches that of traditional hardware boxes. This is because, unlike dedicated hardware appliances, virtual network appliances are deployed in virtual machines (VMs) on commodity servers, e.g., in the cloud. Current solutions are either heuristics-based (e.g., autoscalers on public clouds), which provide no guarantees, or reactive in nature, where the cloud operator is alerted only after the SLAs have been violated.

To address this challenge, this paper presents NFV-RT, a platform for composable cloud services with soft real-time guarantees. NFV-RT dynamically provisions resources in an NFV environment to provide packet-wise timing guarantees to service requests. Specifically, we make the following contributions:

Mathematical modeling and analysis. We formulate the resource provisioning problem that NFV-RT has to solve with a mathematical model. Given a set of service chains that each consist of some network functions, NFV-RT aims at maximizing the total number of requests that can be assigned to the cloud for each service chain, while ensuring that the assigned requests meet their deadlines. Our approach uses service chain consolidation with timing abstraction and a linear programming technique with randomized rounding to proactively obtain a near-optimal solution. This approach scales well for large data center deployments, and it can optimize real-time performance even in online scenarios where new NFV chains are added on demand.

Implementation and evaluation. To evaluate our approach, we have implemented a simulator for NFV-RT.
Our simulation results show that, for a cloud with thousands of machines and thousands of service chain requests, NFV-RT took only a few seconds to compute the solution, while accepting three times more requests than a baseline heuristic does. The results also demonstrate that, under some special settings, NFV-RT can provide significant performance improvement. In addition, we have developed a prototype implementation of NFV-RT, based on RT-Xen [22], a real-time virtualization platform built upon Xen. Our evaluation on a local 40-core testbed shows that NFV-RT enabled 94% of the packets of the submitted requests to complete before their deadlines, which is three times the rate achieved by the baseline solution.

II. CLOUD MODEL

We consider a cloud provider that supports many cloud tenants, each of whom runs one or more NFV service chains in the cloud, servicing traffic on behalf of her customers. The goal of the cloud provider is to develop a resource provisioning strategy that maximizes the number of requests with SLA guarantees (meeting deadlines) while ensuring isolation among tenants. Towards this, we aim to find an assignment such that (i) the total number of accepted requests of all tenants is maximized, (ii) the delay experienced by each packet of an accepted request does not exceed its relative deadline, and (iii) services of different tenants cannot execute in the same VM. In this paper, we focus on single-path flows through the service chains; enabling multi-path flows is an avenue for future work.

NFV-RT is a system for resource provisioning that the cloud provider can use to meet the above goal. We begin by presenting the mathematical model used by NFV-RT. Our model considers a typical fat-tree [10] network topology popularly used in data centers, where each node v denotes a switch or a rack of machines, and each edge (v, v') denotes a network link connecting v and v'. Figure 1 shows a three-layer fat tree: the top, middle, and bottom layers are made of sets of core switches, end-of-row switches (EoR), and racks of machines (M), respectively. Each core switch is connected to all the EoRs and serves as a point of presence (PoP) of the cloud. All NFV traffic must enter and leave through the PoPs.

Pod. The cloud has multiple disjoint pods that are connected through core switches (e.g., two pods in Figure 1). Each pod contains a number of EoRs and their connected racks.

Fig. 1. An example fat-tree topology (core switches, EoR switches, ToR switches, racks, and machines).

Rack. Each rack contains a top-of-rack switch (ToR) and several physical machines that each run a virtual machine monitor (VMM). We model the collection of machines within a rack m as an aggregated node with $cpu_m$ cores, where $cpu_m$ is the total number of cores of the machines in the rack m.

Given a set of predefined executable services of an NFV service chain, we denote by $wcet_s(L)$ the worst-case execution time (WCET) that a service s takes to process a packet of size L. (The WCET of a service can be obtained using well-known WCET analysis methods [20].) As the size of the output packet of a service can differ from that of the input packet (e.g., in the case of compressing/decompressing or encryption/decryption services), we denote by $\gamma_s$ the scaling function of s: if L is the size of an input packet of s, then the size of its corresponding output packet is at most $\gamma_s(L)$.

Tenant. The profile of a tenant is defined as $tenant_i = (v_i^s, v_i^t, \langle s_i^1, s_i^2, \dots, s_i^{c_i} \rangle, d_i)$, where $v_i^s$ and $v_i^t$ are the core switches through which the traffic enters and leaves the cloud, respectively; $\langle s_i^1, s_i^2, \dots, s_i^{c_i} \rangle$ is the service chain (of length $c_i$) that the tenant intends to route all her customers' traffic through; and $d_i$ is the relative deadline, which is the longest tolerable packet-wise end-to-end delay.

The traffic demand of a customer of a tenant is called a request. A request is a tuple $request_i = (tid_i, \alpha_i)$, where $tid_i$ is the identifier of the tenant, and $\alpha_i$ is the maximum packet rate (packets/s). NFV-RT considers one service chain per tenant, but it can easily be extended to allow more service chains per tenant. For ease of presentation, we assume the same packet size for all requests of a tenant. The size of each incoming packet of tenant i is denoted by $L_i^0$, and the size of the output packet after traversing the first j services of the tenant's service chain is denoted by $L_i^j$, i.e., $L_i^j = \gamma_{s_i^j}(L_i^{j-1})$ for all j > 0.

For each incoming request, NFV-RT needs to spawn VMs on the physical machines to execute the services. An assignment of a request from tenant i involves assigning (a) one or more VMs, and their corresponding machines, that execute every service of the tenant, and (b) a path starting from $v_i^s$, passing through the VMs that execute the services, following the order of the service chain, and ending with $v_i^t$. If a request is accepted, an assignment for the request must be given. An assignment of a set of requests is made of an assignment of every request in the set.

III. OVERVIEW OF NFV-RT

Figure 2 shows an overview of NFV-RT. It contains two interacting components: (i) the controller, which is responsible for communicating with customers of the cloud's tenants and for performing the deployment of the services on the cloud based on a resource assignment; and (ii) the resource manager, which is responsible for determining an assignment for new requests, and for keeping track of the current status of the cloud and the current assignment of existing requests.

Fig. 2. An overview of NFV-RT (static input: cloud model and tenants; dynamic input: requests; the controller sets forwarding rules on switches and creates/deletes VMs with services on machines running a real-time VMM).

The controller takes two sets of inputs: (1) the cloud model (cf. Section II) and a set of tenants, and (2) a set of customers' requests, which may arrive dynamically at run time. It then passes these inputs to the resource manager, which will perform the request admission test and compute an assignment for the accepted requests.
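To make these static and dynamic inputs concrete, they can be written down as a few simple record types. The following minimal Python sketch is ours; the type and field names are illustrative, not taken from the NFV-RT implementation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Tenant:
    """Profile tenant_i = (v_s, v_t, <s_1, ..., s_c>, d) from Section II."""
    tid: int           # tenant identifier
    v_s: str           # ingress core switch (PoP)
    v_t: str           # egress core switch (PoP)
    chain: List[str]   # service chain s_1, ..., s_c
    deadline: float    # relative deadline d, in seconds

@dataclass
class Request:
    """Request = (tid, alpha): one customer's traffic demand."""
    tid: int           # identifier of the owning tenant
    alpha: float       # maximum packet rate, in packets/s

@dataclass
class Rack:
    """A rack, modeled as one aggregated node with cpu_m cores."""
    name: str
    cpu_m: int         # total number of cores across the rack's machines

# Example: a tenant with a three-service chain and a 5 ms deadline.
t = Tenant(tid=1, v_s="core0", v_t="core1",
           chain=["s1", "s2", "s3"], deadline=0.005)
r = Request(tid=1, alpha=200.0)  # 200 packets/s
```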
The controller will then deploy the computed assignment on the cloud (e.g., it communicates with the VMM in each machine to create and configure VMs, and sets up network paths for the accepted requests).

To enable real-time guarantees, each machine of the cloud runs a real-time VMM that supports real-time scheduling at both the VMM and guest OS levels (e.g., RT-Xen [22]). NFV-RT assumes that the VMM scheduler and the guest OS within each VM follow the Earliest Deadline First (EDF) scheduling policy [11] (which is supported by existing VMM implementations [22]), so as to achieve high resource utilization. However, our system can easily be extended to other scheduling policies.

The resource manager works in two phases, as illustrated in Figures 3(a) and 3(b). In the initial phase, it performs service chain consolidation and timing abstraction to minimize the communication and VM overhead, based on which it then determines an assignment for an initial set of requests. In the online phase, it dynamically determines a new assignment for each new set of requests as they arrive at run time. In the next two sections, we describe these two phases in greater detail.

IV. INITIAL RESOURCE PROVISIONING

As shown in Figure 3(a), the initial assignment consists of three consecutive stages: (1) service request consolidation, (2) pod assignment, and (3) machine assignment per pod. We describe these stages in detail below.

A. Stage 1: Service Request Consolidation

Overview. In this stage, we consolidate the service chain of each tenant¹ into a consolidated service chain (CSC) and abstract its resource requirements into a timing interface. The CSC contains the same services as the original service chain does, but adjacent services may be merged together to form a consolidated service to be executed on a single VM. To reduce the VM overhead, our consolidation aims to minimize the number of consolidated services. The CSC timing interface gives the conditions on the incoming packet rate and the CPU resource required to ensure that each instance of the CSC meets the deadline.

¹To ensure isolation among tenants, two tenants cannot share the same VM; hence, consolidation is done for each tenant individually.

Fig. 3. The NFV-RT resource provisioning method: (a) initial assignment phase; (b) online assignment phase.

Based on this information, we can send requests to a CSC instance if they satisfy the packet rate condition, and we can assign the consolidated services to machines if the machines can satisfy the required CPU resource. Specifically, the CSC timing interface is made of two parts: (i) the CPU resource required by each consolidated service, given in the form of (period, budget), which specifies that this consolidated service requires budget execution time units every period time units; and (ii) the maximum packet rate, i.e., the maximum number of packets that can be sent through each instance of the CSC every second.

The CSC satisfies the following property: if the traffic of some requests goes through a CSC instance, then the packets of the requests can meet their deadlines if (a) the total packet rate sent through this instance is at most the CSC's maximum packet rate, (b) the VMs that execute the consolidated services of the instance are each given the required CPU resource, and (c) there is sufficient bandwidth between the VMs that execute any two adjacent consolidated services.

Figure 4 shows an example service chain with three services s1, s2, s3, whose WCETs are 0.5 ms, 1 ms, and 2 ms. It is consolidated into a CSC with two consolidated services, [s1, s2] and [s3]. The CSC timing interface specifies that (i) [s1, s2] requires a budget of 1.5 ms (equal to the sum of its WCETs) of CPU time every 2 ms, and [s3] requires 2 ms of CPU time every 2 ms; and (ii) this CSC can receive a maximum packet rate of 500 packets/s.

Fig. 4. An example of service request consolidation. The number within each service represents its WCET; requests with rates between 100 and 500 packets/s are packed into two CSC instances.

After the service consolidation, the requests of each tenant are packed into a number of aggregated requests (based on the requests' packet rates) that will be assigned to different instances of the CSC. NFV-RT will then determine an assignment for these CSC instances. By construction, every aggregated request that is accepted and receives an assignment will meet its packet-wise deadline, assuming that the scheduling overhead is negligible. We next discuss the consolidation and service aggregation in detail.

1) Conditions for Consolidation and Abstraction: Consider a tenant i, and denote the maximum packet rate of its CSC by $cap_i$ (# packets/s). Then, the CSC should satisfy three conditions: (1) Feasibility condition: each consolidated service requires at most one core; (2) Traffic condition: the value of $cap_i$ is at most one tenth of the smallest link bandwidth of the cloud; and (3) Deadline condition: if we take the packet arrival period (i.e., $1/cap_i$) as the maximum delay for executing each service of the CSC (instead of the WCET), then the total delay for executing all consolidated services and transmitting a packet between subsequent consolidated services is at most the tenant's deadline. We now discuss each condition in detail.

Feasibility Condition. Every consolidated service of a CSC instance of the tenant will be executed by a unique VM. As in existing real-time systems research [11], we assume that a VM can run on only one core at any time. Therefore, the CPU utilization of each consolidated service must be no more than 1 to feasibly schedule it on a core, i.e.,

$$\max_{j=1}^{c_i} \{\, cap_i \cdot wcet_{s_i^j}(L_i^{j-1}) \,\} \le 1. \quad (1)$$
In Eq. (1), $s_i^j$ denotes the j-th consolidated service, $wcet_{s_i^j}(L_i^{j-1})$ denotes its WCET (for each input packet, which is of size $L_i^{j-1}$), and thus its utilization is $wcet_{s_i^j}(L_i^{j-1}) \cdot cap_i$.

Traffic Condition. We impose a soft constraint on the maximum amount of traffic that can traverse a CSC instance, based on the intuition that it is easier to find small chunks of available bandwidth than to find a large one. NFV-RT uses one tenth of the smallest link bandwidth of the cloud as the upper bound on the traffic rate, i.e.,

$$\max_{j=0}^{c_i} (cap_i \cdot L_i^j) \le \frac{\min_{e_k} \{ b_k \}}{10}, \quad (2)$$

where $b_k$ is the bandwidth of each link $e_k$. This is an empirical bound that comes from the analysis of the linear program (LP). Essentially, it ensures that, after we obtain a (fractional) solution of the LP, randomly rounding the obtained solution into an integer assignment will output an almost optimal result. Due to space constraints, we omit the details here.

Deadline Condition. We first briefly explain our estimation of the maximum delay a consolidated service takes to process a packet. Recall from Section III that, to enable predictable delay, each machine runs a real-time VMM, such as RT-Xen [22], which schedules VMs on the physical cores under EDF. NFV-RT will create VMs with designated periods $p_j$ and budgets $b_j$, where $p_j$ and $b_j$ are specified by the interfaces of the consolidated services. In other words, each VM j requests the CPU for at most $b_j$ execution time units every $p_j$ time units, where $0 < b_j \le p_j$. Due to this, when multiple VMs share the same core under EDF, if the total utilization of all the VMs is no more than one (i.e., $\sum_j b_j / p_j \le 1$), every VM j is guaranteed to receive at least $b_j$ execution time units every $p_j$ time units [11]. Since the $b_j$ CPU time units can be given towards the end of the period, each consolidated service may experience a worst-case delay of up to $p_j$ time units.
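Taken together, the feasibility and traffic conditions already bound the admissible packet rate. Below is a minimal sketch of that bound, assuming per-service WCETs and packet sizes are given; the function and argument names are ours, not NFV-RT's:

```python
def max_cap_upper_bound(wcets, sizes, min_link_bw):
    """Upper bound on cap implied by Eqs. (1) and (2).

    wcets[j]    : WCET (s) of consolidated service j for its input packet
    sizes[j]    : packet size (bits) entering service j (last entry = output)
    min_link_bw : smallest link bandwidth in the cloud (bits/s)
    """
    # Feasibility (Eq. 1): cap * wcet_j <= 1 for every consolidated service.
    cap_feas = min(1.0 / w for w in wcets)
    # Traffic (Eq. 2): cap * L_j <= min_k b_k / 10 for every packet size.
    cap_traffic = (min_link_bw / 10.0) / max(sizes)
    return min(cap_feas, cap_traffic)

# Figure 4's numbers: WCETs 1.5 ms and 2 ms; assume 12 kb packets, 10 Gb/s links.
print(max_cap_upper_bound([0.0015, 0.002], [12000, 12000, 12000], 10e9))
# -> 500.0 packets/s, limited by the 2 ms consolidated service (Eq. 1)
```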

Algorithm 1 A dynamic program for determining the shortest consolidated chain
1) Input: $s^1, s^2, \dots, s^c$ with a function wcet(i, j), which gives the WCET for executing the sequence of services $s^i, s^{i+1}, \dots, s^j$, and a period T.
2) Define f(i) as the length of the shortest chain for executing services $s^1, s^2, \dots, s^i$. Define f(0) = 0.
3) For i from 1 to c:
   $f(i) = \min_{j \in [0, i-1],\; wcet(j+1,\, i) \le T} \{ f(j) + 1 \}$
4) Output f(c) and the corresponding CSC.

We now establish the deadline condition. For each tenant, if the maximum packet rate of the CSC is cap, and the length of the CSC is l, then when using the period (1/cap) as the delay of each service, the end-to-end delay of a packet will be $d^{tr} + (1/cap + d^{tr}) \cdot l$, where $d^{tr}$ is the maximum delay for transmitting a packet between any two machines in the same pod. (To minimize transmission delay, each CSC instance is always assigned to the same pod; however, our formulation can easily be modified to relax this condition.) Note that adjacent services in the chain can be merged and executed in the same VM if their total CPU utilization is no more than one. Moreover, after merging the services, the number of consolidated services decreases, which in turn decreases the total estimated delay for executing the consolidated services, as well as the total transmission delay (since packets traversing services executing within the same VM do not need to go through the network). Hence, to ensure that the delay is at most the deadline d of the service chain, our goal is to find the largest cap such that there exists a way to merge adjacent services into a CSC (with length l) such that the end-to-end delay is at most d, i.e.,

$$d^{tr} + (1/cap + d^{tr}) \cdot l \le d. \quad (3)$$

2) Algorithms for Consolidation and Abstraction: We now present an algorithm for finding the maximum cap and the corresponding CSC based on the deadline condition (Eq. (3)), using an upper bound derived from the feasibility and traffic conditions (Eqs. (1) and (2)). We use Algorithm 1 as a sub-procedure, which takes any given cap and returns the shortest length of the consolidated service chain.

The shortest length of the CSC. Formally, the problem of finding an optimal way to consolidate a service chain for a fixed value of cap can be stated as follows: given a service chain $S = \langle s^1, s^2, \dots, s^c \rangle$ with period T (= 1/cap), partition S into disjoint segments $S_1, S_2, \dots, S_k$ with the smallest k such that, for any $S_x$, the WCET of the sequence of services² in $S_x$ is at most the period. In other words, if we denote by wcet(i, j) the WCET of the sequence of services $s^i, s^{i+1}, \dots, s^j$, then for any segment $S_x$ that starts from $s^i$ and ends with $s^j$, $wcet(i, j) \le T$.

To solve this problem, we propose a dynamic program (Algorithm 1), which finds the smallest partition for every sub-chain that starts with $s^1$. When computing for the sub-chain from $s^1$ to $s^i$, whose smallest partition has size f(i), the algorithm enumerates all possible ways to split the chain into two sub-chains, where the second sub-chain can form a single segment (i.e., its WCET is at most the period). Since the smallest partition of the first sub-chain is stored by the function f, it suffices to simply store the size of the smallest partition among all such bipartitions (see Algorithm 1 for details). It is straightforward to see that the algorithm is optimal and takes $O(c^2)$ time, where c is the length of the original service chain.

²The WCET of a sequence of services is not always equal to the sum of the WCETs of the individual services (for instance, consider services that share the same procedure for parsing the packet; see [15]).

Algorithm 2 Find the maximum cap for the deadline condition
1) Input: $s^1, s^2, \dots, s^c$ with a function wcet(i, j), which gives the WCET for executing the sequence of services $s^i, s^{i+1}, \dots, s^j$.
2) For l from 1 to c:
   (a) Binary-search the packet rate to find the maximum packet rate α such that the corresponding shortest service chain has length at most l. In each step of the binary search, use Algorithm 1 to find the shortest CSC. If its length is less than or equal to l, search for a larger α; otherwise, search for a smaller α.
   (b) Let α be the largest such packet rate; if $d^{tr} + (1/\alpha + d^{tr}) \cdot l \le d$, list α as a candidate, and drop it otherwise.
3) Among all the candidates, pick cap to be the largest one.

With the above dynamic program for computing the length of the shortest possible consolidated service chain, we can now design the algorithm that determines the maximum cap satisfying the deadline condition (Algorithm 2). The algorithm enumerates all possible lengths (from 1 to c) for the CSC, and for each length, it performs a binary search to find the corresponding maximum cap. One can easily show that the algorithm always outputs the optimal cap, and that it takes $O(c^3 \log \alpha_{max})$ time, where $\alpha_{max}$ is the largest possible value of cap, which can be obtained from the feasibility condition (Eq. (1)) and the traffic condition (Eq. (2)). After obtaining the maximum value of cap that satisfies all three consolidation conditions, the actual CSC can be found directly using Algorithm 1. For the CPU resource requirement of each consolidated service, the period is equal to 1/cap and the budget is equal to the WCET of the consolidated service. (A compact sketch of both algorithms is given at the end of this stage's description.)

3) Packing Requests into Aggregated Requests: After obtaining the CSC for each tenant, we pack the requests of the tenant into a set of aggregated requests that each can be executed using an instance of the CSC. In other words, the total packet rate of each aggregated request should be at most the maximum packet rate cap. Observe that this is a standard bin packing problem, where each request is an item with size equal to its packet rate, and each aggregated request is a bin with size cap. There are well-studied algorithms for solving the bin packing problem efficiently with good approximation ratios that we can use. For instance, the First Fit Decreasing (FFD) algorithm [8] is guaranteed to find a solution using at most (71/60)·OPT + 1 bins, where OPT is the minimum number of bins needed to pack all the items.

Having consolidated the services and requests, we explain how to assign them to the cloud in the next two stages. We will use the following slightly abused notation for an aggregated request: $request_i = (v_i^s, v_i^t, \langle s_i^1, s_i^2, \dots, s_i^{c_i} \rangle, \alpha_i, \#requests_i)$, where $v_i^s$ is the starting node, $v_i^t$ is the ending node, $\langle s_i^1, s_i^2, \dots, s_i^{c_i} \rangle$ is the consolidated service chain, $\alpha_i$ is the packet rate, and $\#requests_i$ is the total number of original requests that this aggregated request contains. We denote by $\beta_i^j$ the traffic rate after traversing the first j services, and by $wcet_i^j$ the WCET per packet of the j-th (consolidated) service. We also denote by $\beta_i^{in}$ and $\beta_i^{out}$ the incoming and outgoing traffic rates, respectively (i.e., $\beta_i^{in} = \beta_i^0$ and $\beta_i^{out} = \beta_i^{c_i}$).
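As promised above, here is a compact Python sketch of Algorithms 1 and 2. It treats wcet(i, j) as a given black-box function and uses a fixed-iteration binary search over packet rates; the coding details are ours, so read it as an illustration rather than the NFV-RT implementation:

```python
def shortest_csc(wcet, c, T):
    """Algorithm 1: length of the shortest consolidated chain.

    wcet(i, j): WCET of executing services s_i..s_j as one unit (1-indexed)
    c         : length of the original chain
    T         : period, i.e., 1/cap
    Returns (f(c), segments) or (None, None) if no partition fits.
    """
    INF = float("inf")
    f = [0] + [INF] * c          # f[i]: shortest chain for s_1..s_i
    cut = [0] * (c + 1)          # back-pointers to recover the partition
    for i in range(1, c + 1):
        for j in range(0, i):    # last segment is s_{j+1}..s_i
            if wcet(j + 1, i) <= T and f[j] + 1 < f[i]:
                f[i], cut[i] = f[j] + 1, j
    if f[c] == INF:
        return None, None
    segs, i = [], c              # recover the consolidated segments
    while i > 0:
        segs.append((cut[i] + 1, i))
        i = cut[i]
    return f[c], segs[::-1]

def max_cap(wcet, c, d, d_tr, alpha_max, iters=50):
    """Algorithm 2: largest cap meeting the deadline condition (Eq. 3).

    d        : relative deadline; d_tr: per-hop transmission delay
    alpha_max: upper bound on cap from Eqs. (1) and (2)
    """
    best = None
    for l in range(1, c + 1):
        lo, hi = 0.0, alpha_max
        for _ in range(iters):   # binary search on the packet rate
            mid = (lo + hi) / 2
            length, _ = shortest_csc(wcet, c, 1.0 / mid)
            if length is not None and length <= l:
                lo = mid         # shortest CSC fits in l: try a larger rate
            else:
                hi = mid
        alpha = lo
        if alpha > 0 and d_tr + (1.0 / alpha + d_tr) * l <= d:
            best = max(best or 0.0, alpha)   # candidate passes Eq. (3)
    return best                  # None if no candidate satisfies Eq. (3)
```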

B. Stage 2: Pod Assignment

This stage is essentially a pre-assignment, which aims to evenly split the resource demands of all requests of all tenants across the different pods. In the next stage, we determine the actual assignment for each pod individually. Assigning requests to pods is a multidimensional bin packing problem, which can be formulated as an integer linear program (ILP). Towards this, we compute for each pod j the maximum total bandwidth available from core switches to racks and from racks to core switches (using a standard max-flow algorithm), and denote them by $b_j^{in}$ and $b_j^{out}$, respectively. Denote the total number of CPU cores of pod j by $podcpu_j$. For each request i, denote the CPU utilization of its j-th service by $u_i^j$; the total CPU utilization of the request is then $u_i = \sum_{j=1}^{c_i} u_i^j$. For any request i and any pod j, define the binary variable $x_{i,j}$ to indicate whether request i is assigned to pod j, and let λ denote the resource utilization. We formulate the request assignment as the ILP in Algorithm 3.

Algorithm 3 ILP for assigning requests to pods
$$\min \lambda \quad \text{s.t.}$$
$$\forall i:\ \sum_j x_{i,j} = 1; \qquad \forall i, j:\ x_{i,j} \in \{0,1\} \quad (1)$$
$$\forall j:\ \sum_i x_{i,j}\,\beta_i^{in} \le \lambda\, b_j^{in}; \qquad \forall j:\ \sum_i x_{i,j}\,\beta_i^{out} \le \lambda\, b_j^{out} \quad (2)$$
$$\forall j:\ \sum_i x_{i,j}\, u_i \le \lambda\, podcpu_j \quad (3)$$

Since solving an ILP can be inefficient, we use an LP-with-rounding approach to achieve better scalability. For this, we let the variables $x_{i,j}$ be chosen within the range [0, 1] (instead of the integers {0, 1}), and solve the LP version. For any request i, we then choose a random number and assign the request to pod j with the probability $x_{i,j}$ obtained from the LP solution. One can show that this LP-with-rounding technique produces a solution whose objective value (λ) increases by a factor of at most (1 + δ), for some small δ > 0, compared to that of the ILP solution. (Due to space constraints, we omit the details of the analysis; a similar analysis can be found in [13].)

C. Stage 3: Machine Assignment

To assign a list of requests to the machines in a pod, we again formulate the assignment as an ILP, then solve the LP version and apply rounding to obtain an almost optimal solution. In the following, all operations are done for each pod individually.

1) Graph Replication for the ILP Formulation: We use a graph replication technique to formulate the assignment as an ILP. In the following, we denote by c* the length of the longest service chain of all requests. We create (c* + 1) replications of the pod, denoted by $G_0, G_1, \dots, G_{c^*}$, and the path assigned to a service chain $\langle s^1, s^2, \dots, s^c \rangle$ is broken into sub-paths for the sub-chains $\langle s^1, s^2 \rangle, \langle s^2, s^3 \rangle, \dots$, where the j-th sub-path is assigned using $G_j$. For each rack m, let $m_j$ be the replication of m in $G_j$. We create an edge from $m_j$ to $m_{j+1}$ (for each j), denoted by $e_{m_j}$. Hence, a path from the first replication to the c-th replication can be translated into an assignment for the request and vice versa. We formulate the ILP for finding such a path. To minimize communication overhead, we forbid a path from using any core switches apart from its starting and ending switches, $v^s$ and $v^t$, and we consider the assignment without $v^s$ and $v^t$. Once the sub-assignments between the services are determined, the complete assignment is obtained by simply connecting $v^s$ and $v^t$ to the beginning and the end, respectively.

Graph Abstraction. To reduce the complexity of solving the LP that finds a path in the replicated graph, we perform an abstraction: we use an aggregated EoR, called $v_{EoR}$, to abstract all physical EoRs. The edge between the aggregated EoR and a rack is defined to have bandwidth equal to the total bandwidth between this rack and all physical EoRs. Thus, after the abstraction, each pod becomes a tree of depth 2.

2) ILP Formulation: We now give the ILP formulation for finding the path in the (abstracted) graph. For any node v (which is either a rack or the aggregated EoR), let out(v) and in(v) denote the sets of edges that start from and end with v, respectively. For each request i and each edge e, we define the binary variable $x_{i,e,j}$ to indicate whether the replication of e in $G_j$ is used in the path for request i. With a slight abuse of notation, for each rack m, we use $x_{i,m,j}$ to indicate whether the edge $e_{m_j}$ is used in the path for request i. The problem can be formulated as the ILP shown in Algorithm 4.

Algorithm 4 ILP for assigning requests on each pod
$$\max \sum_i \#requests_i \cdot \sum_{e \in out(v_{EoR})} x_{i,e,0} \quad \text{s.t.}$$
$$\forall i:\ \sum_{e \in out(v_{EoR})} x_{i,e,0} \le 1, \qquad \sum_{e \in in(v_{EoR})} x_{i,e,0} = 0 \quad (1)$$
$$\forall i,\ \forall j \in [1, c-1]:\ \sum_{e \in in(v_{EoR})} x_{i,e,j} = \sum_{e \in out(v_{EoR})} x_{i,e,j} \quad (2)$$
$$\forall i, j, m:\ x_{i,m,j-1} + \sum_{e \in in(m)} x_{i,e,j} = x_{i,m,j} + \sum_{e \in out(m)} x_{i,e,j} \quad (3)$$
$$\forall e:\ \sum_{i,j} x_{i,e,j}\,\beta_i^j \le b_e; \qquad \forall m:\ \sum_{i,j} x_{i,e_m,j}\, u_i^j \le cpu_m \quad (4)$$
$$\forall i, j, e:\ x_{i,e,j} \in \{0,1\} \quad (5)$$

The first three constraints ensure that the assignment of request i is a valid path starting from $v_{EoR}$ in $G_0$ and ending with $v_{EoR}$ in $G_c$. Constraint (4) asserts that there is enough bandwidth on every link and enough CPU capacity on every rack. As usual, we relax $x_{i,e,j}$ to be in the range [0, 1] and solve the LP version (in this case, the solution is a network flow).

3) Rounding: To transform the fractional solution into an integer assignment, we use random rounding to obtain a list of integer solutions and pick one of them. For any request i, we assign it the path that starts from the edge $e \in out(v_{EoR})$ with probability $x_{i,e,0}$; following the edges, at each node we choose an outgoing edge with probability proportional to the $x_{i,e,j}$ values; and we repeat until reaching the ending $v_{EoR}$.
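The rounding steps themselves are short. Below is a minimal sketch of the pod choice from the Stage-2 LP solution and of one step of the path walk just described; all names are ours, and we elide the cross-replication edges $e_{m_j}$ that the full walk must also traverse:

```python
import random

def round_pod(x_i):
    """Pick a pod for request i with probability x_{i,j} from the LP relaxation.

    x_i: list of fractional values x_{i,0}, x_{i,1}, ..., summing to 1.
    """
    pods = list(range(len(x_i)))
    return random.choices(pods, weights=x_i)[0]

def weighted_step(v, j, x, out_edges):
    """One step of the path rounding: at node v in replication G_j,
    choose an outgoing edge with probability proportional to x_{i,e,j}.

    x[(e, j)]   : fractional LP value for edge e in replication j
    out_edges(v): edges leaving node v in the abstracted pod graph
    """
    edges = out_edges(v)
    weights = [x.get((e, j), 0.0) for e in edges]
    if sum(weights) == 0.0:
        return None               # no fractional flow left: abandon this trial
    return random.choices(edges, weights=weights)[0]
```

In practice, the whole walk is restarted from $v_{EoR}$ in $G_0$ a bounded number of times (the evaluation later reports that about 20 rounding trials suffice), and a trial is kept only if it nearly satisfies the bandwidth and packing checks described next.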

Note that the paths obtained from the above rounding are paths in the abstracted graph, and VMs are assigned only to racks instead of to actual machines. To obtain an actual assignment, we use the following two extensions:

Aggregated EoR to physical EoR. When the path goes through $v_{EoR}$, we randomly pick a physical EoR to transform the assignment from the abstracted graph to the real cloud. Since the traffic rate of each path is no larger than one tenth of the edge bandwidth (the traffic condition), it can be shown using the Chernoff inequality that, with high probability, almost all edges (at least a 1 − δ fraction, for some small δ > 0) will be assigned enough bandwidth in the original cloud.

Packing VMs. To determine which machine of a rack a VM should be assigned to, we pack, for each rack, all VMs that are assigned to it into the rack's machines, i.e., using the VM utilizations as item sizes and the numbers of available cores of the machines as bins, and perform FFD to determine the fraction of VMs that cannot be packed.

Among all the paths generated by the random rounding, we pick one where (i) the bandwidth on the edges is almost satisfied, (ii) for each rack, the VMs assigned to it can be almost packed, and (iii) the number of accepted requests is almost as large as in the LP solution, where "almost" means within a (1 + δ) factor, for some small δ > 0. (To guarantee that the traffic does not exceed the bandwidth, one only needs to divide the original bandwidth by (1 + δ) and use the result as the input to the solver.) Using the Chernoff bound, it can be shown that such a path exists and can be found if the rounding is performed sufficiently many times. In our evaluation, we observe that performing the rounding 20 times is sufficient.

V. ONLINE ASSIGNMENT

As shown in Figure 3(b), the resource manager maintains as its internal state the existing set of CSCs obtained in the initial phase, the existing assignment (i.e., the existing set of CSC instances and their current assignments), and the current cloud status (available bandwidth and CPU utilizations). After the initial phase, NFV-RT dynamically performs the online assignment as new requests arrive at run time. As in the initial phase, NFV-RT packs the requests into CSC instances and then assigns the instances to the cloud. It works in two stages, as follows.

A. Stage 1: Assign to Existing CSC Instances

Given each new request R, NFV-RT first checks all existing CSC instances of the same tenant to find one that has enough slack between its current packet rate and its maximum packet rate (cap) to fit the request in. If such a CSC instance exists, R is accepted and directly assigned to that instance. If no such CSC instance exists, NFV-RT reshuffles the existing CSC instances using a bin packing formulation (which can be solved using existing bin packing algorithms, such as FFD; see the sketch after this subsection), as follows: the bins are the CSC instances, and the items are the requests of the tenant. The capacity of a bin is the maximum packet rate of the CSC, and the size of an item is the packet rate of the corresponding request. If all requests of the tenant, including the new request R, can be packed using only the existing CSC instances, we accept R and migrate the requests according to the output of the bin packing solution (if needed). Note that this migration only involves re-routing the packet flows of the existing requests of the same tenant, not migrating the services.
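Both the initial packing of requests into aggregated requests (Section IV-A3) and the reshuffling above reduce to bin packing, which NFV-RT solves with First Fit Decreasing. A generic FFD sketch (ours, not the NFV-RT code):

```python
def ffd(items, capacity):
    """First Fit Decreasing: pack item sizes into bins of the given capacity.

    For CSC packing: items are request packet rates, capacity is cap.
    Returns a list of bins, each a list of item sizes.
    """
    bins, loads = [], []
    for size in sorted(items, reverse=True):
        for k, load in enumerate(loads):
            if load + size <= capacity:   # first existing bin that still fits
                bins[k].append(size)
                loads[k] += size
                break
        else:                             # no existing bin fits: open a new one
            bins.append([size])
            loads.append(size)
    return bins

# Request rates in the spirit of Figure 4, packed into bins of cap = 500.
print(ffd([100, 100, 200, 200, 300], 500))
# -> [[300, 200], [200, 100, 100]], i.e., two CSC instances
```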
B. Stage 2: Create a New CSC Instance

If it is infeasible to assign the new request using only the existing CSC instances, NFV-RT creates a new CSC instance for the new request and assigns it to the cloud. To assign the new instance, NFV-RT attempts to find an assignment that balances resource usage across the different pods, and across the different racks in each pod, using a two-level balancing strategy. If no such assignment exists, NFV-RT performs a Pod LP to re-balance the resource usage of the existing assignment.

Two-level balancing. We define the in-bandwidth and out-bandwidth of a pod as the bandwidth from the core switches to its racks and from its racks to the core switches, respectively. We select a pod using the following method, which we call first-level balancing: first, iterate through all pods, and for each pod, calculate the fractions of the in-bandwidth, out-bandwidth, and cores that would remain, respectively, if the new CSC instance were assigned to the pod; then, pick the pod with the smallest maximum value of the three fractions. (A sketch of this selection rule is given at the end of this section.)

After choosing the pod, we select a rack in the pod to which the entire new CSC instance is assigned. This is done using second-level balancing, as follows. We iterate through each rack and perform two steps: (i) calculate the fractions of CPU cores in the rack and of the bandwidth of the links adjacent to the rack that would remain, respectively, if the new CSC were assigned to the rack, and (ii) find a path with sufficient available bandwidth for the CSC instance from the starting core switch, passing through the rack, to the ending core switch. More specifically, for each rack rk, step (ii) is done by finding two EoR switches, $s_1$ and $s_2$, such that the links on the path $(v^s, s_1, rk, s_2, v^t)$ have sufficient bandwidth for the new CSC instance. Among the racks for which a path can be found, we pick the one with the smallest maximum value of the CPU and bandwidth fractions.

If no assignment can be found, we perform VM or traffic migration to better organize the resources in the chosen pod, i.e., we perform a re-assignment of the existing CSC instances. This is done through the following Pod LP.

Pod LP. We use the ILP in Algorithm 4 with the following modifications. In Constraint (1), replace $\sum_{e \in out(v_{EoR})} x_{i,e,0} \le 1$ with $= 1$, to ensure that all requests will get an assignment. Define a new variable γ as a ratio reflecting the most congested link (or most loaded rack). Then, in Constraint (4), replace $\sum_{i,j} x_{i,e,j}\,\beta_i^j \le b_e$ with $\le \gamma\, b_e$, and $\sum_{i,j} x_{i,e_m,j}\, u_i^j \le cpu_m$ with $\le \gamma\, cpu_m$. Finally, change the objective to minimizing γ. We solve the resulting ILP using the LP-with-rounding technique, as in Section IV-C3. By the design of the new ILP, the output assignment minimizes rack load and link congestion. After re-balancing the request assignment in the pod, we use the two-level balancing heuristic again to find an assignment for the new request. If no assignment can be found, the request is rejected.
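As referenced above, the first-level pod selection is a small min-max computation. A sketch under our own data layout (the dictionary keys are assumptions, not NFV-RT's):

```python
def pick_pod(pods, demand):
    """First-level balancing: for each pod, compute the fractions of
    in-bandwidth, out-bandwidth, and cores that would remain if the new
    CSC instance were placed there, then pick the pod whose largest
    remaining fraction is smallest (the rule stated in Section V-B).

    pods  : dicts with remaining 'bw_in', 'bw_out', 'cpu'
            and capacities 'BW_IN', 'BW_OUT', 'CPU'
    demand: dict with the CSC instance's 'bw_in', 'bw_out', 'cpu' needs
    """
    best, best_score = None, float("inf")
    for p in pods:
        rem = (
            (p["bw_in"] - demand["bw_in"]) / p["BW_IN"],
            (p["bw_out"] - demand["bw_out"]) / p["BW_OUT"],
            (p["cpu"] - demand["cpu"]) / p["CPU"],
        )
        if min(rem) < 0:
            continue              # this pod cannot host the instance at all
        if max(rem) < best_score:
            best, best_score = p, max(rem)
    return best                   # None if every pod is too full
```

The second-level rack selection applies the same min-max rule to each rack's CPU and adjacent-link bandwidth fractions, restricted to racks through which a feasible $(v^s, s_1, rk, s_2, v^t)$ path exists.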

VI. EVALUATION

We evaluated the performance and scalability of NFV-RT using both simulation and an actual testbed. We implemented a prototype of NFV-RT with all of its functionalities to perform resource provisioning for NFV requests. We used Python to implement both the controller and the resource manager of NFV-RT. For the LP solver, we used Gurobi [6] with 32 parallel threads. Within each machine, RT-Xen [22] is used as the real-time VMM, running on Ubuntu Server 12.04, 64-bit. Although RT-Xen only supports VM budgets specified at millisecond granularity (the period is in microseconds), this is sufficient for the purpose of illustration, and we simply round the budget up to the nearest millisecond. Software network bridges in Linux are set up for the communication between VMs. After the resource manager has determined an assignment, the controller uses RPC to create VMs on the machines and to configure both the services to execute and the next hop to forward packets to.

We compared our approach against a greedy algorithm as our baseline. The greedy strategy constantly monitors the status of the cloud to detect resource bottlenecks; if a VM with high CPU utilization (over 85%) is found, it splits the load and spawns a new VM, then attempts to place the new VM in the same rack as the overloaded VM. If the rack is full, it tries the racks that are two hops away (i.e., racks in the same pod), before considering any other racks. In addition, for the simulation, whenever a request arrives at run time, the baseline checks whether assigning this request to the cloud using the planned assignment will cause a resource bottleneck; if so, it immediately attempts to spawn a new VM to avoid the bottleneck (instead of reactively responding after a bottleneck is observed). If no location can be found for spawning the new VM, the request is rejected.

A. Simulation

Setup. We used a 16-core machine to simulate a cloud with the same topology as shown in Figure 1. The setup contains 1600 machines, each having 4 CPU cores, and every 40 machines form a rack. The cloud is divided into 10 pods, with 4 racks and 2 EoR switches each. There are 4 core switches that are shared by all pods. Every link in the cloud has a bandwidth of 10 Gb/s. There are ten services that the cloud can execute, each with a WCET randomly picked from the range [5, 50] µs. There are 50 tenants, each with a service chain of length at most 10 and a deadline between 5 ms and 10 ms. The starting and ending core switches were randomly picked. A request of a tenant was generated by randomly choosing a packet rate from 1000 to 4000 packets/s; hence, the largest traffic rate is about 50 Mbps.

Once the requests are assigned to pods, the actual assignments for the pods are independent of each other; hence, the machine assignment for the pods is fully parallelizable. Therefore, the execution time of NFV-RT consists of (1) the time to pack the requests into CSC instances, (2) the time to assign CSC instances to the different pods, and (3) the maximum time for the machine assignment among all pods.

Fig. 5. Simulation results for real-time performance: (a) acceptance rate over time for a single trial; (b) aggregated performance, i.e., the ratio of NFV-RT's acceptance rate to the baseline's over time.

Efficiency of LP with rounding. We evaluated NFV-RT with 30K requests and 10 randomly generated test cases, where we recorded both the execution time for finding the assignment and the number of accepted requests (the acceptance rate). The results show that our LP-rounding-based resource manager is highly efficient: the average execution time of NFV-RT is less than 5 s, which is orders of magnitude better than traditional ILP-based solutions, which can take 1800 s [14] for the offline stage.
In addition, NFV-RT is also effective in utilizing the resources: it accepts about 75% of the requests, and almost all links adjacent to the core switches are fully utilized, i.e., no more requests can be assigned to the cloud.

Real-time performance. We next evaluated the online real-time performance of NFV-RT and the baseline strategy (where requests arrive dynamically at run time). For this, we chose a 10-minute interval and generated 60K requests (i.e., about 100 new requests every second). The total traffic rate of each tenant formed a bimodal distribution over time, so as to capture the bursty nature of data center networks [7], [2]. Specifically, for each tenant, we selected two time slots as the traffic peaks; for each request, we randomly chose one peak, generated a number x from the normal distribution N(0, 150), and finally let the starting time and ending time be the peak time ± x. We generated 10 test cases and continually recorded the current acceptance rate (the number of accepted requests divided by the number of requests so far).

Acceptance rate. Figure 5(a) shows the results for a single trial, where the x-axis represents time, and the y-axis gives the current acceptance rate. Observe that NFV-RT always outperforms the baseline, and the performance improvement increases with time. This is expected, since long-running requests accumulate in the cloud over time, and it becomes more difficult for a greedy strategy to fully utilize the cloud. Figure 5(b) shows the aggregated results of 10 trials. The y-axis gives the acceptance rate of NFV-RT divided by that of the baseline, averaged over the 10 trials. As shown in the figure, NFV-RT accepts about 3 times the requests compared to the baseline. The smaller improvement at the beginning is expected, because the cloud had plenty of resources available and the baseline was consuming them aggressively. When the resources in the cloud became almost saturated (at around 100 s), the baseline's performance began to drop substantially; in contrast, NFV-RT started to demonstrate its ability to schedule requests efficiently under limited resource availability.
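The bursty workload described above is easy to reproduce. A small sketch under our reading of the setup: we take the magnitude of x, and we assume the second parameter of N(0, 150) is the standard deviation, since the text does not say:

```python
import random

def request_interval(peaks, sigma=150.0):
    """Place one request around a randomly chosen traffic peak.

    peaks: the tenant's two peak times, in seconds.
    Returns (start, end) = (peak - |x|, peak + |x|), with x ~ N(0, sigma).
    """
    peak = random.choice(peaks)
    x = abs(random.gauss(0.0, sigma))
    return peak - x, peak + x

# Two peaks inside a 10-minute (600 s) window.
print(request_interval([150.0, 450.0]))
```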

Fig. 6. Simulation results for scalability: (a) execution time vs. the number of requests, for 1.6K, 8K, and 16K machines; (b) execution time vs. the number of machines, for several request counts (including 20K and 30K requests).

The aggregated results also show that the online assignment of NFV-RT incurs only a small overhead (less than 0.5 ms if the Pod LP for each pod is invoked at most once every second), which allows us to process over a thousand requests per second.

Schedulable rate. The design of NFV-RT guarantees that the packets of every accepted request meet their deadlines. In contrast, because it does not consider the timing constraints, the greedy baseline has no such guarantee: although our simulation ensures that CPUs and links are never overloaded, accepted requests might still miss their deadlines. Specifically, let the schedulable rate be the percentage of requests that meet their deadlines. Then, in Figure 5(a), the acceptance rate of NFV-RT is also its schedulable rate, whereas the acceptance rate of the baseline is an upper bound on its best possible schedulable rate. Consequently, the improvement of NFV-RT over the baseline in terms of the schedulable rate is at least as large as that shown in Figure 5(b).

With or without Pod LP. We evaluated NFV-RT with and without the Pod LP. We observed that, in our default setting, the Pod LP does not offer significant improvement. However, there are scenarios where the benefit of the Pod LP can be arbitrarily large, such as the following. Consider a pod with two racks, rk0 and rk1. At time 1, a large number of CPU-intensive requests arrive, which use up all CPUs in both racks. At time 2, all requests assigned to rk0 finish; thus, rk0 becomes fully available, whereas rk1 has no CPU left. At time 3, a large number of bandwidth-intensive requests arrive, each of which needs very little CPU resource. Without the Pod LP (i.e., with no migration), NFV-RT can only use rk0 to host these requests. However, the new assignment given by the Pod LP allows requests from rk1 to migrate to rk0, making the bandwidth on the links adjacent to rk1 available for the new requests (it was previously unusable, as rk1 had no CPU left). Theoretically, the gain can be made arbitrarily large by setting the CPU requirement of the new requests arbitrarily small. We investigated one case in this setting and observed that the Pod LP enables over 25% more requests to be accepted.

Scalability. We evaluated the scalability of NFV-RT by considering larger cloud topologies. We considered 1.6K, 8K, and 16K machines, respectively, by increasing the number of pods in the cloud while keeping the size and the topology of each pod unchanged. Note that the execution time in this evaluation is for the initial phase of NFV-RT; once in the online phase, the overhead of NFV-RT for scheduling requests is negligible (less than 0.5 ms). We varied the number of requests from 5K to 35K. Figures 6(a) and 6(b) show the execution time when varying the number of requests and the number of machines, respectively. The execution time grows linearly in the number of requests, but slightly less than linearly in the number of machines. This is because the complexity of the LP depends on both the number of machines and the number of edges, and scaling up the topology increases both. In all cases, the initial phase completes within 35 seconds, even for a cloud with 16K machines.

B. Actual Testbed

We evaluated NFV-RT on a local cluster consisting of 40 cores in total across 4 physical machines hosting VMs. An additional physical machine serves as the traffic generator. The traffic of a request always starts from the generator, travels through a list of VMs, and is sent back to the generator.
The generator records the sending time and receiving time of every packet to compute the packet's delay.

VMs. Two of the VM hosts have 4 Intel(R) Xeon(R) 2.40 GHz CPU cores with 12 GB RAM, and the other two have 16 Intel(R) Xeon(R) 2.10 GHz cores with 24 GB RAM. The 16-core machines have hyper-threading enabled (32 hardware threads), and hence we obtain 32-way parallelism on them. All VMs run Ubuntu Server 12.04, 64-bit, with 256 MB RAM. One core is reserved for the VMM in each 4-core machine, and two cores are reserved for the VMM in each 16-core machine, each VMM with 2 GB RAM. We run 4 VMs on each of the quad-core machines and 32 VMs on each of the 16-core machines, for a total of 72 VMs, of which 66 are used for running the service chains.

Network. On our testbed, we generated delays by running the machines across different racks. All three racks (more specifically, the two 16-core machines and the switch that connects the two 4-core machines) and the traffic generator are connected by another switch. The local cloud is viewed as a single pod, with each link having 1 Gb/s bandwidth.

Services. We implemented two types of services in C: a firewall and network address translation (NAT). The firewall and NAT both attempt to match the source IP of a packet against a list of pre-defined rules and, respectively, mark a specific bit of the packet or change the source IP to a different IP when a match is found. Each service has a parameter specifying the number of rules it needs to match. For instance, FW50000 and NAT100000 stand for a firewall with 50K rules and a NAT with 100K rules, respectively. We used four different services: FW50000 (WCET 2.5 ms), FW100000 (WCET 5 ms), NAT50000 (WCET 2.5 ms), and NAT100000 (WCET 5 ms). The WCETs were estimated by measurement [20].

Requests. We generated 5 tenants, three of whom have service chains of length 1, while the remaining two have service chains of length 4. The service chains are chosen randomly, with no repeated service for each tenant. The deadline of each tenant is the sum of the WCETs of its services plus a random number chosen between 10 ms and 20 ms. Each tenant has 10 requests, and the packet rate of each request is randomly chosen from the range [50, 100] packets/s (i.e., the maximum data rate is about 1.2 Mb/s). All requests arrive at the beginning and last for 2 minutes. In other words, we created a network burst at time zero and examined the performance.

Figure 7 shows the CDF of the packet delay. The x-axis represents the delay/deadline ratio, and the y-axis represents the percentage of packets whose delay falls below that ratio. For each algorithm, the point with x-value equal to 1 indicates the percentage of packets that meet their deadlines (the actual numbers are listed in the legend). The results show that NFV-RT provides timing guarantees for more than 94% of the packets, which is approximately three times what the baseline can guarantee (31.39%).

Fig. 7. The delay/deadline CDF.

We further observed that the packets that missed their deadlines under NFV-RT were either affected by network outliers or sent near the time that their socket was just created.³ We also observed that the CDF under the greedy baseline stabilizes at a percentage value of less than 50%, i.e., over 50% of the packets have arbitrarily long delays. We investigated the data and confirmed that these packets were lost and never received by the generator. In contrast, the maximum packet delay under NFV-RT is always bounded, and only a very small fraction (about 1%) of the packets have delays larger than 4 times their deadlines. Finally, NFV-RT takes only a few milliseconds to find an assignment in our experiments.

³Incorporating these overheads will be considered in our future work.

VII. RELATED WORK

Existing work on NFV resource management does not consider the dynamic deployment of services in a general virtualized cloud setting with real-time constraints. For instance, PLayer [9] and SIMPLE [14] are effective traffic steering tools for routing traffic flows between static middleboxes (MBoxes), but they do not consider virtualization or dynamic MBox placement. Similarly, CoMb [15] considers dynamic MBox placement, but in a special setting where the routing path is fixed, and its goal is simply to determine which machines on the path should be chosen to execute the MBoxes. Stratos [3] monitors the cloud to detect resource bottlenecks and responds accordingly by duplicating or migrating VMs. However, the reactive nature of Stratos leads to poor delay guarantees (e.g., by the time a bottleneck is detected, some deadlines may already have been missed). In contrast, NFV-RT proactively assigns resources to services based on a formal analysis, thus enabling better timing guarantees and performance predictability. NFV-RT can enable millisecond-level delay guarantees, which is not possible under such a reactive approach.

There exists an extensive literature on cloud resource management (e.g., [18], [1], [19]), but it targets non-real-time applications. Techniques for virtual machine placement and migration have also been developed, e.g., [21], [12], [17]. However, these techniques do not simultaneously solve the VM placement and traffic steering problems, which in general can lead to sub-optimal solutions. We are not aware of any existing work that can provide formal timing guarantees for real-time services in cloud environments.

VIII. CONCLUSION

We have presented the design, implementation, and evaluation of NFV-RT, a real-time resource provisioning system for NFV. NFV-RT integrates timing analysis with several novel techniques, such as service chain consolidation, timing abstraction, and linear programming with rounding, to enable efficient resource provisioning while ensuring timing guarantees. Our evaluation using both simulation and emulation shows that NFV-RT is not only effective in meeting deadlines, offering a significant real-time performance improvement over a baseline approach, but also highly efficient and scalable.
ACKNOWLEDGEMENT

Supported in part by ONR N , ONR N , and NSF grants CNS , CNS , ECCS , CNS , CNS , CNS , and the Intel-NSF Partnership for Cyber-Physical Systems Security and Privacy.

REFERENCES

[1] H. Ballani, K. Jang, T. Karagiannis, C. Kim, D. Gunawardena, and G. O'Shea. Chatty tenants and the cloud network sharing problem. In NSDI.
[2] T. Benson, A. Anand, A. Akella, and M. Zhang. Understanding data center traffic characteristics. Computer Communication Review.
[3] A. Gember, A. Krishnamurthy, S. S. John, R. Grandl, X. Gao, A. Anand, T. Benson, A. Akella, and V. Sekar. Stratos: A network-aware orchestration layer for middleboxes in the cloud. CoRR.
[4] A. Gember, P. Prabhu, Z. Ghadiyali, and A. Akella. Toward software-defined middlebox networking. In HotNets.
[5] R. Guerzoni et al. Network functions virtualisation: an introduction, benefits, enablers, challenges and call for action. Introductory white paper, SDN and OpenFlow World Congress.
[6] Gurobi Optimization, Inc. Gurobi optimizer reference manual.
[7] J. W. Jiang, T. Lan, S. Ha, M. Chen, and M. Chiang. Joint VM placement and routing for data center traffic engineering. In INFOCOM.
[8] D. S. Johnson and M. R. Garey. A 71/60 theorem for bin packing. Journal of Complexity, 1(1):65-106.
[9] D. A. Joseph, A. Tavakoli, and I. Stoica. A policy-aware switching layer for data centers. In SIGCOMM.
[10] S. Kandula, S. Sengupta, A. G. Greenberg, P. Patel, and R. Chaiken. The nature of data center traffic: measurements & analysis. In IMC.
[11] J. W. S. Liu. Real-Time Systems. Prentice Hall.
[12] X. Meng, V. Pappas, and L. Zhang. Improving the scalability of data center networks with traffic-aware virtual machine placement. In INFOCOM.
[13] R. Motwani and P. Raghavan. Randomized Algorithms. Chapman & Hall/CRC.
[14] Z. A. Qazi, C. Tu, L. Chiang, R. Miao, V. Sekar, and M. Yu. SIMPLE-fying middlebox policy enforcement using SDN. In SIGCOMM.
[15] V. Sekar, N. Egi, S. Ratnasamy, M. K. Reiter, and G. Shi. Design and implementation of a consolidated middlebox architecture. In NSDI.
[16] J. Sherry, S. Hasan, C. Scott, A. Krishnamurthy, S. Ratnasamy, and V. Sekar. Making middleboxes someone else's problem: network processing as a cloud service. In SIGCOMM.
[17] V. Shrivastava, P. Zerfos, K. Lee, H. Jamjoom, Y. Liu, and S. Banerjee. Application-aware virtual machine migration in data centers. In INFOCOM.
[18] D. Shue, M. J. Freedman, and A. Shaikh. Performance isolation and fairness for multi-tenant cloud storage. In OSDI.
[19] D. B. Terry, V. Prabhakaran, R. Kotla, M. Balakrishnan, M. K. Aguilera, and H. Abu-Libdeh. Consistency-based service level agreements for cloud storage. In SOSP.
[20] R. Wilhelm, J. Engblom, A. Ermedahl, N. Holsti, S. Thesing, D. B. Whalley, G. Bernat, C. Ferdinand, R. Heckmann, T. Mitra, F. Mueller, I. Puaut, P. P. Puschner, J. Staschulat, and P. Stenström. The worst-case execution-time problem: overview of methods and survey of tools. ACM Trans. Embedded Comput. Syst., 7(3).
[21] T. Wood, P. J. Shenoy, A. Venkataramani, and M. S. Yousif. Black-box and gray-box strategies for virtual machine migration. In NSDI.
[22] S. Xi, M. Xu, C. Lu, L. T. X. Phan, C. D. Gill, O. Sokolsky, and I. Lee. Real-time multi-core virtual machine scheduling in Xen. In EMSOFT.


More information

DEFINING %COMPLETE IN MICROSOFT PROJECT

DEFINING %COMPLETE IN MICROSOFT PROJECT CelersSystems DEFINING %COMPLETE IN MICROSOFT PROJECT PREPARED BY James E Aksel, PMP, PMI-SP, MVP For Addtonal Informaton about Earned Value Management Systems and reportng, please contact: CelersSystems,

More information

The OC Curve of Attribute Acceptance Plans

The OC Curve of Attribute Acceptance Plans The OC Curve of Attrbute Acceptance Plans The Operatng Characterstc (OC) curve descrbes the probablty of acceptng a lot as a functon of the lot s qualty. Fgure 1 shows a typcal OC Curve. 10 8 6 4 1 3 4

More information

Open Access A Load Balancing Strategy with Bandwidth Constraint in Cloud Computing. Jing Deng 1,*, Ping Guo 2, Qi Li 3, Haizhu Chen 1

Open Access A Load Balancing Strategy with Bandwidth Constraint in Cloud Computing. Jing Deng 1,*, Ping Guo 2, Qi Li 3, Haizhu Chen 1 Send Orders for Reprnts to reprnts@benthamscence.ae The Open Cybernetcs & Systemcs Journal, 2014, 8, 115-121 115 Open Access A Load Balancng Strategy wth Bandwdth Constrant n Cloud Computng Jng Deng 1,*,

More information

J. Parallel Distrib. Comput. Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers

J. Parallel Distrib. Comput. Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers J. Parallel Dstrb. Comput. 71 (2011) 732 749 Contents lsts avalable at ScenceDrect J. Parallel Dstrb. Comput. ournal homepage: www.elsever.com/locate/pdc Envronment-conscous schedulng of HPC applcatons

More information

Rate Monotonic (RM) Disadvantages of cyclic. TDDB47 Real Time Systems. Lecture 2: RM & EDF. Priority-based scheduling. States of a process

Rate Monotonic (RM) Disadvantages of cyclic. TDDB47 Real Time Systems. Lecture 2: RM & EDF. Priority-based scheduling. States of a process Dsadvantages of cyclc TDDB47 Real Tme Systems Manual scheduler constructon Cannot deal wth any runtme changes What happens f we add a task to the set? Real-Tme Systems Laboratory Department of Computer

More information

How To Solve An Onlne Control Polcy On A Vrtualzed Data Center

How To Solve An Onlne Control Polcy On A Vrtualzed Data Center Dynamc Resource Allocaton and Power Management n Vrtualzed Data Centers Rahul Urgaonkar, Ulas C. Kozat, Ken Igarash, Mchael J. Neely urgaonka@usc.edu, {kozat, garash}@docomolabs-usa.com, mjneely@usc.edu

More information

An Interest-Oriented Network Evolution Mechanism for Online Communities

An Interest-Oriented Network Evolution Mechanism for Online Communities An Interest-Orented Network Evoluton Mechansm for Onlne Communtes Cahong Sun and Xaopng Yang School of Informaton, Renmn Unversty of Chna, Bejng 100872, P.R. Chna {chsun,yang}@ruc.edu.cn Abstract. Onlne

More information

Heuristic Static Load-Balancing Algorithm Applied to CESM

Heuristic Static Load-Balancing Algorithm Applied to CESM Heurstc Statc Load-Balancng Algorthm Appled to CESM 1 Yur Alexeev, 1 Sher Mckelson, 1 Sven Leyffer, 1 Robert Jacob, 2 Anthony Crag 1 Argonne Natonal Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439,

More information

The Development of Web Log Mining Based on Improve-K-Means Clustering Analysis

The Development of Web Log Mining Based on Improve-K-Means Clustering Analysis The Development of Web Log Mnng Based on Improve-K-Means Clusterng Analyss TngZhong Wang * College of Informaton Technology, Luoyang Normal Unversty, Luoyang, 471022, Chna wangtngzhong2@sna.cn Abstract.

More information

1. Fundamentals of probability theory 2. Emergence of communication traffic 3. Stochastic & Markovian Processes (SP & MP)

1. Fundamentals of probability theory 2. Emergence of communication traffic 3. Stochastic & Markovian Processes (SP & MP) 6.3 / -- Communcaton Networks II (Görg) SS20 -- www.comnets.un-bremen.de Communcaton Networks II Contents. Fundamentals of probablty theory 2. Emergence of communcaton traffc 3. Stochastc & Markovan Processes

More information

A Load-Balancing Algorithm for Cluster-based Multi-core Web Servers

A Load-Balancing Algorithm for Cluster-based Multi-core Web Servers Journal of Computatonal Informaton Systems 7: 13 (2011) 4740-4747 Avalable at http://www.jofcs.com A Load-Balancng Algorthm for Cluster-based Mult-core Web Servers Guohua YOU, Yng ZHAO College of Informaton

More information

Self-Adaptive SLA-Driven Capacity Management for Internet Services

Self-Adaptive SLA-Driven Capacity Management for Internet Services Self-Adaptve SLA-Drven Capacty Management for Internet Servces Bruno Abrahao, Vrglo Almeda and Jussara Almeda Computer Scence Department Federal Unversty of Mnas Geras, Brazl Alex Zhang, Drk Beyer and

More information

An MILP model for planning of batch plants operating in a campaign-mode

An MILP model for planning of batch plants operating in a campaign-mode An MILP model for plannng of batch plants operatng n a campagn-mode Yanna Fumero Insttuto de Desarrollo y Dseño CONICET UTN yfumero@santafe-concet.gov.ar Gabrela Corsano Insttuto de Desarrollo y Dseño

More information

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm Avalable onlne www.ocpr.com Journal of Chemcal and Pharmaceutcal Research, 2014, 6(7):1884-1889 Research Artcle ISSN : 0975-7384 CODEN(USA) : JCPRC5 A hybrd global optmzaton algorthm based on parallel

More information

Virtual Network Embedding with Coordinated Node and Link Mapping

Virtual Network Embedding with Coordinated Node and Link Mapping Vrtual Network Embeddng wth Coordnated Node and Lnk Mappng N. M. Mosharaf Kabr Chowdhury Cherton School of Computer Scence Unversty of Waterloo Waterloo, Canada Emal: nmmkchow@uwaterloo.ca Muntasr Rahan

More information

Calculation of Sampling Weights

Calculation of Sampling Weights Perre Foy Statstcs Canada 4 Calculaton of Samplng Weghts 4.1 OVERVIEW The basc sample desgn used n TIMSS Populatons 1 and 2 was a two-stage stratfed cluster desgn. 1 The frst stage conssted of a sample

More information

Cloud Auto-Scaling with Deadline and Budget Constraints

Cloud Auto-Scaling with Deadline and Budget Constraints Prelmnary verson. Fnal verson appears In Proceedngs of 11th ACM/IEEE Internatonal Conference on Grd Computng (Grd 21). Oct 25-28, 21. Brussels, Belgum. Cloud Auto-Scalng wth Deadlne and Budget Constrants

More information

A generalized hierarchical fair service curve algorithm for high network utilization and link-sharing

A generalized hierarchical fair service curve algorithm for high network utilization and link-sharing Computer Networks 43 (2003) 669 694 www.elsever.com/locate/comnet A generalzed herarchcal far servce curve algorthm for hgh network utlzaton and lnk-sharng Khyun Pyun *, Junehwa Song, Heung-Kyu Lee Department

More information

Schedulability Bound of Weighted Round Robin Schedulers for Hard Real-Time Systems

Schedulability Bound of Weighted Round Robin Schedulers for Hard Real-Time Systems Schedulablty Bound of Weghted Round Robn Schedulers for Hard Real-Tme Systems Janja Wu, Jyh-Charn Lu, and We Zhao Department of Computer Scence, Texas A&M Unversty {janjaw, lu, zhao}@cs.tamu.edu Abstract

More information

Energy Efficient Routing in Ad Hoc Disaster Recovery Networks

Energy Efficient Routing in Ad Hoc Disaster Recovery Networks Energy Effcent Routng n Ad Hoc Dsaster Recovery Networks Gl Zussman and Adran Segall Department of Electrcal Engneerng Technon Israel Insttute of Technology Hafa 32000, Israel {glz@tx, segall@ee}.technon.ac.l

More information

Cost Minimization using Renewable Cooling and Thermal Energy Storage in CDNs

Cost Minimization using Renewable Cooling and Thermal Energy Storage in CDNs Cost Mnmzaton usng Renewable Coolng and Thermal Energy Storage n CDNs Stephen Lee College of Informaton and Computer Scences UMass, Amherst stephenlee@cs.umass.edu Rahul Urgaonkar IBM Research rurgaon@us.bm.com

More information

CLoud computing technologies have enabled rapid

CLoud computing technologies have enabled rapid 1 Cost-Mnmzng Dynamc Mgraton of Content Dstrbuton Servces nto Hybrd Clouds Xuana Qu, Hongxng L, Chuan Wu, Zongpeng L and Francs C.M. Lau Department of Computer Scence, The Unversty of Hong Kong, Hong Kong,

More information

In some supply chains, materials are ordered periodically according to local information. This paper investigates

In some supply chains, materials are ordered periodically according to local information. This paper investigates MANUFACTURING & SRVIC OPRATIONS MANAGMNT Vol. 12, No. 3, Summer 2010, pp. 430 448 ssn 1523-4614 essn 1526-5498 10 1203 0430 nforms do 10.1287/msom.1090.0277 2010 INFORMS Improvng Supply Chan Performance:

More information

An ILP Formulation for Task Mapping and Scheduling on Multi-core Architectures

An ILP Formulation for Task Mapping and Scheduling on Multi-core Architectures An ILP Formulaton for Task Mappng and Schedulng on Mult-core Archtectures Yng Y, We Han, Xn Zhao, Ahmet T. Erdogan and Tughrul Arslan Unversty of Ednburgh, The Kng's Buldngs, Mayfeld Road, Ednburgh, EH9

More information

Robust Design of Public Storage Warehouses. Yeming (Yale) Gong EMLYON Business School

Robust Design of Public Storage Warehouses. Yeming (Yale) Gong EMLYON Business School Robust Desgn of Publc Storage Warehouses Yemng (Yale) Gong EMLYON Busness School Rene de Koster Rotterdam school of management, Erasmus Unversty Abstract We apply robust optmzaton and revenue management

More information

IWFMS: An Internal Workflow Management System/Optimizer for Hadoop

IWFMS: An Internal Workflow Management System/Optimizer for Hadoop IWFMS: An Internal Workflow Management System/Optmzer for Hadoop Lan Lu, Yao Shen Department of Computer Scence and Engneerng Shangha JaoTong Unversty Shangha, Chna lustrve@gmal.com, yshen@cs.sjtu.edu.cn

More information

An Alternative Way to Measure Private Equity Performance

An Alternative Way to Measure Private Equity Performance An Alternatve Way to Measure Prvate Equty Performance Peter Todd Parlux Investment Technology LLC Summary Internal Rate of Return (IRR) s probably the most common way to measure the performance of prvate

More information

Checkng and Testng in Nokia RMS Process

Checkng and Testng in Nokia RMS Process An Integrated Schedulng Mechansm for Fault-Tolerant Modular Avoncs Systems Yann-Hang Lee Mohamed Youns Jeff Zhou CISE Department Unversty of Florda Ganesvlle, FL 326 yhlee@cse.ufl.edu Advanced System Technology

More information

Dynamic Fleet Management for Cybercars

Dynamic Fleet Management for Cybercars Proceedngs of the IEEE ITSC 2006 2006 IEEE Intellgent Transportaton Systems Conference Toronto, Canada, September 17-20, 2006 TC7.5 Dynamc Fleet Management for Cybercars Fenghu. Wang, Mng. Yang, Ruqng.

More information

A Programming Model for the Cloud Platform

A Programming Model for the Cloud Platform Internatonal Journal of Advanced Scence and Technology A Programmng Model for the Cloud Platform Xaodong Lu School of Computer Engneerng and Scence Shangha Unversty, Shangha 200072, Chna luxaodongxht@qq.com

More information

How To Solve A Problem In A Powerline (Powerline) With A Powerbook (Powerbook)

How To Solve A Problem In A Powerline (Powerline) With A Powerbook (Powerbook) MIT 8.996: Topc n TCS: Internet Research Problems Sprng 2002 Lecture 7 March 20, 2002 Lecturer: Bran Dean Global Load Balancng Scrbe: John Kogel, Ben Leong In today s lecture, we dscuss global load balancng

More information

Load Balancing By Max-Min Algorithm in Private Cloud Environment

Load Balancing By Max-Min Algorithm in Private Cloud Environment Internatonal Journal of Scence and Research (IJSR ISSN (Onlne: 2319-7064 Index Coperncus Value (2013: 6.14 Impact Factor (2013: 4.438 Load Balancng By Max-Mn Algorthm n Prvate Cloud Envronment S M S Suntharam

More information

Profit-Aware DVFS Enabled Resource Management of IaaS Cloud

Profit-Aware DVFS Enabled Resource Management of IaaS Cloud IJCSI Internatonal Journal of Computer Scence Issues, Vol. 0, Issue, No, March 03 ISSN (Prnt): 694-084 ISSN (Onlne): 694-0784 www.ijcsi.org 37 Proft-Aware DVFS Enabled Resource Management of IaaS Cloud

More information

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 畫 類 別 : 個 別 型 計 畫 半 導 體 產 業 大 型 廠 房 之 設 施 規 劃 計 畫 編 號 :NSC 96-2628-E-009-026-MY3 執 行 期 間 : 2007 年 8 月 1 日 至 2010 年 7 月 31 日 計 畫 主 持 人 : 巫 木 誠 共 同

More information

INSTITUT FÜR INFORMATIK

INSTITUT FÜR INFORMATIK INSTITUT FÜR INFORMATIK Schedulng jobs on unform processors revsted Klaus Jansen Chrstna Robene Bercht Nr. 1109 November 2011 ISSN 2192-6247 CHRISTIAN-ALBRECHTS-UNIVERSITÄT ZU KIEL Insttut für Informat

More information

Politecnico di Torino. Porto Institutional Repository

Politecnico di Torino. Porto Institutional Repository Poltecnco d Torno Porto Insttutonal Repostory [Artcle] A cost-effectve cloud computng framework for acceleratng multmeda communcaton smulatons Orgnal Ctaton: D. Angel, E. Masala (2012). A cost-effectve

More information

Formulating & Solving Integer Problems Chapter 11 289

Formulating & Solving Integer Problems Chapter 11 289 Formulatng & Solvng Integer Problems Chapter 11 289 The Optonal Stop TSP If we drop the requrement that every stop must be vsted, we then get the optonal stop TSP. Ths mght correspond to a ob sequencng

More information

Performance Analysis of Energy Consumption of Smartphone Running Mobile Hotspot Application

Performance Analysis of Energy Consumption of Smartphone Running Mobile Hotspot Application Internatonal Journal of mart Grd and lean Energy Performance Analyss of Energy onsumpton of martphone Runnng Moble Hotspot Applcaton Yun on hung a chool of Electronc Engneerng, oongsl Unversty, 511 angdo-dong,

More information

SPEE Recommended Evaluation Practice #6 Definition of Decline Curve Parameters Background:

SPEE Recommended Evaluation Practice #6 Definition of Decline Curve Parameters Background: SPEE Recommended Evaluaton Practce #6 efnton of eclne Curve Parameters Background: The producton hstores of ol and gas wells can be analyzed to estmate reserves and future ol and gas producton rates and

More information

2008/8. An integrated model for warehouse and inventory planning. Géraldine Strack and Yves Pochet

2008/8. An integrated model for warehouse and inventory planning. Géraldine Strack and Yves Pochet 2008/8 An ntegrated model for warehouse and nventory plannng Géraldne Strack and Yves Pochet CORE Voe du Roman Pays 34 B-1348 Louvan-la-Neuve, Belgum. Tel (32 10) 47 43 04 Fax (32 10) 47 43 01 E-mal: corestat-lbrary@uclouvan.be

More information

Optimal Map Reduce Job Capacity Allocation in Cloud Systems

Optimal Map Reduce Job Capacity Allocation in Cloud Systems Optmal Map Reduce Job Capacty Allocaton n Cloud Systems Marzeh Malemajd Sharf Unversty of Technology, Iran malemajd@ce.sharf.edu Danlo Ardagna Poltecnco d Mlano, Italy danlo.ardagna@polm.t Mchele Cavotta

More information

taposh_kuet20@yahoo.comcsedchan@cityu.edu.hk rajib_csedept@yahoo.co.uk, alam_shihabul@yahoo.com

taposh_kuet20@yahoo.comcsedchan@cityu.edu.hk rajib_csedept@yahoo.co.uk, alam_shihabul@yahoo.com G. G. Md. Nawaz Al 1,2, Rajb Chakraborty 2, Md. Shhabul Alam 2 and Edward Chan 1 1 Cty Unversty of Hong Kong, Hong Kong, Chna taposh_kuet20@yahoo.comcsedchan@ctyu.edu.hk 2 Khulna Unversty of Engneerng

More information

Optimization of network mesh topologies and link capacities for congestion relief

Optimization of network mesh topologies and link capacities for congestion relief Optmzaton of networ mesh topologes and ln capactes for congeston relef D. de Vllers * J.M. Hattngh School of Computer-, Statstcal- and Mathematcal Scences Potchefstroom Unversty for CHE * E-mal: rwddv@pu.ac.za

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and Ths artcle appeared n a journal publshed by Elsever. The attached copy s furnshed to the author for nternal non-commercal research and educaton use, ncludng for nstructon at the authors nsttuton and sharng

More information

CloudMedia: When Cloud on Demand Meets Video on Demand

CloudMedia: When Cloud on Demand Meets Video on Demand CloudMeda: When Cloud on Demand Meets Vdeo on Demand Yu Wu, Chuan Wu, Bo L, Xuanja Qu, Francs C.M. Lau Department of Computer Scence, The Unversty of Hong Kong, Emal: {ywu,cwu,xjqu,fcmlau}@cs.hku.hk Department

More information

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) 2127472, Fax: (370-5) 276 1380, Email: info@teltonika.

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) 2127472, Fax: (370-5) 276 1380, Email: info@teltonika. VRT012 User s gude V0.1 Thank you for purchasng our product. We hope ths user-frendly devce wll be helpful n realsng your deas and brngng comfort to your lfe. Please take few mnutes to read ths manual

More information

How To Plan A Network Wide Load Balancing Route For A Network Wde Network (Network)

How To Plan A Network Wide Load Balancing Route For A Network Wde Network (Network) Network-Wde Load Balancng Routng Wth Performance Guarantees Kartk Gopalan Tz-cker Chueh Yow-Jan Ln Florda State Unversty Stony Brook Unversty Telcorda Research kartk@cs.fsu.edu chueh@cs.sunysb.edu yjln@research.telcorda.com

More information

Efficient Bandwidth Management in Broadband Wireless Access Systems Using CAC-based Dynamic Pricing

Efficient Bandwidth Management in Broadband Wireless Access Systems Using CAC-based Dynamic Pricing Effcent Bandwdth Management n Broadband Wreless Access Systems Usng CAC-based Dynamc Prcng Bader Al-Manthar, Ndal Nasser 2, Najah Abu Al 3, Hossam Hassanen Telecommuncatons Research Laboratory School of

More information

A Lyapunov Optimization Approach to Repeated Stochastic Games

A Lyapunov Optimization Approach to Repeated Stochastic Games PROC. ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING, OCT. 2013 1 A Lyapunov Optmzaton Approach to Repeated Stochastc Games Mchael J. Neely Unversty of Southern Calforna http://www-bcf.usc.edu/

More information

1 Example 1: Axis-aligned rectangles

1 Example 1: Axis-aligned rectangles COS 511: Theoretcal Machne Learnng Lecturer: Rob Schapre Lecture # 6 Scrbe: Aaron Schld February 21, 2013 Last class, we dscussed an analogue for Occam s Razor for nfnte hypothess spaces that, n conjuncton

More information

Sangam - Efficient Cellular-WiFi CDN-P2P Group Framework for File Sharing Service

Sangam - Efficient Cellular-WiFi CDN-P2P Group Framework for File Sharing Service Sangam - Effcent Cellular-WF CDN-P2P Group Framework for Fle Sharng Servce Anjal Srdhar Unversty of Illnos, Urbana-Champagn Urbana, USA srdhar3@llnos.edu Klara Nahrstedt Unversty of Illnos, Urbana-Champagn

More information

Linear Circuits Analysis. Superposition, Thevenin /Norton Equivalent circuits

Linear Circuits Analysis. Superposition, Thevenin /Norton Equivalent circuits Lnear Crcuts Analyss. Superposton, Theenn /Norton Equalent crcuts So far we hae explored tmendependent (resste) elements that are also lnear. A tmendependent elements s one for whch we can plot an / cure.

More information

8.5 UNITARY AND HERMITIAN MATRICES. The conjugate transpose of a complex matrix A, denoted by A*, is given by

8.5 UNITARY AND HERMITIAN MATRICES. The conjugate transpose of a complex matrix A, denoted by A*, is given by 6 CHAPTER 8 COMPLEX VECTOR SPACES 5. Fnd the kernel of the lnear transformaton gven n Exercse 5. In Exercses 55 and 56, fnd the mage of v, for the ndcated composton, where and are gven by the followng

More information

DBA-VM: Dynamic Bandwidth Allocator for Virtual Machines

DBA-VM: Dynamic Bandwidth Allocator for Virtual Machines DBA-VM: Dynamc Bandwdth Allocator for Vrtual Machnes Ahmed Amamou, Manel Bourguba, Kamel Haddadou and Guy Pujolle LIP6, Perre & Mare Cure Unversty, 4 Place Jusseu 755 Pars, France Gand SAS, 65 Boulevard

More information

Fair Virtual Bandwidth Allocation Model in Virtual Data Centers

Fair Virtual Bandwidth Allocation Model in Virtual Data Centers Far Vrtual Bandwdth Allocaton Model n Vrtual Data Centers Yng Yuan, Cu-rong Wang, Cong Wang School of Informaton Scence and Engneerng ortheastern Unversty Shenyang, Chna School of Computer and Communcaton

More information

Methodology to Determine Relationships between Performance Factors in Hadoop Cloud Computing Applications

Methodology to Determine Relationships between Performance Factors in Hadoop Cloud Computing Applications Methodology to Determne Relatonshps between Performance Factors n Hadoop Cloud Computng Applcatons Lus Eduardo Bautsta Vllalpando 1,2, Alan Aprl 1 and Alan Abran 1 1 Department of Software Engneerng and

More information

Research of concurrency control protocol based on the main memory database

Research of concurrency control protocol based on the main memory database Research of concurrency control protocol based on the man memory database Abstract Yonghua Zhang * Shjazhuang Unversty of economcs, Shjazhuang, Shjazhuang, Chna Receved 1 October 2014, www.cmnt.lv The

More information

What is Candidate Sampling

What is Candidate Sampling What s Canddate Samplng Say we have a multclass or mult label problem where each tranng example ( x, T ) conssts of a context x a small (mult)set of target classes T out of a large unverse L of possble

More information

M3S MULTIMEDIA MOBILITY MANAGEMENT AND LOAD BALANCING IN WIRELESS BROADCAST NETWORKS

M3S MULTIMEDIA MOBILITY MANAGEMENT AND LOAD BALANCING IN WIRELESS BROADCAST NETWORKS M3S MULTIMEDIA MOBILITY MANAGEMENT AND LOAD BALANCING IN WIRELESS BROADCAST NETWORKS Bogdan Cubotaru, Gabrel-Mro Muntean Performance Engneerng Laboratory, RINCE School of Electronc Engneerng Dubln Cty

More information

Logical Development Of Vogel s Approximation Method (LD-VAM): An Approach To Find Basic Feasible Solution Of Transportation Problem

Logical Development Of Vogel s Approximation Method (LD-VAM): An Approach To Find Basic Feasible Solution Of Transportation Problem INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH VOLUME, ISSUE, FEBRUARY ISSN 77-866 Logcal Development Of Vogel s Approxmaton Method (LD- An Approach To Fnd Basc Feasble Soluton Of Transportaton

More information

8 Algorithm for Binary Searching in Trees

8 Algorithm for Binary Searching in Trees 8 Algorthm for Bnary Searchng n Trees In ths secton we present our algorthm for bnary searchng n trees. A crucal observaton employed by the algorthm s that ths problem can be effcently solved when the

More information

QoS-based Scheduling of Workflow Applications on Service Grids

QoS-based Scheduling of Workflow Applications on Service Grids QoS-based Schedulng of Workflow Applcatons on Servce Grds Ja Yu, Rakumar Buyya and Chen Khong Tham Grd Computng and Dstrbuted System Laboratory Dept. of Computer Scence and Software Engneerng The Unversty

More information

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT Chapter 4 ECOOMIC DISATCH AD UIT COMMITMET ITRODUCTIO A power system has several power plants. Each power plant has several generatng unts. At any pont of tme, the total load n the system s met by the

More information

A Secure Password-Authenticated Key Agreement Using Smart Cards

A Secure Password-Authenticated Key Agreement Using Smart Cards A Secure Password-Authentcated Key Agreement Usng Smart Cards Ka Chan 1, Wen-Chung Kuo 2 and Jn-Chou Cheng 3 1 Department of Computer and Informaton Scence, R.O.C. Mltary Academy, Kaohsung 83059, Tawan,

More information

Self-Adaptive Capacity Management for Multi-Tier Virtualized Environments

Self-Adaptive Capacity Management for Multi-Tier Virtualized Environments Self-Adaptve Capacty Management for Mult-Ter Vrtualzed Envronments Ítalo Cunha, Jussara Almeda, Vrgílo Almeda, Marcos Santos Computer Scence Department Federal Unversty of Mnas Geras Belo Horzonte, Brazl,

More information

Credit Limit Optimization (CLO) for Credit Cards

Credit Limit Optimization (CLO) for Credit Cards Credt Lmt Optmzaton (CLO) for Credt Cards Vay S. Desa CSCC IX, Ednburgh September 8, 2005 Copyrght 2003, SAS Insttute Inc. All rghts reserved. SAS Propretary Agenda Background Tradtonal approaches to credt

More information

A New Task Scheduling Algorithm Based on Improved Genetic Algorithm

A New Task Scheduling Algorithm Based on Improved Genetic Algorithm A New Task Schedulng Algorthm Based on Improved Genetc Algorthm n Cloud Computng Envronment Congcong Xong, Long Feng, Lxan Chen A New Task Schedulng Algorthm Based on Improved Genetc Algorthm n Cloud Computng

More information

Support Vector Machines

Support Vector Machines Support Vector Machnes Max Wellng Department of Computer Scence Unversty of Toronto 10 Kng s College Road Toronto, M5S 3G5 Canada wellng@cs.toronto.edu Abstract Ths s a note to explan support vector machnes.

More information

Network Services Definition and Deployment in a Differentiated Services Architecture

Network Services Definition and Deployment in a Differentiated Services Architecture etwork Servces Defnton and Deployment n a Dfferentated Servces Archtecture E. kolouzou, S. Manats, P. Sampatakos,. Tsetsekas, I. S. Veners atonal Techncal Unversty of Athens, Department of Electrcal and

More information

Conferencing protocols and Petri net analysis

Conferencing protocols and Petri net analysis Conferencng protocols and Petr net analyss E. ANTONIDAKIS Department of Electroncs, Technologcal Educatonal Insttute of Crete, GREECE ena@chana.tecrete.gr Abstract: Durng a computer conference, users desre

More information

Application of Multi-Agents for Fault Detection and Reconfiguration of Power Distribution Systems

Application of Multi-Agents for Fault Detection and Reconfiguration of Power Distribution Systems 1 Applcaton of Mult-Agents for Fault Detecton and Reconfguraton of Power Dstrbuton Systems K. Nareshkumar, Member, IEEE, M. A. Choudhry, Senor Member, IEEE, J. La, A. Felach, Senor Member, IEEE Abstract--The

More information

Price Competition in an Oligopoly Market with Multiple IaaS Cloud Providers

Price Competition in an Oligopoly Market with Multiple IaaS Cloud Providers Prce Competton n an Olgopoly Market wth Multple IaaS Cloud Provders Yuan Feng, Baochun L, Bo L Department of Computng, Hong Kong Polytechnc Unversty Department of Electrcal and Computer Engneerng, Unversty

More information

Power Low Modified Dual Priority in Hard Real Time Systems with Resource Requirements

Power Low Modified Dual Priority in Hard Real Time Systems with Resource Requirements Power Low Modfed Dual Prorty n Hard Real Tme Systems wth Resource Requrements M.Angels Moncusí, Alex Arenas {amoncus,aarenas}@etse.urv.es Dpt d'engnyera Informàtca Matemàtques Unverstat Rovra Vrgl Campus

More information

ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING

ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING Matthew J. Lberatore, Department of Management and Operatons, Vllanova Unversty, Vllanova, PA 19085, 610-519-4390,

More information

Sngle Snk Buy at Bulk Problem and the Access Network

Sngle Snk Buy at Bulk Problem and the Access Network A Constant Factor Approxmaton for the Sngle Snk Edge Installaton Problem Sudpto Guha Adam Meyerson Kamesh Munagala Abstract We present the frst constant approxmaton to the sngle snk buy-at-bulk network

More information

METHODOLOGY TO DETERMINE RELATIONSHIPS BETWEEN PERFORMANCE FACTORS IN HADOOP CLOUD COMPUTING APPLICATIONS

METHODOLOGY TO DETERMINE RELATIONSHIPS BETWEEN PERFORMANCE FACTORS IN HADOOP CLOUD COMPUTING APPLICATIONS METHODOLOGY TO DETERMINE RELATIONSHIPS BETWEEN PERFORMANCE FACTORS IN HADOOP CLOUD COMPUTING APPLICATIONS Lus Eduardo Bautsta Vllalpando 1,2, Alan Aprl 1 and Alan Abran 1 1 Department of Software Engneerng

More information

Minimal Coding Network With Combinatorial Structure For Instantaneous Recovery From Edge Failures

Minimal Coding Network With Combinatorial Structure For Instantaneous Recovery From Edge Failures Mnmal Codng Network Wth Combnatoral Structure For Instantaneous Recovery From Edge Falures Ashly Joseph 1, Mr.M.Sadsh Sendl 2, Dr.S.Karthk 3 1 Fnal Year ME CSE Student Department of Computer Scence Engneerng

More information

Energy Conserving Routing in Wireless Ad-hoc Networks

Energy Conserving Routing in Wireless Ad-hoc Networks Energy Conservng Routng n Wreless Ad-hoc Networks Jae-Hwan Chang and Leandros Tassulas Department of Electrcal and Computer Engneerng & Insttute for Systems Research Unversty of Maryland at College ark

More information