Computer Networks 55 (2011) 3503-3516. Contents lists available at ScienceDirect. Computer Networks. journal homepage: www.elsevier.com/locate/comnet

Bonded deficit round robin scheduling for multi-channel networks

Dessislava Nikolova, Chris Blondia
PATS Research Group, Dept. Math. and Computer Science, University of Antwerp, Middelheimlaan 1, B-2020 Antwerp, Belgium
Interdisciplinary Institute for BroadBand Technology (IBBT), Belgium

Article history: Received 9 May 2010. Received in revised form 13 January 2011. Accepted 7 July 2011. Available online 22 July 2011.

Keywords: Scheduling; Multiple channels; Access networks; Wired networks; DOCSIS 3.0; Channel bonding

Abstract: In order to increase the link capacity in telecommunication networks, the bandwidth of multiple channels can be aggregated by transmitting on them simultaneously. The latest data-over-cable service interface specification (DOCSIS 3.0) for hybrid fiber coax networks defines a mechanism for channel bonding at the link layer. Thus, the scheduler at the cable modem termination system, which distributes the packets on the network, not only has to support per-flow queuing but also has to distribute the packets to one modem over possibly several channels. In this article we propose two downstream multi-channel packet scheduling algorithms designed to support scheduling amongst flows possibly using different numbers of channels. Both algorithms are based on the deficit round robin (DRR) scheduler. The bonded deficit round robin (BDRR) algorithm has complexity dependent only on the number of channels and requires only one queue per flow. It is shown that the algorithm is a latency-rate server, and its latency is derived. Furthermore, BDRR bounds the packet reordering, and the maximum bounds on the packet delay and on the reorder buffer needed at the receiver are calculated. The paper also explores a second algorithm which has more similarities with load balancing algorithms. It uses fully independent channel schedulers, thus avoiding the need for modification of the single channel DRR algorithm. The transmission channel for a packet is selected upon its arrival. However, this algorithm does not bound the latency and packet reordering for flows assigned to receive on multiple channels. Flows for which such bounds are needed should be assigned to a single channel. © 2011 Elsevier B.V. All rights reserved.

1. Introduction

One of the currently most deployed wired broadband access network solutions is the hybrid fiber coax (HFC) network. It uses the legacy community antenna television cables and is a point-to-multipoint network with a tree topology. The terminal equipment placed at the root of an HFC network is referred to as a cable modem termination system (CMTS). It is connected via coaxial or both optical and coaxial cable to cable modems (CM) situated at the customer premises. There is a family of data-over-cable service interface specifications (DOCSIS) which standardize the physical and the medium access control (MAC) layers and also the QoS support. The speed of a DOCSIS-based network depends on the modulation used on the physical layer. For example, a network based on the DOCSIS 2.0 standard [1] offers an effective downstream bandwidth (from the CMTS to the CMs) of approximately 40 Mb/s provided 256QAM modulation is used. Increasing the modulation further would increase the speed but is technologically expensive. Another access network technology, FTTx (fiber-to-the-home, fiber-to-the-business or fiber-to-the-curb), uses optical fiber and has gained a lot of momentum in recent years.

Corresponding author at: London Center for Nanotechnology, Gordon Street 17-19, WC1H 0AH, London, UK. E-mail address: dessie.nikolova@gmail.com (D. Nikolova).
The most promising fiber network is considered to be the passive optical network (PON). There exist a number of different PON types, like ATM PON (APON) and Ethernet PON (EPON), to name some of them. They are also point-to-multipoint networks and offer very high speeds. For example, the throughput of EPON approaches 1 Gb/s [2]. In order to be able to provide higher bandwidths to compete with the FTTx offerings, a new technology was specified in the DOCSIS 3.0 specification for HFC networks [3]. In particular, DOCSIS 3.0 defines a mechanism for forwarding upstream and downstream packets between the CMTS and the CM by utilizing the bandwidth of multiple physical layer channels. In this way the throughput can be significantly increased without increasing the modulation, while still using the legacy cable. The mechanism is termed channel bonding and is realized at the MAC level. By balancing the CMs amongst the channels the full capacity of a channel bonded system can be utilized. However, DOCSIS 3.0 significantly expands the downstream service offering by requiring the DOCSIS 3.0 CM to be capable of receiving and transmitting on multiple channels simultaneously. The individual CMs can have different capacities in terms of the number of channels they can receive simultaneously. The legacy CMs can receive on only one channel, while the DOCSIS 3.0 CMs can receive on 2, 3 or more channels. An example of a system with 4 channels is given in Fig. 1 (Fig. 1. Frequency assignment). There are 4 CMs reached by these channels. However, the modems have different capacities in terms of the number of channels they can receive on simultaneously and are balanced amongst the channels. CM1 has 4 receivers and is assigned to all channels. CM2 and CM4 each have 2 receivers and are assigned correspondingly to the bonding groups (BG): BG1 consisting of frequencies f1 and f2, and BG2 consisting of f2 and f3. Finally, CM3 has only one receiver and is assigned to f4. Thus a packet transmitted on channel f1 will be received by CM1 and CM2, while a packet transmitted on channel f2 will be received by CM1, CM2 and CM4. Similarly, a packet transmitted on channel f3 would be received by CM1 and CM4, and a packet transmitted on f4 will be received by CM1 and CM3.
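To make the bonding-group example of Fig. 1 concrete, the sketch below encodes the assignment as a simple mapping and answers the question "which modems receive a packet sent on a given channel". The names and structure are our own illustration, not part of DOCSIS or of the paper's algorithms.

```python
# Channel assignment from Fig. 1 (illustrative encoding; names are ours).
# CM1 has 4 receivers, CM2 and CM4 have 2, CM3 has 1.
bonding_groups = {
    "CM1": {"f1", "f2", "f3", "f4"},
    "CM2": {"f1", "f2"},          # BG1
    "CM3": {"f4"},
    "CM4": {"f2", "f3"},          # BG2
}

def receivers(channel: str) -> set[str]:
    """Modems whose bonding group contains the given downstream channel."""
    return {cm for cm, channels in bonding_groups.items() if channel in channels}

# A packet sent on f2 is received by CM1, CM2 and CM4, as in the text.
assert receivers("f2") == {"CM1", "CM2", "CM4"}
assert receivers("f4") == {"CM1", "CM3"}
```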
The CMTS may dynamically change the assignment of a CM to a different BG. However, this incurs some protocol overhead and delay and thus does not happen on a short timescale. In this paper we consider only static channel assignment, i.e. the CMTS does not dynamically reassign CMs. The scheduler at the CMTS has to distribute the packets over the set of downstream (DS) channels for delivery to a single CM. Each complete packet is transmitted on a single channel. The channels can have different modulation and thus different bit rates. The packets are tagged with a sequence number. In this way the proper packet sequencing is not lost if there are different latencies on the channels. The CM restores the original sequence before forwarding the packets to the user devices. DOCSIS provides quality-of-service (QoS) by mapping packets into service flows. A CM has at least one service flow for each direction. Thus, the scheduler at the CMTS should support per-flow queuing. It is also expected that it is work conserving and utilizes the available bandwidth on all channels efficiently. Existing load balancing (also called striping or load sharing) algorithms for distributing packets over multiple channels, like [4-8], do not schedule amongst flows. Existing multi-channel scheduling algorithms [9-13], for technologies like point-to-point networks, standardized in [14], and for Ethernet systems [15], presume that all flows can be transmitted on all channels. A schematic view of the system model is given in Fig. 2 (Fig. 2. System model of point-to-point multi-channel networks).

These scheduling algorithms use a single scheduler, which selects the next packet to be transmitted according to the scheduling discipline and then sends it on any free channel. In networks where all flows can be transmitted on all channels they are work conserving and fully utilize the available capacity. However, when applied to a system like the DOCSIS 3.0 HFC network, where the flows cannot be distributed over all channels, such disciplines might lead to scheduler blocking. For example, when the single scheduler selects the next flow to transmit a packet, it is possible that none of the free channels is a channel on which this flow can transmit. Thus the scheduler will remain blocked until one of these channels becomes free. The contribution of this article is that it proposes and analyzes two multi-channel scheduling algorithms which can schedule flows that can receive on only a selection of the available channels. Even though they both use deficit round robin (DRR) as the channel scheduler, they use different types of queuing, which results in different performance. The output queuing DRR (OutQ-DRR) is a simple extension of DRR to multiple channels, since it does not require modification of the single channel DRR algorithm: the multi-channel scheduling is achieved by adding a packet distribution process preceding the single channel DRRs. Thus OutQ-DRR is a potentially easy and cheap multi-channel scheduling solution. The bonded DRR (BDRR), on the other hand, is an algorithm which modifies the enqueue and dequeue processes of the single channel DRR and as such is more complex to implement. Our analysis shows that BDRR bounds the latency and packet reordering, and these bounds are derived. The packet reordering bound indicates the size of the buffer needed at the client side. The latency gives the maximum delay a packet from a leaky-bucket shaped flow will experience, which is of importance for time-critical applications. We have also shown in our analysis that OutQ-DRR is not a latency-rate server for flows which can be transmitted on multiple channels. Thus, if the network needs to provide a delay guarantee to a flow or to limit its packet reordering, the flow has to be assigned to one channel, in which case OutQ-DRR does bound the latency and packet reordering. Before describing the two possible queuing mechanisms in Section 3, the following section contains more details on QoS provisioning in DOCSIS 3.0 and reviews related work, including the DRR algorithm. The parameters and variables that need to be defined for this algorithm, and their adaptation for use in a multi-channel environment, are discussed in Section 4. In Section 5 an algorithm for output queuing is described, and in the following section the BDRR algorithm, which combines DRR with input queuing of the packets, is presented. Theoretical analysis of the BDRR algorithm, its latency and its packet reordering is provided in Section 7, followed by simulation results to support the analysis. In the last section the conclusions are drawn.

2. Background and related work

2.1. QoS in DOCSIS 3.0

DOCSIS 3.0 inherits the Quality-of-Service (QoS) support from the older specifications. The QoS support in HFC networks is realized by mapping the packets to service flows (SF) and scheduling these SFs according to a set of QoS parameters. These parameters may include traffic priority, token bucket rate shaping/limiting, reserved (guaranteed) data rate, and latency and jitter guarantees. The downstream packets are classified into downstream SFs based on their Ethernet header and/or the TCP/IP/UDP headers, as configured by the service provider. Thus, an SF is a unidirectional flow of packets that is provided a particular Quality of Service. A CM has at least one SF for each direction, referred to as the primary SF, and can have many different SFs. The scheduler at the CMTS arbitrates the allocation of the DS bandwidth amongst the different SFs. Thus a DOCSIS 3.0 scheduler should be able to support per-flow queuing. The MAC layer differs significantly between the downstream (DS) and the upstream (US). In the US the CMTS arbitrates the transmissions from the CMs via grants. The standard allows for five different scheduling service types, ranging from best effort, which uses requests via a contention channel, to unsolicited grants service, where the CMTS allocates bandwidth at regular intervals. They differ in the request-grant mechanism and the QoS parameters. Often different schedulers are used for the different service types. For example, the main requirements for the best effort service scheduler are to have low complexity, to guarantee fairness and isolation of the flows, and to support a maximum reserved rate. Thus, any packet round robin scheduler combined with a traffic shaper can be used. The unsolicited grants service has very stringent delay and delay variation requirements, and a sorted priority scheduler is more suitable for this service. Moreover, in the upstream, concatenation and fragmentation are allowed. Different aspects of the upstream MAC layer and scheduling are studied in [16-19]. Due to the inherent differences in the MAC layer definition between the upstream and the DS transmission, the bonding mechanisms themselves are quite different in the two directions. In this article we concentrate solely on the downstream direction.

2.2. Related work

There are a significant number of per-flow queuing algorithms for a single channel reported in the literature. Sorted priority scheduling algorithms like WF²Q+ [20] and STFQ [21] have complexity depending on the number of flows/users to be scheduled. Round robin algorithms like DRR [22], LBFS-DRR [23], Stratified RR [24], EBRR [25], Pre-order DRR [26] and ERR [27] have low complexity, which makes them a preferred choice for high-speed networks. Of these, DRR is the most studied and is deployed in some high-speed routers [28].
According to the DRR algorithm, each flow contending for resources is assigned a quantum and a deficit counter (DC). The quantum indicates the portion of the resources in a round robin cycle a flow should get. The DC tracks the amount of service the flow can still receive in a round. The flows are serviced in round robin order. Each round a flow is visited once. Upon a visit to a flow, its DC is increased by its quantum. A packet is transmitted if its size is not more than the DC. After a packet is sent the DC is decreased by the size of the packet. In this way, if part of the deficit counter remained unused in a round it will still be available in the next one. Only backlogged flows are serviced. To realize this, the DRR scheduler maintains a list of all backlogged flows. When a flow is no longer backlogged it is removed from the list and its DC is set to 0. When a flow becomes backlogged it is added at the tail of the list. DRR is a fair scheduler. It provides rate guarantees when the quantums are assigned proportionally to the flows' required rates.
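As a point of reference for the multi-channel variants discussed later, the following is a minimal sketch of the single-channel DRR round just described. It is our own illustration, not the paper's implementation; names such as `Flow` and `drr_round` are made up, and packet transmission is reduced to popping sizes from a queue.

```python
from collections import deque

class Flow:
    def __init__(self, quantum):
        self.quantum = quantum      # share of the resources per round
        self.deficit = 0            # deficit counter (DC)
        self.queue = deque()        # queued packet sizes in bytes

def drr_round(backlogged, send):
    """One round robin cycle over the backlogged flows (a list of Flow)."""
    for _ in range(len(backlogged)):
        flow = backlogged.pop(0)
        flow.deficit += flow.quantum                    # DC increased by the quantum
        while flow.queue and flow.queue[0] <= flow.deficit:
            size = flow.queue.popleft()
            flow.deficit -= size                        # DC decreased by the packet size
            send(size)
        if flow.queue:
            backlogged.append(flow)                     # still backlogged: back to the tail
        else:
            flow.deficit = 0                            # no longer backlogged: reset DC
```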

Link aggregation is a common technique to increase the available capacity. For point-to-point networks it is standardized in [14] and for Ethernet systems in [15]. It is analogous to multi-server systems, which also arise in multiprocessor architectures, multi-path storage I/O and cluster-based Web servers, to name a few. Load balancing algorithms, also referred to as load sharing or striping algorithms, are used to share the load amongst the servers or channels in a fair manner. There is a plethora of load balancing algorithms proposed, ranging from simple static policies, such as random distribution or a round-robin policy [29], to ones using hashing [6], incorporating prediction schemes [5], or resulting in minimal [8] or completely ordered output [7]. These load balancing algorithms, however, do not perform flow scheduling. For example, for the striping algorithm proposed in [29], the surplus round robin (SRR) scheduler [4] is used. For each channel it keeps a surplus counter, which tracks the amount of bytes dispatched on the channel in a round. Note that it does not schedule amongst flows but amongst channels. Algorithms supporting per-flow queuing for multi-link systems have been designed for point-to-point systems, for multiple processors and for Ethernet based networks. A multi-server DRR has been proposed in [13]. It works in the same manner as a single channel DRR but simply transmits the packet on the first available channel. In networks where all flows can be transmitted on all channels it is work conserving and fully utilizes the available capacity. However, as pointed out in Section 1, such a straightforward extension of DRR to a multi-channel system with flows which can be transmitted on only a selection of the available channels might result in inefficient utilization. When the next flow is selected for service, the channels on which it can transmit packets might not be free. Thus, the scheduler will be blocked until one of these channels becomes free. In [11] a theoretical framework for the evaluation of sorted-priority multi-channel scheduling algorithms is set out. The authors first consider multi-channel scheduling which emulates a single channel. However, as discussed in the same reference, such a scheduler is not work conserving. They also propose and analyze a sorted priority scheduler which is work conserving, presuming that all flows can send packets on all available channels. The problem of provisioning QoS among competing flows over a system of links was addressed in [9]. The scheduling algorithm, multi-server fair queueing (MSFQ), is based on the generalized processor sharing (GPS) [30] system, an idealized service discipline that is also representative of a perfectly fair system. MSFQ simply selects the next packet that would leave the associated GPS system and sends it on the next available link. The complexity of the algorithm is determined by the complexity of Packet GPS, which is O(log N), where N is the number of backlogged flows. In [10] the MSFQ algorithm is further extended by dividing the scheduling process into two steps. Each time a link becomes free and there has been a change in the set of backlogged flows, first the partitioning step is executed. It assigns flows to channels and corresponding weights for each flow on each channel it is assigned to. In the second step packet GPS is used to schedule amongst the flows.
This algorithm could easily be extended to be applicable to systems where the flows can be distributed over only a selection of the available channels. However, it has very high complexity, namely O(M log M + N log N), where M designates the number of channels and N the number of flows. In [12] a packet scheduling algorithm is proposed for parallel processing of packets in network nodes that can handle multiple flows. It uses a combination of a sorted priority algorithm with surplus and deficit round robin. Again it presumes that all flows can be scheduled on all processors and that there is one scheduler to do the task. To ensure strictly ordered processing of the packets, the scheduler keeps a sorted list of the processors in decreasing order of the number of bytes to be scheduled on each processor, and a second sorted list in which the flows are ordered decreasingly by the amount of bytes they can send in the round. It is shown through simulations that the algorithm results in minimal packet reordering. However, no theoretical framework is provided to show whether the algorithm is a latency-rate server or to bound the amount of reordering incurred with this scheduling mechanism. Furthermore, the algorithm presumes that all packets can be scheduled on all processors and is not readily extended to a system where this is not allowed. The next section describes two possible queuing mechanisms applicable to multi-channel systems and discusses their advantages and disadvantages.

3. Per-flow packet queuing in multi-channel networks

In order to ensure QoS in networks, per-flow queuing is necessary. In a multi-channel architecture the packets of a bonded flow should be distributed amongst the channels of its BG. In this respect the scheduler might keep separate queues per channel per bonded flow, or only one queue per flow as for single channel schedulers. In the remainder of this section we discuss the two possibilities. Fig. 3 shows a schematic view of a compound scheduling architecture, further on referred to as output queuing (Fig. 3. Output queuing). It consists of a packet distributing process and a separate queue per flow for each channel. The distributing process decides, upon a packet arrival, on which channel the packet should be transmitted. The channel should, of course, belong to the bonding group of the flow the packet belongs to. Once the channel is determined, the packet becomes the responsibility of the channel scheduler, which can be any per-flow queuing algorithm. Thus the packet distributor acts as a striping algorithm. The rule by which the packets are distributed to the channels will also largely determine the packet delay. A disadvantage of output queuing is that it determines the channel on which a packet is transmitted at the time of the packet's arrival in the system. It bases its decision on the information available at this moment, or on some estimate of the future load on the channels. However, the load on the channels can change when the packets are not transmitted in FIFO order. Thus, a situation might arise where there are many packets waiting to be transmitted on one channel while other channels, which reach the same flows, have no packets queued for transmission. Another major disadvantage of output queuing is that with each packet stored in the queue its sequence number also has to be stored. The advantage of output queuing is that the different channels operate completely independently, they have separate memory, and existing scheduling algorithms can be applied without the need for modification. Fig. 4 shows a schematic drawing of the distributed architecture for input queuing. In this scenario the scheduler keeps only one queue per flow, where the packets are queued immediately upon arrival. The channel on which a packet will be transmitted is determined just before the transmission itself starts. Each channel has a separate per-flow queuing scheduler. However, these channel schedulers are not independent. When a channel becomes free, its scheduler selects the next flow based on the available information. Once the flow is selected, the common memory pool of the packet queues is accessed. When a scheduler on one channel leaves a queue empty, the other channel schedulers should be informed in order to avoid unnecessary checks. In this sense the channel schedulers are not fully independent. In the figure this dependence is indicated by sketching a central control unit, which stores and updates the variables common to all channels and manages the queues. It is also responsible for any concurrency issues. The advantage of input queuing is that the decision about the channel on which a packet will be transmitted is made at the moment of packet transmission. Thus, it is based on accurate, up-to-date information about the state of all channels. Another advantage is that the packet order is preserved until transmission, so there is no need for extra buffer space to store the packet sequence numbers. Further on, two algorithms are proposed which use the deficit round robin (DRR) algorithm as the channel scheduler. Before describing them, a preliminary discussion on the quantum selection for each channel is given.
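The difference between the two architectures can be summarized by the state each one keeps. The sketch below only illustrates that bookkeeping under assumed names (`OutputQueuedState`, `InputQueuedState`); it is not code from the paper.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class OutputQueuedState:
    """Output queuing (Fig. 3): one queue per flow per channel.
    The channel is chosen at arrival, so each stored packet must carry
    its sequence number with it."""
    queues: dict = field(default_factory=dict)      # (flow_id, channel_id) -> deque of (seq_no, packet)

@dataclass
class InputQueuedState:
    """Input queuing (Fig. 4): one queue per flow plus shared bookkeeping.
    The channel is chosen just before transmission, so packets are stamped
    with their sequence number only when they leave the queue."""
    queues: dict = field(default_factory=dict)      # flow_id -> deque of packets
    backlogged: dict = field(default_factory=dict)  # channel_id -> list of flow_ids (shared state)
```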
4. Rate partitioning

DRR scheduling is determined by a quantum per flow. It indicates the portion of the resources in a round robin cycle a flow should get and hence is related to the rate at which the flow can send packets. Consider that each flow $i$ has a pre-assigned weight $w_i$ and corresponding rate $r_i$. For the common DRR algorithm these rates would be sufficient to determine the quantum. However, for the bonded version we need to introduce per-channel weights for a flow in order to determine the flow's quantum on a channel.

In a single channel DRR scheduler the flow's quantum is proportional to the flow's weight; written for the DRR scheduler on channel $m$, the flow's quantum is

$$Q_i^m = w_i^m Q_{\min}^m. \quad (1)$$

The flow's weights on the channels, $w_i^m$, are chosen such that their sum equals the flow's weight in the system, which corresponds to the weight the flow would have if it were served by a single channel scheduler:

$$w_i = \sum_{m \in \mathcal{M}_i} w_i^m, \quad (2)$$

where $\mathcal{M}_i$ is the set of channels of the flow's bonding group and $M_i$ is the number of channels in the flow's bonding group. The weights determine a rate by

$$r_i^m = w_i^m r_{\min}^m. \quad (3)$$

The minimum rate corresponds to the minimum weight that can be assigned to a flow on a channel. A natural choice is that the minimum weight is 1 and is the same for all channels. In this way any flow with the minimum weight and a single channel in its bonding group can be assigned to any channel. Thus $r_{\min}^m = r_{\min}$. From Eq. (2) the rates satisfy

$$r_i = \sum_{m \in \mathcal{M}_i} r_i^m. \quad (4)$$

(Fig. 4. Input queuing.)

The minimum quantum in DRR is typically the minimum possible quantum for which the scheduler has O(1) complexity, which is the maximum packet size on the network, $L_{\max}$. The minimum quantums on the channels are naturally selected as the minimum possible quantum for which the single channel DRR scheduler has O(1) complexity, which is the same for all channels, $Q_{\min}^m = Q_{\min}$. A frame $f^m$ is one round robin cycle amongst the backlogged flows on channel $m$. The sum of the quantums determines the frame size and, taking into account Eqs. (1) and (3), it is expressed as

$$f^m = \sum_{i=1}^{N^m} Q_i^m = \sum_{i=1}^{N^m} \frac{r_i^m}{r_{\min}} Q_{\min}, \quad (5)$$

where $N^m$ is the number of backlogged flows assigned to bonding groups which include channel $m$.
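As a small worked illustration of Eqs. (1)-(5) (our own numbers, not taken from the paper): take $Q_{\min} = L_{\max} = 1518$ bytes and $r_{\min} = 1$ Mb/s, and a flow with $r_i = 4$ Mb/s split evenly over a bonding group of 4 channels, so $r_i^m = 1$ Mb/s and $w_i^m = 1$ on each channel.

```python
L_MAX = 1518          # bytes, maximum packet size; also the minimum quantum Q_min
R_MIN = 1_000_000     # b/s, minimum per-channel reserved rate r_min

def quantum(r_channel_bps: float) -> float:
    """Eqs. (1)/(3): per-channel quantum Q_i^m = (r_i^m / r_min) * Q_min, in bytes."""
    return (r_channel_bps / R_MIN) * L_MAX

def frame_size(channel_rates_bps: list[float]) -> float:
    """Eq. (5): frame size f^m as the sum of the quantums of the backlogged flows."""
    return sum(quantum(r) for r in channel_rates_bps)

# Flow with r_i = 4 Mb/s spread over 4 channels: 1 Mb/s and one 1518-byte quantum per channel.
print(quantum(1_000_000))                   # 1518.0 bytes on each channel
# A channel carrying 31 backlogged flows of 1 Mb/s each:
print(frame_size([1_000_000] * 31))         # 47058.0 bytes per round on that channel
```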

5. DRR for channel bonded systems with output queuing (OutQ-DRR)

An output queuing algorithm, as discussed in Section 3, is determined by the type of the scheduler(s) on the channels and by the packet distributing rule. The packet distributor acts as a load sharing algorithm. Typically, in the existing load sharing algorithms there is no channel scheduler, but FIFO queuing is used. The simplest algorithm distributes the packet to the channel with the smallest FIFO queue relative to the channel rate. This is roughly equivalent to distributing the packet to the channel where it will be scheduled with the shortest delay. In the proposed OutQ-DRR algorithm the scheduler on each channel is DRR. The quantum for each channel is given by Eq. (1). The distribution rule aims at selecting the channel on which the packet will have the least waiting time. The waiting time on a channel depends on the amount of packets in the queue for this flow and the bandwidth share the flow will receive. At the moment of arrival of a packet for flow $i$, the fair share of the bandwidth $\tilde r_i^m$ for this flow on channel $m$ is

$$\tilde r_i^m = \frac{w_i^m}{\sum_{j \in N^m} w_j^m} R^m = \frac{r_i^m}{\sum_{j \in N^m} r_j^m} R^m, \quad (6)$$

where $N^m$ is the number of flows which have backlogged queues on channel $m$. Thus the waiting time for a packet of flow $i$ on channel $m$, provided there is no change in the backlogged flows for this channel, is given by

$$\tilde d_i^m = \frac{q_i^m}{\tilde r_i^m} = \frac{q_i^m \sum_{j \in N^m} r_j^m}{r_i^m R^m}, \quad (7)$$

where $q_i^m$ denotes the size of the flow's queue on channel $m$ if the packet is added to this queue. The distribution rule proposed in this article is that the packet distributor forwards the packets to the channel with the least estimated waiting time $\tilde d_i^m$ at the moment of packet arrival:

$$m = \arg\min_{\mathcal{M}_i} \tilde d_i^m. \quad (8)$$

$\tilde d_i^m$ is an estimated time because it can differ from the actual delay $d_i^m$ if the set of the backlogged flows for this channel changes. Thus, if the set of the backlogged flows on the channels does not change, the packet will experience an actual delay of $\tilde d_i^m$. Moreover, the packets will be delivered in order because the size of the packet is taken into account in the estimated delay. In Table 1 the pseudo code for the packet distributing process is given.

Table 1. Packet distributing process.
1. i = p.flow();
2. M_i = i.BondedChannelsSet()
3. for each m in M_i
4.   IF (i NOT IN BackloggedFlowsList_m)
5.     d_i^m = q_i^m (q^m + r_i^m) / (r_i^m R^m)
6.   ELSE
7.     d_i^m = q_i^m q^m / (r_i^m R^m)
8. m = argmin over M_i of d_i^m
9. IF (i NOT IN BackloggedFlowsList_m)
10.  q^m += r_i^m;
11. p.stamp(SequenceNumber)
12. SequenceNumber++;

For each flow $i$ the process keeps the set of channels $\mathcal{M}_i$ from its bonding group. For each channel $m$ it keeps a variable $q^m$ indicating the total reserved rate of the flows having packets to be transmitted on this channel. Upon arrival, a packet is classified to a service flow, say $i$. The classification is based on the destination address or other parameters from the packet's headers. The waiting times on each channel are subsequently estimated. As already discussed, they depend on the queue size on the channel, $q_i^m$, and on the sum $q^m$ of the reserved rates of the flows backlogged on the channel. The process knows the set of channels in the bonding group of the flow, and from it the channel with the lowest $\tilde d_i^m$ is selected.
If there are several channels with the same $\tilde d_i^m$, one is selected randomly. This is an important feature in order to achieve spreading of a burst of packets amongst equally loaded channels from the flow's bonding group. Provided that the flow is not backlogged on the selected channel, its channel reserved rate $r_i^m$ is added to $q^m$. Before the packet is forwarded to a channel it has to be stamped with the sequence number. The packet distributing process is the one entity in the system that can keep track of the order of packet arrival. When a flow is no longer backlogged on a channel $m$, the packet distributing process is informed so that it can reduce the $q^m$ variable by $r_i^m$. As can be seen from the pseudo code, the process complexity depends on the number of channels in the bonding group of a flow, thus the complexity is O(M), where M indicates the number of channels in the system. The DRR algorithms on the channels perform in constant time, thus the whole OutQ-DRR algorithm has O(M) complexity.
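A compact sketch of the distribution rule of Table 1 is given below. It is an illustration under our own naming (`Distributor`, `enqueue`), with the per-channel DRR schedulers left out; only the channel choice of Eqs. (6)-(8) and the sequence-number stamping are shown, and the tie-break is a simple approximation of the random choice described above.

```python
import random

class Distributor:
    """Packet distributor of OutQ-DRR: picks the channel with the least
    estimated waiting time (Table 1), then stamps the packet."""
    def __init__(self, channel_rate, flow_channel_rate, bonding_group):
        self.R = channel_rate            # m -> R^m in b/s
        self.r = flow_channel_rate       # (flow, m) -> r_i^m in b/s
        self.bg = bonding_group          # flow -> set of channels M_i
        self.q_flow = {}                 # (flow, m) -> queued bits of flow i on channel m
        self.q_rate = {m: 0.0 for m in channel_rate}      # m -> q^m (backlogged reserved rates)
        self.backlogged = {m: set() for m in channel_rate}
        self.seq = 0

    def enqueue(self, flow, size_bits):
        best, best_d = None, float("inf")
        for m in self.bg[flow]:
            q_i = self.q_flow.get((flow, m), 0) + size_bits   # queue size if packet is added
            rho = self.q_rate[m] + (0 if flow in self.backlogged[m] else self.r[(flow, m)])
            d = q_i * rho / (self.r[(flow, m)] * self.R[m])   # estimated waiting time, Eq. (7)
            if d < best_d or (d == best_d and random.random() < 0.5):   # rough random tie-break
                best, best_d = m, d
        if flow not in self.backlogged[best]:
            self.q_rate[best] += self.r[(flow, best)]
            self.backlogged[best].add(flow)
        self.q_flow[(flow, best)] = self.q_flow.get((flow, best), 0) + size_bits
        seq = self.seq
        self.seq += 1
        return best, seq      # channel to queue the packet on, and its sequence number
```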

6. DRR for channel bonded systems with input queueing

As was discussed in Section 3, in order to avoid unnecessary checks, a distributed algorithm in the input queuing architecture cannot have independent schedulers. They are bonded by using a common memory pool for the packet queues and, as will be disclosed further, other common variables. To emphasize this fact the algorithm is named bonded deficit round robin (BDRR). It is described hereafter. As an input queuing algorithm, BDRR keeps one packet queue per flow. For each channel $m$ there is a separate DRR scheduling algorithm with a BackloggedFlows$_m$ list and, per flow, a quantum given by Eq. (1) and a deficit counter. Consider a flow $i$ which requires a minimum rate $r_i$ and is assigned to a bonding group consisting of $M_i$ channels. When a packet arrives for a flow with an empty queue, a channel from the flow's bonding group is selected randomly and the scheduler for this channel is notified. When a channel is notified, the flow's id, $i$, is added to the BackloggedFlows list of the channel's DRR scheduling algorithm. If a second packet arrives before the first one is served and $M_i > 1$, a second channel from the bonding group is notified. With the further increase of the number of packets in the flow's queue, more channels are notified until all the channels from the flow's bonding group are notified. Thus, if there are $p_i$ packets in a flow's queue, the number of notified channels $n_i$ is

$$n_i = \min(p_i, M_i). \quad (9)$$

This condition guarantees that when a flow is selected for service by a channel scheduler, it will have at least one packet in its queue to be transmitted. The channel schedulers have to ensure that after a packet is transmitted the condition given by Eq. (9) is satisfied. For example, if the transmission of a packet from the flow's queue would cause the number of packets in the queue, $p_i$, to become less than the number of notified channels minus one, $n_i - 1$, the scheduler should no longer schedule packets from this flow. The flow should be removed from the BackloggedFlows list of the channel, thus the number of notified channels will be reduced by one and will become equal to $p_i$. A flow can be selected for service on more than one channel at the same time. It is presumed possible to transmit different packets from the same queue on several channels simultaneously. This can be realized when the two packets are stored in different memory blocks which can be read simultaneously. Next, the algorithm is further clarified by means of its pseudo code, but first the variables used are introduced. They are listed in Table 2.

Table 2. Bonded DRR variables.
Per channel: BackloggedFlows lists - lists with the backlogged flows per channel.
Per channel per flow: Queue_i - pointer to the packet queue; Q_i^m, DC_i^m - DRR variables for channel m.
Per flow: queue_i - the packet queue; UnusedChannels_i - set identifying the ids of the unused channels; n_i - the number of notified channels; SequenceNumber_i - a counter to track the packet sequence.

For each channel, the algorithm keeps a separate BackloggedFlows$_m$ list where the flows which have packets to be scheduled on the channel are kept. For each flow there is only one queue, Queue$_i$, where the packets classified for the flow are inserted. Per flow there are also the DRR variables, namely the flow's quantum $Q_i^m$ and deficit counter $DC_i^m$, assigned as discussed in Section 4. BDRR does not notify all channels from the flow's bonding group unless there are sufficient packets in the flow's queue to have at least one packet transmitted per channel. Each flow keeps an indicator UnusedChannels$_i$ of the channels of its bonding group which are not notified yet, i.e., for which the flow's id is not added to their BackloggedFlows lists. A channel is removed from the set when the flow has notified the scheduler of this channel that it has backlogged packets. When the flow has no more packets for the specified channel, it is inserted back into the set. The UnusedChannels set can be realized, for example, as a list to which channel IDs are added or removed, or as a bitmap where a bit is set or reset when a channel is used or not.
A variable $n_i$ is kept per flow, indicating how many channels are at the moment notified that the flow has packets to transmit, i.e., for how many channels the flow's id is in the corresponding BackloggedFlows lists. Thus the sum of $n_i$ and the number of channels in the set UnusedChannels$_i$ equals the number of channels in the bonding group of flow $i$. Finally, there is the counter SequenceNumber$_i$, which is used to tag the packets so their order can be recovered at the destination.

6.1. Enqueue process

The pseudo code of the enqueue process is given in Table 3. This function is invoked upon a packet arrival at the BDRR scheduler. When a packet arrives at the system it is classified into a flow, say $i$, and inserted in its corresponding Queue$_i$. If there are channels on which the flow is not considered backlogged, one is selected and notified that the flow has packets to transmit. Consequently, the flow's id is added to the BackloggedFlows list of the channel and the variables are updated as shown on lines 6 and 7 of the enqueue process. The selectChannel function searches the UnusedChannels$_i$ set for a channel that has no scheduled transmission at the moment, which in the worst case results in O(M) operations. If such a channel is not present, one is selected randomly. Thus the complexity of the enqueue process implemented here is O(M). The process can be optimized in a future version in such a way that the channel selection is based on some indicators like load or number of backlogged flows. However, there is no straightforward way to select the channel on which the flow will receive the fastest service without keeping extra variables. The selectChannel function can also be implemented as a simple FIFO, which would result in O(1) complexity, but some efficiency will be lost as it can happen that the selected channel to be informed is currently busy.

Table 3. BDRR enqueue process.
1. i = packet.Flow();
2. Queue_i.Insert(packet);
3. IF (UnusedChannels_i > 0)
4.   m = selectChannel(UnusedChannels_i)
5.   BackloggedFlows[m].pushback(i);
6.   n_i++;
7.   DC_i^m = Q_i^m;
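Below is a small Python rendering of the enqueue step of Table 3 together with the bookkeeping of Eq. (9). Class and method names (`BDRR`, `enqueue`) are ours, and the random fallback stands in for the selectChannel search; it is a sketch, not the authors' implementation.

```python
import random
from collections import deque

class BDRR:
    def __init__(self, channels, bonding_group, quantum):
        self.channels = channels                    # list of channel ids
        self.bg = bonding_group                     # flow -> set of channels M_i
        self.quantum = quantum                      # (flow, m) -> Q_i^m in bytes
        self.queue = {f: deque() for f in bonding_group}                 # one queue per flow
        self.unused = {f: set(chs) for f, chs in bonding_group.items()}  # UnusedChannels_i
        self.notified = {f: 0 for f in bonding_group}                    # n_i
        self.deficit = {}                           # (flow, m) -> DC_i^m
        self.backlogged = {m: deque() for m in channels}                 # BackloggedFlows_m
        self.busy = {m: False for m in channels}    # is a transmission scheduled on channel m?

    def enqueue(self, flow, packet_size):
        """Table 3: insert the packet and, if possible, notify one more channel."""
        self.queue[flow].append(packet_size)
        if self.unused[flow]:
            idle = [m for m in self.unused[flow] if not self.busy[m]]
            m = random.choice(idle) if idle else random.choice(sorted(self.unused[flow]))
            self.unused[flow].remove(m)
            self.backlogged[m].append(flow)                  # line 5: pushback(i)
            self.notified[flow] += 1                         # line 6: n_i++
            self.deficit[(flow, m)] = self.quantum[(flow, m)]  # line 7: DC_i^m = Q_i^m
```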

6.2. Dequeue process

The dequeue process is called independently at each channel scheduler at the events of a flow being added to a channel's empty BackloggedFlows list or of a packet finishing transmission on the channel. The process remains active as long as the list is not empty. The pseudo code is given in Table 4.

Table 4. BDRR dequeue process for channel m.
1. WHILE (BackloggedFlows list for channel m NOT empty)
2.   i = BackloggedFlowsLists[m].popfront();
3.   WHILE (Queue_i NOT empty AND nextPacketLength_i <= DC_i^m AND Size(Queue_i) >= n_i)
4.     packet = Queue_i.pophead();
5.     packet.stamp(SequenceNumber_i);
6.     SequenceNumber_i++;
7.     send(packet);
8.     DC_i^m = DC_i^m - packet.Length();
9.   IF (Queue_i NOT empty AND Size(Queue_i) >= n_i)
10.    BackloggedFlowsList[m].pushback(i);
11.    DC_i^m += Q_i^m;
12.  ELSE
13.    addChannel(m, UnusedChannels_i);
14.    n_i--;
15.    DC_i^m = 0;

After selecting the next flow from the BackloggedFlows list, its packets are processed as long as three conditions are met. The first, obvious one is that there are packets in the queue of the flow. The second one is that the length of the next packet is less than or equal to the deficit counter. These conditions are the same as for the DRR scheduler. The third condition is that there are at least as many packets in the queue as the number of channels on which the flow has a scheduled transmission, given by $n_i$. This ensures that when the same flow is selected for service on a different channel, there will be at least one packet to be transmitted. As long as the conditions are met, packets are popped from the flow's queue, stamped with the sequence number and sent on the channel. The deficit counter of this flow for the channel is reduced by the size of the transmitted packet and the sequence number counter is increased by one. When any of the conditions no longer holds, the process moves on to update the variables. If the only condition which is not met is that the next packet does not fit in the deficit counter, then the flow is inserted at the back of the channel's BackloggedFlows list for another round of service and the deficit counter is increased by the quantum for this channel. Otherwise the flow does not require any more service from the channel. This case covers the situations where either there are no more packets in the queue or the number of packets is less than the number of channels on which the flow is considered backlogged. In both cases the flow should no longer be considered backlogged on this channel, thus its deficit counter is set to 0, the channel is added to the UnusedChannels$_i$ set and the counter of notified channels is decreased. Note that, as there are common variables read and updated by the different channel schedulers, concurrency issues must be accounted for in order to avoid deadlocks [31]. To ensure correct operation, when a scheduler checks the queue size of flow $i$ on line 3 it should be able to continue to line 4 without any other scheduler accessing the same Queue$_i$. The same should hold for lines 5 and 6. Also the operations between lines 12 (9) and 14 must be performed without other channel schedulers accessing the $n_i$ variable. The longest operation is the sending of the packet on line 7. Thus the scheduler has to ensure that this operation can be done in parallel, i.e., that when a packet is transmitted on one channel the dequeue process on the other channels can be executed simultaneously. All the other operations perform in constant time, thus any locks on lines 3-6 and on the variable updates will pose very low overhead in comparison with the packet transmission. The reported algorithm for the dequeue process guarantees that the condition given by Eq. (9) is satisfied after it finishes. The algorithm has O(1) complexity. Thus the total complexity of BDRR is O(M).
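Continuing the earlier sketch, the method below mirrors Table 4 for one channel; locking is omitted and `transmit` is a placeholder callback, so this only illustrates the control flow, not a concurrent implementation.

```python
# A possible dequeue step for the BDRR sketch above (Table 4, single channel m).
def dequeue(self, m, transmit):
    while self.backlogged[m]:
        flow = self.backlogged[m].popleft()
        q = self.queue[flow]
        # Serve while: packets remain, the next one fits in DC, and there are
        # at least n_i packets so every notified channel keeps one to send.
        while q and q[0] <= self.deficit[(flow, m)] and len(q) >= self.notified[flow]:
            size = q.popleft()
            transmit(flow, size)                               # stamp with sequence number + send
            self.deficit[(flow, m)] -= size
        if q and len(q) >= self.notified[flow]:
            self.backlogged[m].append(flow)                    # another round on this channel
            self.deficit[(flow, m)] += self.quantum[(flow, m)]
        else:
            self.unused[flow].add(m)                           # channel no longer used by the flow
            self.notified[flow] -= 1
            self.deficit[(flow, m)] = 0

BDRR.dequeue = dequeue   # attach to the sketch class, for illustration only
```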
7. Theoretical analysis

For flows which can receive packets only on a single channel, both algorithms behave like single channel DRR. In order to be able to guarantee the rates of these flows, the following inequality must hold for each channel m:

$$\sum_{i=1}^{N^m} r_i^m \le R^m, \quad (10)$$

where $R^m$ is the channel's link rate and $N^m$ is the number of flows which have a reserved rate on the channel, as given by Eq. (4). The frame size from Eq. (5) is bounded by

$$f^m \le \frac{R^m}{r_{\min}} Q_{\min} = F^m, \quad (11)$$

where $F^m$ denotes the maximum frame size. These are the same relations between the parameters as those for DRR scheduling on single channels. Thus, selecting the rate partitions as given by Eq. (10) guarantees the reserved rates for the flows assigned to only one channel under both algorithms. Note that with OutQ-DRR the packets from one flow are not transmitted in order. BDRR, on the other hand, transmits the packets on the DS link in the order of their arrival. Still, some reordering at the receiver is possible due to the different channel rates and packet sizes. That is, while packet k with length $L_{i,k}$ is being transmitted on a channel, packets from flow $i$ with index greater than k could be transmitted. For a given channel, for these packets to arrive before $L_{i,k}$, their transmission must take less time than the transmission of $L_{i,k}$. In the worst case, $L_{i,k}$ is transmitted over the slowest channel, of capacity $R_{\min}$. Thus, it takes $L_{i,k}/R_{\min}$ seconds to transmit this packet. During this time, a total of $L_{i,k} R^m / R_{\min}$ bytes of packets whose index is greater than k can be transmitted on channel m. Hence, the number of bytes $b_{i,k}$ of the packets with index greater than k that arrive before $L_{i,k}$ can be at most

$$b_{i,k} \le \frac{L_{i,k}}{R_{\min}} \sum_{m=2}^{M} R^m \le L_{\max}\left(\frac{R}{R_{\min}} - 1\right) \quad (12)$$

bytes. Here $R = \sum_{m=1}^{M} R^m$ is the total capacity of all channels. This bound will be achieved when only packets of the considered flow are transmitted on the network.

If, as is typical in practical applications, the maximum rate at which a flow can transmit is restricted by some $R_{\max}$, $b_{i,k}$ will be bounded by $L_{\max}\left(\frac{R_{\max}}{R_{\min}} - 1\right)$. Following the same considerations, and not accounting for the processing delays at both sides, a packet of size $L_{i,k}$ will be delayed in the buffer of the CM due to reordering by at most

$$d_{reorder} \le \frac{L_{\max}}{R_{\min}} - \frac{L_{i,k}}{R_{\max}}. \quad (13)$$

The second term is the time it takes for the full packet k to be received, and the first term is the maximum time it takes a packet with index less than k to arrive at the receiver. Further on, we investigate the delay of the BDRR scheduler; more specifically, we calculate the latency and the maximum delay bound. In [32] a general model, called the latency-rate server (LR-server), was defined for the analysis of scheduling algorithms. The theory of LR-servers, further detailed in [33], provides a means to describe the worst-case behavior of a broad class of scheduling algorithms in a simple and elegant manner. The latency $\theta^S$ of a scheduling algorithm S is defined in [33] as the minimum non-negative constant that satisfies

$$W_i(t_0, t) \ge \max\left(0,\; r_i\left(t - t_0 - \theta^S\right)\right), \quad (14)$$

where $t_0$ is the start of a busy period, t is any time instant within this period, and $W_i(t_0,t)$ is the amount of service received by session i in the time interval $(t_0,t)$. A busy period is defined as a maximal interval of time $(s_1,s_2)$ such that for any time $t \in (s_1,s_2)$ packets arrive for flow i with rate greater than or equal to its guaranteed rate $r_i$, or

$$A_i(t_0, t) \ge r_i (t - t_0). \quad (15)$$

In other words, the latency of a scheduler is the time that a flow has to wait until it begins receiving service at its guaranteed rate. The LR-server theory has been used to derive the latencies of many scheduling algorithms, for example for DRR in [34], for surplus round robin in [35], for virtual clock and self-clocked fair queuing in [33], for Pre-order DRR in [36] and for elastic round robin in [27]. From the latency, the maximum packet delay $D_{\max}$ for a leaky-bucket shaped traffic flow with parameters $(\sigma_i, r_i)$ transmitted over multiple channels is obtained by

$$D_{\max} \le \frac{\sigma_i}{r_i} + \theta^S + d_{reorder}. \quad (16)$$

For the multi-channel BDRR the latency is given by the following theorem.

Theorem 7.1. Bonded DRR is a latency-rate server with latency determined by

$$\theta^{BondedDRR} = \max_{m \in \mathcal{M}_i} \theta^{m,DRR} + \frac{(M_i - 1) L_{\max}}{r_i},$$

where $M_i$ is the number of channels flow i is assigned to, $r_i$ is its guaranteed rate, $L_{\max}$ is the maximum packet size on the network and $\theta^{m,DRR}$ is the latency of the DRR scheduler on channel m.

Proof. Consider a channel bonded system with M channels, total bandwidth capacity R and a BDRR scheduler. Let the bandwidth of channel m be $R^m$, and thus $R = \sum_{m=1}^{M} R^m$. Suppose $(t_0, t_1)$ is a busy period of flow i coinciding with the start of a backlogged period at $t_0$, i.e. $q_i(t_0) = 0$, and t is some time instant within this busy period, $t_0 < t \le t_1$. The bonding group of flow i consists of $M_i$ channels and its reserved rate $r_i$ is distributed over these channels, resulting in channel rates according to Eq. (4), which are bounded by Eq. (10). The number of channels $n_i$ on which the flow is assigned at each moment is given by Eq. (9). Depending on the number of assigned channels $n_i$ at time t, two cases can be distinguished. Case 1: $n_i \le M_i - 1$. From Eq. (9) it follows that there are at most $M_i - 1$ packets in the queue.
Thus, for the queue state expressed in bits at time t we can write

$$q_i(t) \le (M_i - 1) L_{\max}. \quad (17)$$

On the other hand, the queue state depends on the arrivals and departures through the relation

$$q_i(t) = A_i(t_0, t) - W_i(t_0, t) + q_i(t_0), \quad (18)$$

where $A_i(t_0,t)$ is the number of bits that arrived for flow i during the interval $(t_0,t)$, $W_i(t_0,t)$ is the number of bits that were transmitted, i.e., the amount of service scheduled to flow i in this period, and $q_i(t_0)$ is the queue state at time $t_0$. Expressing $W_i$ from Eq. (18) and taking into account that the queue state at time $t_0$ is 0, the service received by flow i in the interval $(t_0,t)$ can be bounded by

$$W_i(t_0,t) = A_i(t_0,t) - q_i(t) + q_i(t_0) \ge A_i(t_0,t) - (M_i-1)L_{\max} \ge r_i(t - t_0) - (M_i-1)L_{\max}. \quad (19)$$

The first inequality comes from Eq. (17) and $q_i \ge 0$. The last inequality comes from the definition of a busy period (see Eq. (15)). Case 2: $n_i = M_i$. In this case the queue state at time t cannot be bounded. In the period $(t_0,t)$ there can be a number of intervals which satisfy this condition. Without loss of generality, let $t_i^M$ be the time instant when the number of packets $p_i$ in the queue of flow i becomes $M_i$ and remains greater than or equal to $M_i$ until t. The service received by flow i in the interval $(t_0,t)$ can be split with respect to $t_i^M$ as

$$W_i(t_0,t) = \sum_{m=1}^{M_i} W_i^m(t_0, t_i^M) + \sum_{m=1}^{M_i} W_i^m(t_i^M, t). \quad (20)$$

Up to time $t_i^M$ the number of packets in the queue is at most $M_i - 1$. The same is valid for the number of channels to which the flow is subscribed, which corresponds to Case 1. Thus the work done in the interval $(t_0, t_i^M)$, $W_i(t_0, t_i^M)$, is bounded as in Eq. (19). Accounting for the considered interval, this gives a lower bound on the first term of Eq. (20) in the form

$$\sum_{m=1}^{M_i} W_i^m(t_0, t_i^M) = W_i(t_0, t_i^M) \ge r_i\left(t_i^M - t_0 - \frac{(M_i - 1)L_{\max}}{r_i}\right). \quad (21)$$

In order to find a bound on the second term of Eq. (20) we have to take into account that the intervals $(t_i^M, t)$ are backlogged intervals for the corresponding channels. Thus the service on each of the channels can be bounded by the bounds for the scheduler on a single channel, if such bounds exist. In the proposed bonded DRR algorithm the scheduler on each channel is DRR. The latency $\theta^{DRR}$ was derived in [33,34] based on a backlogged period. Thus the service received on each channel m in the backlogged period $(t_i^M, t)$ is bounded by the DRR latency:

$$W_i^m(t_i^M, t) \ge \max\left(0,\; r_i^m\left(t - t_i^M - \theta^{DRR,m}\right)\right). \quad (22)$$

The service received on channel m in a backlogged period $(t_i^M, t)$ is related to the latencies of all channels by the bound

$$W_i^m(t_i^M, t) \ge \max\left(0,\; r_i^m\left(t - t_i^M - \theta^{DRR,m}\right)\right) \ge \max\left(0,\; r_i^m \min_{k \in \mathcal{M}_i}\left(t - t_i^M - \theta^{DRR,k}\right)\right) \ge \max\left(0,\; r_i^m\left(t - t_i^M - \max_{k \in \mathcal{M}_i} \theta^{DRR,k}\right)\right). \quad (23)$$

Summing Eq. (23) over all assigned channels $m \in \mathcal{M}_i$ gives

$$\sum_{m=1}^{M_i} W_i^m(t_i^M, t) \ge \max\left(0,\; r_i\left(t - t_i^M - \max_{k \in \mathcal{M}_i} \theta^{DRR,k}\right)\right). \quad (24)$$

Replacing the bounds given by Eqs. (24) and (21) in Eq. (20), the lower bound on the service in a period $(t_0,t)$ is obtained as

$$W_i(t_0,t) \ge \max\left(0,\; r_i\left(t - t_0 - \frac{(M_i-1)L_{\max}}{r_i} - \max_{m \in \mathcal{M}_i} \theta^m\right)\right). \quad (25)$$

From the definition of an LR-server (Eq. (14)), the latency is expressed from Eq. (25) as the one stated in the theorem. To conclude the proof and show that this latency bound is tight, an example where it is actually achieved is given. Consider a system with M channels and a flow i assigned to $M_i$ channels. Let all the channels in the set $\mathcal{M}_i$ guarantee the same latency for flow i, $\theta^1 = \theta^2 = \dots = \theta^{M_i}$, except channel m, which guarantees a higher latency, i.e., $\theta^m > \theta^1$. Fig. 5 shows an illustration of the considered example for the latency bound of the BDRR algorithm (Fig. 5. Illustration of the latency bound). At time $t_0$ a packet arrives for flow i. Channel 1 is notified and the flow is inserted in its BackloggedFlows$_1$ list. Suppose that in the interval $(t_0, t_i^m)$, $M_i - 1$ more packets arrive and the flow receives no service. As a result, at time $t_i^m$ all channels from the DCS of the flow are notified, the last being m at time $t_i^m$. The time instant $t_0 + \theta^1$ marks the time when the flow starts to receive service on channel 1 at its guaranteed rate for the channel, $r_i^1$.

The time instant $t_i^m + \theta^m$ marks the worst-case time instant at which the flow starts to receive service at its reserved rate $r_i^m$ for channel m. In the interval $(t_0 + \theta^1,\; t_i^m + \theta^m)$ it starts to receive its reserved rate also on all the other channels. This follows from $\theta^m$ being the maximum latency. Thus $t_i^m + \theta^m$ is the time instant where the flow starts to receive service at its guaranteed rate $r_i$. In the time interval $(t_0, t_i^m)$, $M_i - 1$ packets arrived, which in the worst case is $(M_i - 1)L_{\max}$ bits. As this interval is part of a busy period for the flow, the minimum arrival rate in this interval is $r_i$ and thus in the worst case

$$t_i^m - t_0 \le \frac{(M_i - 1)L_{\max}}{r_i}.$$

Thus, when this is added to the latency of the m-th channel, which is the channel with maximum latency in the system, it can easily be verified that the latency bound is exactly met. This concludes the proof. □

Suppose the maximum channel latency is achieved on channel m, where the DRR parameters are $F^m, Q_i^m, N^m, R^m, r_i^m$. Replacing the latency of the DRR scheduler derived in [34,18] in the latency expression for BDRR given by Theorem 7.1 results in

$$\theta^{BDRR} = \frac{F^m - Q_i^m + (N^m - 2)(L_{\max} - 1)}{R^m} + \frac{L_{\max} - 1}{r_i^m} + \frac{(M_i - 1)L_{\max}}{r_i}. \quad (26)$$

A number of conclusions can be drawn from the latency bound of BDRR. Firstly, the latency of the bonded system depends on the latency of the schedulers on the different channels. Thus, to minimize the latency of the bonded algorithm, the latency on the single channels has to be minimized, taking into account the frame length, the channel rate, the number of flows assigned to the channel and the partitioning of the flows' rates over the channels. Secondly, it indicates that the higher the number of channels a flow is assigned to, the higher the latency. Thus, for a flow with a reserved rate of say 4 Mb/s, with respect to delay performance it will be better to assign it to only 1 channel than to spread it over 4 channels with a reserved rate of 1 Mb/s on each of them. However, if assigned to only one channel, the peak rate the flow can get will be restricted. In practice, flows from time-critical applications like voice or gaming are better assigned to a single channel. Flows from applications with highly variable traffic are better assigned to multiple channels.
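As a rough numerical illustration of Eq. (26) (our own back-of-the-envelope check, using parameters of the same order as the simulation setup in Section 8 and converting everything to bits), consider a 4-channel flow with $r_i = 4$ Mb/s and $r_i^m = 1$ Mb/s on 40 Mb/s channels carrying 31 flows each:

```python
L_MAX = 1518 * 8              # bits, maximum packet size
Q_MIN = L_MAX                 # minimum quantum equals L_max
R_CH  = 40e6                  # b/s, channel rate R^m
R_MIN_FLOW = 1e6              # b/s, minimum per-channel reserved rate r_min
N_M   = 31                    # flows with a reserved rate on the channel
M_I   = 4                     # channels in the flow's bonding group
r_im  = 1e6                   # b/s, flow's reserved rate on the channel
r_i   = 4e6                   # b/s, flow's total reserved rate

F_m  = (R_CH / R_MIN_FLOW) * Q_MIN       # Eq. (11): maximum frame size, in bits
Q_im = (r_im / R_MIN_FLOW) * Q_MIN       # Eqs. (1)/(3): flow's quantum on the channel

theta = ((F_m - Q_im + (N_M - 2) * (L_MAX - 1)) / R_CH   # Eq. (26), single-channel DRR part
         + (L_MAX - 1) / r_im
         + (M_I - 1) * L_MAX / r_i)                       # multi-channel term
print(round(theta, 3))        # roughly 0.04 s for these illustrative numbers
```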
For flows which can receive on only one channel, OutQ-DRR is a latency-rate server, as the service received by the flow on that channel, say m, can be bounded by

$$W_i(t_0,t) \ge \max\left(0,\; r_i^m\left(t - t_0 - \theta^{DRR}\right)\right). \quad (27)$$

However, for flows which can receive on more than one channel, OutQ-DRR is not a latency-rate server. Even though the DRR scheduler guarantees a minimum rate on each separate channel a flow is subscribed to, OutQ-DRR does not guarantee that the packets of a flow will be distributed over all channels of its bonding group. As a result, the minimum rate allocated to the flow might be less than the sum of the minimum reserved rates on all assigned channels. Consider a flow with reserved rate $r_i$, with leaky bucket shaped traffic in the backlogged period $(t - t_0)$:

$$A_i(t_0,t) \le \sigma_i + r_i(t - t_0). \quad (28)$$

Let $d_i^k$ be the moment packet k from flow i is served and $a_i^k$ the moment of its arrival. Then the amount served in the period $(t_0, d_i^k)$ is equal to the amount of traffic that arrived for flow i in the period $(t_0, a_i^k)$, i.e.

$$A_i(t_0, a_i^k) = W_i(t_0, d_i^k). \quad (29)$$

In the worst case the packets of a backlogged flow will be transmitted on only one channel. Thus, replacing the bounds from Eqs. (27) and (28) results in

$$\sigma_i + r_i\left(a_i^k - t_0\right) \ge r_i^m\left(d_i^k - t_0 - \theta^{DRR}\right). \quad (30)$$

After some simple calculations one obtains for the maximum packet delay of such traffic

$$D_{\max}^{OutQ\text{-}DRR} = d_i^k - a_i^k \le \frac{\sigma_i}{r_i^m} + \theta^{DRR} + \left(\frac{r_i}{r_i^m} - 1\right)\left(a_i^k - t_0\right). \quad (31)$$

The bound increases with the length of the backlogged period $a_i^k - t_0$. Thus time-critical flows served under OutQ-DRR should be assigned to a single channel. The queue length at time instant t in the worst case will be

$$q_i(t) = A_i(t_0,t) - W_i(t_0,t) \le \sigma_i + \left(r_i - r_i^m\right)(t - t_0) + r_i^m \theta^{DRR} \quad (32)$$

and also accumulates with the length of the backlogged period. The latency of the MS-DRR scheduler proposed in [13], discussed in Section 2.2, does not increase with the number of channels on which a flow can be transmitted. Thus, for flows which can use all channels the MS-DRR scheduler provides lower latency than BDRR or OutQ-DRR.

8. Simulation results

A variety of simulation results clarifying and comparing the performance of the two algorithms are presented in this section. An HFC network with 4 DS channels and the DOCSIS 3.0 MAC protocol is simulated. The simulation program is implemented using the OMNET++ [37] event simulator. The simulated system has 4 channels, each having the same DS bandwidth of $R^m$ = 40 Mb/s, resulting in a total DS system capacity of R = 160 Mb/s. Further on, the term k-channel flow refers to an SF that can transmit simultaneously on k channels, i.e. its bonding group consists of k channels. Specifically, in this study a straightforward rule to assign the flows' channel rates is selected. Namely, for flows assigned to a bonded group of k channels, the reserved rate on any one of the channels is determined by $r_i^m = r_i / k$. It is straightforward to see that the rule from Eq. (4) is satisfied. There are 100 active flows, of which 8 are 4-channel flows with reserved rate $r_i$ = 4 Mb/s distributed equally amongst the 4 channels, i.e. $r_i^m$ = 1 Mb/s. There are also 12 1-channel flows with $r_i$ = 4 Mb/s and 80 1-channel

flows with $r_i$ = 1 Mb/s, divided equally amongst the channels. Thus on each channel there are $N^m$ = 31 flows assigned and the total channel reserved rate is $\sum_i^{N^m} r_i^m$ = 40 Mb/s. The minimum quantum $Q_{\min}$ on all channels is set to the maximum packet size, which on a DOCSIS network is $L_{\max}$ = 1518 bytes without the overhead. The minimum channel reserved rate is $r_{\min}$ = 1 Mb/s. From Eq. (5) we can calculate the frame size of a round for each channel, with $\sum_i r_i^m = R^m$, to be approximately 12 ms. The packet delay is measured from the moment a packet enters the scheduler, i.e. after the traffic shaper if such is present, until it is ready to be transmitted from the CM to its client, i.e. after the packet order is restored at the CM. Thus any transmission and reordering delays are also taken into account. For each flow the delay is measured over more than a million packets. Each point on the graphs is obtained by averaging the packet delay over all flows in the corresponding group. The traffic generation process for each flow, unless otherwise stated, is ON/OFF with exponentially distributed ON times with mean 0.01 s and exponentially distributed OFF times with mean 0.99 s. The inter-arrival time of the packets is exponentially distributed with values depending on the desired load. The rate of the flows is increased correspondingly in order to obtain the desired total load on the channels and is varied in the range [0.33, 0.9] times the flow's reserved rate. In the first simulated scenario four 4-channel flows have a leaky bucket traffic source with maximum burst size $\sigma_i$ = 4 Mb and a generation rate of 4 Mb/s, thus equal to the flows' reserved rate. For this traffic we can calculate the maximum delay from Eq. (16), taking into account the latency from Eq. (26) and the reorder delay from Eq. (13). For the maximum delay of a leaky bucket shaped 4-channel traffic flow with parameters (1 Mb, 4 Mb/s) and reserved rate 4 Mb/s one obtains $D_{\max}$ = 1.04 s. The leaky bucket shaped 4-channel flows become active some time after the other flows, and all flows assigned to channel 1 become active 0.01 s after them. In Fig. 6 the average and the maximum packet delay of the 4 leaky bucket shaped 4-channel flows under the BDRR and the OutQ-DRR scheduling disciplines are shown (Fig. 6. Packet delay of leaky bucket shaped traffic for 4-channel flows). As we can see, the maximum delay under the BDRR algorithm remains under the calculated maximum limit. There is no limit on the maximum delay under the OutQ-DRR algorithm, and at higher load it becomes higher than the limit for BDRR. To provide better insight into the scheduling mechanisms which give rise to these delays for the two algorithms, Fig. 7 gives the throughput of flow 0, which is one of the leaky bucket shaped flows, in the first 4 s of its active period (Fig. 7. The achieved throughput for one 4-channel flow vs. time under (a) OutQ-DRR and (b) BDRR scheduling). The traffic arrives in bursts of 1 Mb. In the first 0.01 s of flow 0's active period the 1-channel flows assigned to channel 1 are not yet active. They start their active periods at 1.02 s. Thus in the first 0.01 s of flow 0's active period only it can transmit on channel 1. Hence the peak in the throughput (see Fig. 7a). The packet distributor of the OutQ-DRR algorithm has to distribute all the packets in the burst to the channels. Correspondingly, the majority of the packets are queued on channel 1 because there are very few other flows with backlogged packets. At time 1.3 s all the packets from flow 0 queued on channels 2, 3, and 4 have been transmitted. If the load on channel 1 had not changed, the packets on channel 1 would also have been transmitted.
However, all other flows with reserved rates on channel 1 became active at time 1.02 s; hence, the rate at which the packets of flow 0 are scheduled on channel 1 after 1.02 s is less than the rate estimated at the time of the arrival of the packets, and their delay is higher than estimated. The BDRR algorithm serves flow 0 on all channels with a minimum rate of 1 Mb/s. Consequently, by 1.85 s the flow's backlog is cleared under BDRR (see Fig. 7b), in contrast with OutQ-DRR. This scenario demonstrates that OutQ-DRR fails to utilize the full system capacity when the traffic is bursty. BDRR, on the other hand, distributes the packets of the 4-channel flows over all 4 channels when there are at least 4 backlogged packets, thus utilizing all the available resources in these periods. When the next burst arrives there will be some remaining packets to be transmitted on channel 1 under OutQ-DRR scheduling, thus more packets in comparison with the previous burst will be scheduled on channels 2, 3 and 4. As can be seen from Fig. 7a, as time advances it takes longer for the bursts scheduled on these channels to be transmitted. With the further advance of time the average load on all channels is equalized; moreover, it has less variation. In such conditions OutQ-DRR estimates the expected waiting time well and provides an average throughput similar to the throughput under BDRR. This is further confirmed by the results for the packet delay for the traffic from bursty, not shaped flows shown in Fig. 8 (Fig. 8. Packet delay vs. total load for 4-channel flows with weight w = 4). For both algorithms the average and maximum delays are similar, though for the 4-channel flows scheduled under OutQ-DRR the average delay is higher than when scheduled under BDRR, due to the different way the two algorithms treat bursts. The delay for the one-channel flows is of course the same under the two algorithms, as they both use DRR as the channel scheduler. The low delay of the OutQ-DRR algorithm, similar to the BDRR one, however, comes at the expense of a large amount of packets transmitted out of order. The size of the reorder buffer at the client side, i.e. at the CM, is shown in Fig. 9 (Fig. 9. Reorder buffer size at the receiver). The size of the buffer under OutQ-DRR scheduling is in the order of 1 Mbyte for average loads and is dependent on the traffic rate of the flow. The maximum size of the buffer when the BDRR scheduler is used remains less than one maximum packet size.

9. Conclusions

The paper proposed two scheduling algorithms derived from the deficit round robin scheduler, applicable to packet scheduling of flows over multiple channels in a point-to-multipoint network. Unlike existing algorithms, which presume that all flows can be transmitted on all channels, the proposed algorithms are designed to schedule flows which use different numbers of channels. The OutQ-DRR algorithm, which uses output queuing, requires one queue per flow per channel. The transmission channel is selected upon packet arrival by estimating the delay the packet will incur on each channel and subsequently choosing the one with the lowest value. The packets are not transmitted in order; thus, the sequence number must be stored with them. We have shown in our analysis that OutQ-DRR is not a latency-rate server for flows which can be transmitted on multiple channels. Thus, if the network needs to provide a delay guarantee to a flow or to limit its packet reordering, the flow has to be assigned to one channel, in which case OutQ-DRR bounds the latency and packet reordering. The preferred solution, bonded DRR, uses input queuing; hence, it needs only one queue per flow. The packets are transmitted in order and are stamped with the sequence number just before transmission.
It is proven that BDRR bounds the latency and packet reordering, and the corresponding bounds are derived. The performed simulations show that both algorithms provide low average packet delay. For the OutQ-DRR algorithm, however, this comes at the expense of a large number of packets transmitted out of order, resulting in a large reorder buffer at the client side. In the case of multi-channel flows with traffic containing large bursts, OutQ-DRR cannot guarantee an equal distribution of the packets among the channels the flow can receive on. This results in long periods during which the full system capacity is not utilized. BDRR, on the other hand, distributes the packets from multi-channel flows over all the channels they can use regardless of the traffic burstiness.
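To make the OutQ-DRR selection rule described above concrete, the sketch below picks, on packet arrival, the channel with the smallest estimated waiting time among the channels assigned to the flow. The estimator used here (queued bits divided by channel rate) and all identifiers (select_channel, channel_backlog_bits, channel_rate_bps) are our own illustrative assumptions, not the paper's.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact estimator):
# on packet arrival, choose the transmission channel with the smallest
# estimated waiting time among the channels assigned to the flow. The waiting
# time is approximated here by the backlog already queued on a channel
# divided by that channel's rate.

def select_channel(flow_channels, channel_backlog_bits, channel_rate_bps):
    """Return the channel with the lowest estimated waiting time."""
    def estimated_wait(ch):
        return channel_backlog_bits[ch] / channel_rate_bps[ch]
    return min(flow_channels, key=estimated_wait)

# Example: a 4-channel flow choosing among channels 0-3, each running at 40 Mb/s.
backlog_bits = {0: 1_200_000, 1: 400_000, 2: 800_000, 3: 2_000_000}
rates_bps = {0: 40e6, 1: 40e6, 2: 40e6, 3: 40e6}
print(select_channel([0, 1, 2, 3], backlog_bits, rates_bps))  # -> 1
```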
Dessislava Nikolova received a Master in Physics degree from the University of Sofia, Bulgaria. She joined Alcatel Bell in 2000 as a research engineer on optical access networks. Since 2002 she has been a research fellow at the Department of Mathematics and Computer Science at the University of Antwerp, Belgium, where in 2010 she obtained a PhD in Computer Science. Her thesis was on the design and performance analysis of scheduling algorithms for optical and cable access networks. She has one patent for a medium access protocol for passive optical networks. Her current research interests are nanonetworks, plasmonics and new technologies for optical and cable networks.

Chris Blondia obtained his Master in Science and Ph.D. in Mathematics, both from the University of Ghent (Belgium), in 1977 and 1982, respectively. In 1983 he joined Philips Belgium, where he was a researcher between 1986 and 1991 in the Philips Research Laboratory Belgium (PRLB) in the group Computer and Communication Systems. Between August 1991 and the end of 1994 he was an Associate Professor in the Computer Science Department of the University of Nijmegen (The Netherlands). In 1995 he joined the Department of Mathematics and Computer Science of the University of Antwerp, where he is currently a Professor and head of the research group Performance Analysis of Telecommunication Systems (PATS). He has been a member of many program committees of international conferences. He has been and is currently involved in many national and European research programs. He is a member of IFIP W.G. 6.3 on Performance of Computer Networks and editor of the Journal of Network and Computer Applications. His main research interests are related to mathematical models for performance evaluation of computer and communication systems and the impact of the performance on the architecture of these systems. The systems studied are related to traffic management in multi-service networks, mobility management in IP networks, content distribution networks, access control in access networks, both wired and wireless, and optical networks. He has published a substantial number of papers in international journals and conferences in these research areas.
