
Performance improvement of active queue management with per-flow scheduling

Masayoshi Nabeshima, Kouji Yata
NTT Cyber Solutions Laboratories, NTT Corporation
1-1 Hikari-no-oka, Yokosuka-shi, Kanagawa 239-0847, Japan
{nabeshima.masayoshi, yata.kouji}@lab.ntt.co.jp

Abstract

Active queue management (AQM) has two main objectives. One is to achieve high throughput and low average queueing delay simultaneously. The other is to achieve fair bandwidth allocation among competing flows. Although many algorithms have been proposed and investigated in an effort to fulfil these goals, there are few studies on AQM with per-flow scheduling. This is because longest queue drop (LQD), which is commonly used as a queue management discipline with per-flow scheduling, has good performance in terms of throughput and fairness. However, LQD suffers from long queueing delays. In addition, it cannot support explicit congestion notification (ECN). Accordingly, for AQM with per-flow scheduling, we can specify the following requirements: (1) it simultaneously achieves lower average queueing delays than LQD and the same throughput as LQD; (2) it achieves the same degree of fairness as LQD; (3) it achieves lower packet loss ratios than LQD when flows are ECN capable. This paper proposes a new AQM discipline that fulfils these requirements. In the proposed mechanism, packets are dropped probabilistically before the buffer is full. The dropping probability is maintained for each active flow, and the probability of the flow with the longest queue length is increased when the network is congested. Simulations confirm that our mechanism meets the requirements set for AQM with per-flow scheduling.

1 Introduction

The number of people who use the Internet has been increasing. There is no doubt that the Internet is becoming a public communication infrastructure that rivals the public switched telephone network (PSTN). However, some essential features must be provided if it is to be accepted more widely. One is to allocate bandwidth fairly among competing flows. Queue management plays an important role in fair bandwidth allocation. From the viewpoint of when packets are dropped, queue management can be classified into two categories.

The first category is passive queue management (PQM), which does not drop packets until the buffer is full. Drop tail is a well-known example of PQM. The second category is active queue management (AQM) [1], which drops packets probabilistically before the buffer is full. AQM has two main objectives. One is to achieve high throughput and low average queueing delay simultaneously. The other is to achieve fair bandwidth allocation among competing flows.

Many algorithms have been proposed and investigated in an effort to fulfil the goals of AQM. They can be classified into three types: AQM without per-flow information (AQM1), AQM with per-flow information (AQM2), and AQM with per-flow scheduling (AQM3). AQM1 and AQM2 use a single queue for all flows, while AQM3 uses a separate queue for each flow. AQM2 and AQM3 maintain per-flow information while AQM1 does not. In general, AQM1 is the least complex to implement, but it has the worst performance as regards fair bandwidth allocation. Conversely, AQM3 is the most complex to implement, but it achieves the fairest bandwidth allocation. AQM2 provides an intermediate solution. Random early detection (RED) [2], BLUE [3], and random early marking (REM) [4] are examples of AQM1. Flow RED (FRED) [5], balanced RED (BRED) [6], and BRED with virtual buffer occupancy (BRED/VBO) [7] are examples of AQM2.

However, few AQM3 schemes have been proposed. This is because longest queue drop (LQD), which is commonly used as a queue management discipline with per-flow scheduling, has good performance in terms of throughput and fairness. The authors in [8] show that LQD with per-flow scheduling achieves much fairer bandwidth allocation than RED with per-flow scheduling. In the LQD discipline, when a new packet arrives at a router and the buffer is full, the router chooses the flow with the longest queue length and drops the front packet from the buffer of the chosen flow. The LQD discipline is classified as PQM because no packets are dropped until the buffer is full. Thus, it has two drawbacks. One is that the queue is likely to be full for a long period of time, which results in long queueing delays. The other is that the LQD discipline cannot support explicit congestion notification (ECN) [9]. To support ECN, routers are required to mark a packet before the buffer is full. This means that AQM routers can support ECN while PQM routers cannot. Since the LQD discipline is PQM, it cannot provide end-nodes with any of the advantages of ECN.

Therefore, the requirements for AQM3 can be specified as follows. The first is to simultaneously achieve lower average queueing delays than LQD and the same throughput as LQD. The second is to achieve the same degree of fairness as LQD. The third is to achieve lower packet loss ratios than LQD when flows are ECN capable.

This paper proposes a new AQM3 discipline that satisfies these requirements. In our mechanism, packets are dropped probabilistically before the buffer is full. The dropping probability is maintained for each active flow, and the probability of the flow with the longest queue length is increased when the network is congested. Flows whose sending rate exceeds the fair share rate tend to be chosen as the flow with the longest queue length, so it is reasonable to increase the packet dropping probability of such flows.

The remainder of this paper is organised as follows. Section 2 overviews conventional queue management disciplines with per-flow scheduling. Section 3 describes our proposed AQM3 discipline. Section 4 uses computer simulations to confirm that it achieves the requirements set for AQM3. Finally, conclusions are provided in Section 5.

2 Queue management disciplines with per-flow scheduling

In this section, we describe two conventional queue management disciplines: LQD and RED. LQD is the most commonly used queue management discipline with per-flow scheduling. RED is the best-known AQM1 scheme and it can also be used as AQM3.

2.1 Longest queue drop (LQD)

The logic of LQD is that flows whose sending rate exceeds the fair share rate are likely to have longer queues. Thus, it is desirable to drop packets of such flows preferentially to achieve fair bandwidth allocation. When a new packet arrives at a router and the buffer is full, the router chooses the flow whose queue length is the longest and drops the front packet from the buffer of the chosen flow. Drop-from-front informs the traffic senders about the packet loss earlier than drop-tail does, which increases throughput. A simple sketch of this drop decision is given below.

Though LQD can achieve a high degree of throughput and fairness, it has two shortcomings. The first is that average queueing delays become long because the queue is likely to be full for a long period of time. The second is that LQD cannot support ECN. Since ECN is expected to be used widely in the future Internet, queue management disciplines with per-flow scheduling need to support ECN.
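To make the LQD behaviour concrete, the following Python sketch implements the drop decision just described for a shared buffer divided into per-flow FIFO queues. The class and variable names are illustrative assumptions, not code from the paper.

    from collections import deque

    class LQDBuffer:
        # Shared buffer with per-flow FIFO queues; on overflow, drop from the
        # front of the flow whose backlog is currently the longest (LQD).
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes   # total buffer size
            self.used = 0                    # bytes currently buffered
            self.queues = {}                 # flow id -> deque of packet sizes

        def enqueue(self, flow_id, pkt_size):
            q = self.queues.setdefault(flow_id, deque())
            while self.used + pkt_size > self.capacity:
                longest = max(self.queues, key=lambda f: sum(self.queues[f]))
                if not self.queues[longest]:
                    break                    # packet larger than the whole buffer
                self.used -= self.queues[longest].popleft()   # drop-from-front
            q.append(pkt_size)
            self.used += pkt_size

Under a per-flow scheduler such as DRR, dequeueing simply serves the flow queues in round-robin order; LQD only decides which packet is discarded when the shared buffer overflows.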

The authors in [10] proposed two new queue management disciplines: throughput DRR (TDRR) and queue state DRR (QSDRR). They showed that when their proposed mechanisms were used, short-lived TCP flows performed better than when LQD was used. However, TDRR and QSDRR suffer from the same problems as LQD because they are also classified as PQM.

2.2 Random early detection (RED)

When a new packet arrives at a RED router, the router decides whether to accept the packet based on the average queue length avg and two thresholds, minTH and maxTH. Specifically, if avg is below minTH, the packet is accepted. If avg lies between the two thresholds, the packet is dropped probabilistically. If avg is above maxTH, the packet is dropped. The average queue length at time t is calculated by the following low-pass filter:

avg(t) = (1 − w) · avg(t − 1) + w · q(t)

where w is a weight parameter and q(t) is the instantaneous queue length at time t. The weight w determines the time constant of the low-pass filter; in [2], w is set to 0.002. The optimal values of minTH and maxTH depend on the desired average queue size. In [2], minTH is set to five packets and maxTH to three times minTH. The basic idea behind RED is that the router detects incipient congestion from the relationship between avg and the two thresholds and notifies flows of the result, which keeps the average queue length low. As the authors in [8] showed, however, RED with per-flow scheduling does not achieve the same degree of fairness as LQD with per-flow scheduling. This means that RED does not satisfy one of the AQM3 requirements. A sketch of the RED decision logic appears below.
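As an illustration of the RED decision logic summarised above, the sketch below combines the EWMA update with the two-threshold test. The linear probability ramp and the max_p parameter follow the standard RED design in [2]; they are not spelled out in the text above, so treat the exact values as assumptions.

    import random

    class REDQueue:
        # Minimal RED decision: EWMA of the queue length plus two thresholds.
        def __init__(self, min_th=5, max_th=15, w=0.002, max_p=0.1):
            self.min_th = min_th   # minTH (packets)
            self.max_th = max_th   # maxTH (packets), three times minTH in [2]
            self.w = w             # EWMA weight
            self.max_p = max_p     # drop probability when avg reaches maxTH (assumed)
            self.avg = 0.0         # average queue length

        def on_arrival(self, q_now):
            # avg(t) = (1 - w) * avg(t-1) + w * q(t)
            self.avg = (1.0 - self.w) * self.avg + self.w * q_now
            if self.avg < self.min_th:
                return "accept"
            if self.avg >= self.max_th:
                return "drop"
            # Between the thresholds: drop probabilistically.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return "drop" if random.random() < p else "accept"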

3 Proposed mechanism

First, we define the following parameters:

B = buffer size
Q = total queue length of active flows
TH = predefined threshold
qlen[i] = queue length of flow i
P[i] = packet dropping probability of flow i
prevTime[i] = time when the previous update of P[i] occurred
minITVL = minimum time interval between two successive updates of P[i]
N = total number of active flows
now = current time

Note that a flow is defined as active if it has at least one packet in the buffer.

When a new packet of flow i arrives at a router, the router calculates EQ, which represents Q plus the size of the arriving packet. In addition, if the router does not maintain the state of flow i, P[i] is set to zero. If (EQ > B), the router chooses the flow with the longest queue length and drops the front packet from the buffer of the chosen flow. That is, when EQ exceeds B, the proposed mechanism and LQD behave the same. If (EQ ≤ B), the router chooses the flow whose queue length is the longest; let L be the identifier of the chosen flow. If flow L is ECN capable, the router marks the first unmarked packet with probability P[L]. If flow L is non-ECN capable, the router drops the front packet from the buffer of flow L with probability P[L]. When a packet of flow i is transmitted, if qlen[i] is 0 the router eliminates the state of flow i. That is, routers maintain the states of only active flows.

The value of P[i] is updated as follows. When a new packet arrives, if (EQ > TH) we check whether (now − prevTime[L]) is more than minITVL, where L is the identifier of the flow whose qlen is the longest. If the condition is true, P[L] is increased by α, the increase parameter, and prevTime[L] is set to now. When a packet of flow i is transmitted, if (P[i] > 0) and (Q/N > qlen[i]), we check whether (now − prevTime[i]) is more than minITVL. If the condition is true, P[i] is decreased to β · P[i], where β is the decrease parameter, and prevTime[i] is set to now. A sketch of this update rule is given at the end of this section.

The logic behind the update is as follows. Flows whose sending rate exceeds the fair share rate tend to be chosen as the flow with the longest queue length. Thus, it is reasonable to increase the packet dropping probability of such flows. Q/N represents the average buffer occupancy of each active flow. If (Q/N > qlen[i]), the current queue length of flow i is less than the average, so it is desirable to decrease the packet dropping probability of flow i. For completeness, the pseudo-code of the proposed mechanism is shown in the appendix.

We now describe how our proposed scheme supports ECN flows, because ECN support is one of the biggest differences between the proposed mechanism and LQD. The TCP sender sets the ECN capable transport (ECT) bit in the IP header of data packets to inform routers that it supports ECN. The router chooses the flow with the longest queue length; if the flow is ECN capable, the router marks the first unmarked packet with probability P[L] by setting the congestion experienced (CE) bit in the IP header. If the TCP receiver receives a packet with the CE bit set, it sets the ECN-echo (ECE) bit in the TCP header of the subsequent ACK packet. The receiver continues to set the ECE bit until it receives a data packet with the congestion window reduced (CWR) bit set, which provides robustness against the loss of an ACK packet with the ECE bit set. When the TCP sender receives an ACK packet with the ECE bit set, it halves its congestion window. In addition, the sender sets the CWR bit in the TCP header of the next data packet after the reduction of the window. The TCP sender reacts once per RTT to ACK packets whose ECE bits are set.
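As a minimal sketch of the probability update just described, assume the router keeps per-flow state in a dictionary mapping a flow identifier to its queue length, its probability P, and the time of its last update; the function and field names are illustrative, not from the paper.

    def update_on_arrival(state, EQ, TH, now, min_itvl, alpha):
        # If the enlarged queue EQ exceeds TH, raise P of the longest-queue flow.
        if EQ > TH:
            L = max(state, key=lambda f: state[f]["qlen"])
            if now - state[L]["prev_time"] > min_itvl:
                state[L]["P"] = min(1.0, state[L]["P"] + alpha)
                state[L]["prev_time"] = now

    def update_on_departure(state, i, Q, now, min_itvl, beta):
        # If flow i's backlog is below the per-flow average Q/N, decay its P.
        N = len(state)
        entry = state[i]
        if entry["P"] > 0 and Q / N > entry["qlen"]:
            if now - entry["prev_time"] > min_itvl:
                entry["P"] *= beta
                entry["prev_time"] = now

With the values used later in Section 4 (α = 0.05, β = 0.9, minITVL = 100 ms), P rises additively while congestion persists and decays geometrically once the flow's backlog falls below its share of the buffer; the cap at 1.0 mirrors the appendix pseudo-code.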

4 Simulated performance

As described before, there are three AQM3 requirements. One is to simultaneously achieve lower average queueing delays than LQD and the same throughput as LQD. Another is to achieve the same degree of fairness as LQD. The other is to achieve lower packet loss ratios than LQD when flows are ECN capable. We simulated the performance of the proposed mechanism to confirm that it fulfils these requirements. We compared it against LQD and Adaptive RED (ARED) [11]. ARED is an enhanced version of RED that tunes its parameters automatically to increase the robustness of RED. In ARED, network operators set only the target queue length; minTH and maxTH are set based on the target queue length, and the weight parameter w is set as a function of the link bandwidth.

4.1 Simulation model

All simulations were performed using ns-2 [12]. We used the TCP and UDP traffic generation code from the ns-2 distribution. The packet size was 1000 bytes. For TCP, the SACK option [13] and the timestamp option [14] were used. We changed the maximum advertised window to 64 packets. The other parameters took the default values of the simulator. For UDP, the sending rate was set to twice its fair share rate. The start time of each flow was randomised in the range 0 to 5 seconds in each simulation run. Though the proposed mechanism can be used in conjunction with any per-flow scheduling algorithm, we tested it with the deficit round robin (DRR) scheduling algorithm [15]. The code of DRR with LQD is also available from the ns-2 distribution. For DRR, the quantum was 1000 bytes. For ARED, the target queue length was 50 packets; the other parameter values were the same as in [11]. For the proposed mechanism, TH was 50 packets and minITVL was 100 ms. The increase parameter α was 0.05 and the decrease parameter β was 0.9. The simulation time was 100 seconds. We repeated each simulation 10 times and the average values over these runs are reported below. Note that the first 50 seconds of each run were not considered in determining the results.

We used the single congested link topology shown in Figure 1. Congestion occurred on the link between routers 1 and 2, and router 1 was the bottleneck point. Each output link had a capacity of 100 Mbps. The propagation delay between routers was 10 ms. The propagation delay between the router and source/destination i (i = 1, 2, ...) was set to (i mod 5) ms.

4.2 Scenario A

In scenario A, we investigated the performance of each scheme when TCP and UDP flows co-exist. When the total number of flows was N, we assumed that flows 1 to (N − 1) were TCP and flow N was UDP.

4.2.1 Fairness

This section evaluates each mechanism in terms of fairness. In the first simulation, the total number of flows was fixed at 10 and the buffer size was varied from 60 to 250 packets. Figure 2 shows Jain's fairness index [16] versus the buffer size. The fairness index is defined as

fairness index = (Σ x_i)^2 / (N · Σ x_i^2),

where the sums run over i = 1, ..., N, x_i is the measured throughput of flow i, and N is the total number of flows. The fairness index always lies between 0 and 1, and a value closer to 1 represents a higher degree of fairness; a short sketch of this computation is given at the end of this subsection. We can see that the proposed mechanism achieves approximately the same degree of fairness as LQD. Their fairness index is more than 0.99 regardless of the buffer size. On the other hand, ARED has the worst performance: UDP flows receive much more bandwidth than TCP flows, because the packet dropping policy in ARED effectively acts only on the TCP flows. This result clarifies that merely using a per-flow scheduling policy, without an appropriate queue management discipline, does not lead to good performance.

In the second simulation, the buffer size was fixed at 250 packets and the total number of flows was varied from 10 to 50. Figure 3 shows the fairness index versus the total number of flows. Our mechanism achieves approximately the same degree of fairness as LQD regardless of the total number of flows. The fairness index of ARED approaches 1 as the total number of flows increases. The reason is that the sending rate of flow N, which is UDP, decreases as the total number of flows increases. In other words, the influence of flow N on the TCP flows declines as the total number of flows increases. From these results, it is apparent that ARED fails to achieve the AQM3 requirements, so we do not consider ARED hereafter.
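For reference, the fairness index defined above can be computed directly from the measured per-flow throughputs; the snippet below is illustrative and is not the script used for the simulations.

    def jain_fairness_index(throughputs):
        # Jain's index: (sum of x_i)^2 / (N * sum of x_i^2), in (0, 1].
        n = len(throughputs)
        total = sum(throughputs)
        return total * total / (n * sum(x * x for x in throughputs))

    print(jain_fairness_index([10.0] * 10))          # 1.0: perfectly fair shares
    print(jain_fairness_index([8.0] * 9 + [28.0]))   # about 0.74: one aggressive flow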

4.2.2 Throughput and average queueing delay

This section evaluates each mechanism in terms of throughput and average queueing delay. In the first simulation, the total number of flows was fixed at 10 and the buffer size was varied from 60 to 250 packets. Figure 4 shows throughput versus the buffer size. Throughput in Figure 4 represents the sum of the throughput of each flow, normalised by the bottleneck link capacity of 100 Mbps. We can see that the proposed mechanism achieves approximately the same throughput as LQD; their normalised throughput is more than 0.99 regardless of the buffer size. Figure 5 shows average queueing delay versus the buffer size. For each data packet, we measured the time from when it arrived at router 1 until it was transmitted from the router. We can see that the proposed mechanism achieves lower average queueing delay than LQD. Since the proposed mechanism probabilistically drops packets before the buffer is full, its average queue length is lower than that of LQD.

In the second simulation, the buffer size was fixed at 250 packets and the total number of flows was varied from 10 to 50. Figure 6 shows throughput versus the total number of flows. Our mechanism achieves approximately the same throughput as LQD regardless of the total number of flows. Figure 7 shows average queueing delay versus the total number of flows. Again, the proposed mechanism achieves lower average queueing delay than LQD. Thus, we can say that the proposed mechanism achieves lower average queueing delay than LQD while matching its throughput.

4.2.3 Packet loss ratio

This section evaluates each mechanism in terms of packet loss ratio when TCP flows are ECN capable. In the first simulation, the total number of flows was fixed at 10 and the buffer size was varied from 60 to 250 packets. Figure 8 shows packet loss ratio versus the buffer size. We can see that the proposed mechanism achieves lower packet loss ratios than LQD; when the buffer size was more than 150 packets, its packet loss ratio was 0. In the second simulation, the buffer size was fixed at 250 packets and the total number of flows was varied from 10 to 50. With the proposed mechanism, the packet loss ratio was 0 regardless of the total number of flows, whereas with LQD the packet loss ratio was always more than 10^-4. Thus, we can say that the proposed mechanism achieves lower packet loss ratios than LQD when flows are ECN capable. From the results in scenario A, we conclude that the proposed mechanism fulfils the AQM3 requirements in an environment where TCP and UDP flows co-exist.

4.3 Scenario B

In scenario B, we investigated the performance of each scheme when TCP flows have different RTTs. The total number of flows was set to 50. The propagation delay between the router and source/destination i (i = 1, 2, ..., 50) was set to (i mod r) ms, where r was varied from 10 to 50. We call the parameter r the RTT factor. The buffer size was fixed at 250 packets.

Figure 9 shows the fairness index versus the RTT factor. The fairness indices of the proposed mechanism and LQD decrease as the RTT factor increases, because the higher the RTT factor, the larger the differences between the RTTs. The fairness index of LQD is slightly better than that of the proposed mechanism; however, the fairness index of our mechanism is always more than 0.96. Figure 10 shows throughput versus the RTT factor. Throughput in Figure 10 represents the sum of the throughput of each flow, normalised by the bottleneck link capacity of 100 Mbps. The throughputs of the proposed mechanism and LQD are more than 0.99 regardless of the RTT factor. Figure 11 shows average queueing delay versus the RTT factor. We can see that the proposed mechanism achieves lower average queueing delay than LQD. We also measured the packet loss ratio when all flows were ECN capable. With our mechanism, the packet loss ratio was 0 regardless of the RTT factor, whereas with LQD it was always more than 5 × 10^-3. From the results in scenario B, we conclude that the proposed mechanism fulfils the AQM3 requirements in an environment where TCP flows have different RTTs.

4.4 Scenario C

In scenario C, we investigated the performance of each scheme when flows traverse more than one congested link. We used the topology shown in Figure 12, in which all links between routers were congested. The propagation delay between routers was 5 ms. The propagation delay between the router and source/destination i (i = 1, 2, ...) was set to (i mod 5) ms. Flows 1 to 25 were the target flows and traversed more than one congested link; the other flows were cross flows and traversed only one congested link. All flows were TCP.

Figure 13 shows the fairness index versus the number of congested links. The fairness indices of the proposed mechanism and LQD are more than 0.99 regardless of the number of congested links.

Figure 14 shows throughput versus the number of congested links. Throughput in Figure 14 represents the sum of the throughput of each target flow, normalised by 50 Mbps. The throughput of LQD is slightly higher than that of the proposed mechanism, but the difference is approximately 1 Mbps. Figure 15 shows average queueing delay versus the number of congested links. When the number of congested links is N − 1, we measured the time from when a data packet arrives at router 1 until the packet has been transmitted from router N − 1. We can see that the proposed mechanism achieves lower average queueing delay than LQD. We also measured the packet loss ratio when all flows were ECN capable. With our mechanism, the packet loss ratio was 0 regardless of the number of congested links, whereas with LQD it was always more than 3 × 10^-3. From the results in scenario C, we conclude that the proposed mechanism fulfils the AQM3 requirements under the multiple congested links topology.

5 Conclusions

This paper has proposed a new queue management discipline with per-flow scheduling. In the proposed mechanism, a packet dropping probability P is maintained for each active flow. When a packet arrives at a router, the router calculates EQ, which represents the total queue length plus the size of the arriving packet. If EQ exceeds the buffer size B, the router chooses the flow with the longest queue length and drops the front packet from the buffer of the chosen flow; that is, the proposed mechanism and LQD behave the same when EQ exceeds B. If (EQ ≤ B), the router chooses the flow with the longest queue length and then either marks the first unmarked packet or drops the front packet of the chosen flow with probability P. The value of P is updated as follows. When a packet arrives, if EQ exceeds the predefined threshold, P of the flow with the longest queue length is increased. When a packet of flow i is transmitted, if the buffer occupancy of flow i is less than the average, P of flow i is decreased.

We evaluated the performance of our mechanism by computer simulations, using fairness, throughput, average queueing delay, and packet loss ratio as performance metrics. The simulations clarified that the proposed mechanism achieves approximately the same degree of fairness and throughput as LQD. Moreover, it achieves smaller average queueing delays and packet loss ratios than LQD. In short, the proposed mechanism fulfils the requirements for AQM with per-flow scheduling.

References

[1] B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, and L. Zhang. Recommendations on queue management and congestion avoidance in the Internet. RFC 2309, April 1998.

[2] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Trans. Networking, 1:397-413, Aug. 1993.

[3] W. Feng, K. Shin, D. Kandlur, and D. Saha. The BLUE active queue management algorithms. IEEE/ACM Trans. Networking, 10:513-528, Aug. 2002.

[4] S. Athuraliya, S. Low, V. Li, and Q. Yin. REM: Active queue management. IEEE Network, 15(3):48-53, May/June 2001.

[5] D. Lin and R. Morris. Dynamics of random early detection. ACM SIGCOMM '97, pages 127-137, Sept. 1997.

[6] F. Anjum and L. Tassiulas. Fair bandwidth sharing among adaptive and non-adaptive flows in the Internet. IEEE INFOCOM, pages 1412-1420, March 1999.

[7] M. Nabeshima. Improving the performance of active buffer management with per-flow information. IEEE Communications Letters, 6(7):306-308, July 2002.

[8] B. Suter, T. Lakshman, D. Stiliadis, and A. Choudhury. Design considerations for supporting TCP with per-flow queueing. IEEE INFOCOM, pages 299-306, March 1998.

[9] K. Ramakrishnan, S. Floyd, and D. Black. The addition of explicit congestion notification (ECN) to IP. RFC 3168, Sept. 2001.

[10] A. Kantawala and J. Turner. Queue management for short-lived TCP flows in backbone routers. IEEE GLOBECOM, pages 2380-2384, Nov. 2002.

[11] S. Floyd, R. Gummadi, and S. Shenker. Adaptive RED: An algorithm for increasing the robustness of RED's active queue management. Available from http://www.aciri.org/floyd/red.html.

[12] S. McCanne and S. Floyd. NS simulator, version 2.26. Available from http://www.isi.edu/nsnam/ns.

[13] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow. TCP selective acknowledgment options. RFC 2018, Oct. 1996.

[14] V. Jacobson, R. Braden, and D. Borman. TCP extensions for high performance. RFC 1323, May 1992.

[15] M. Shreedhar and G. Varghese. Efficient fair queuing using deficit round robin. IEEE/ACM Trans. Networking, 4:375-385, June 1996.

[16] R. Jain. The Art of Computer Systems Performance Analysis. John Wiley and Sons, New York, 1991.

Appendix: Pseudo-code of our proposed mechanism

When a packet of flow i arrives:

    EQ = Q + size of the arriving packet;
    if (EQ > TH)
        inc_P();
    if (EQ > B) {
        while (EQ > B) {
            choose flow L whose qlen is the longest;
            drop the front packet from the buffer of flow L;
            EQ = EQ - size of the dropped packet;
        }
        the arriving packet is accepted;
    } else {
        choose flow L whose qlen is the longest;
        rnd = random number in the range 0.0 to 1.0;
        if (P[L] >= rnd) {
            if (flow L is ECN capable)
                mark the first unmarked packet of flow L;
            else  // non-ECN capable
                drop the front packet from the buffer of flow L;
        }
        the arriving packet is accepted;
    }

When a packet of flow i is transmitted:

    if ((P[i] > 0) and (Q/N > qlen[i])) {
        diff = now - prevTime[i];
        if (diff > minITVL) {
            P[i] = P[i] * β;
            prevTime[i] = now;
        }
    }

if (diff > minit V L) { P [L] = P [L] + α; if (P [L] > 1)P [L] = 1 prevt ime[l] = now; } } 14

List of Figure Captions

Figure 1. Single congested link topology.
Figure 2. Fairness index versus buffer size when TCP and UDP flows co-exist.
Figure 3. Fairness index versus total number of flows when TCP and UDP flows co-exist.
Figure 4. Throughput versus buffer size when TCP and UDP flows co-exist.
Figure 5. Average queueing delay versus buffer size when TCP and UDP flows co-exist.
Figure 6. Throughput versus total number of flows when TCP and UDP flows co-exist.
Figure 7. Average queueing delay versus total number of flows when TCP and UDP flows co-exist.
Figure 8. Packet loss ratio versus buffer size when TCP and UDP flows co-exist.
Figure 9. Fairness index versus RTT factor when TCP flows have different RTTs.
Figure 10. Throughput versus RTT factor when TCP flows have different RTTs.
Figure 11. Average queueing delay versus RTT factor when TCP flows have different RTTs.
Figure 12. Multiple congested links topology.
Figure 13. Fairness index versus number of congested links.
Figure 14. Throughput versus number of congested links.
Figure 15. Average queueing delay versus number of congested links.

[Figure 1: Single congested link topology.]

[Figure 2: Fairness index versus buffer size when TCP and UDP flows co-exist.]

[Figure 3: Fairness index versus total number of flows when TCP and UDP flows co-exist.]

[Figure 4: Throughput versus buffer size when TCP and UDP flows co-exist.]

[Figure 5: Average queueing delay versus buffer size when TCP and UDP flows co-exist.]

[Figure 6: Throughput versus total number of flows when TCP and UDP flows co-exist.]

[Figure 7: Average queueing delay versus total number of flows when TCP and UDP flows co-exist.]

[Figure 8: Packet loss ratio versus buffer size when TCP and UDP flows co-exist.]

[Figure 9: Fairness index versus RTT factor when TCP flows have different RTTs.]

[Figure 10: Throughput versus RTT factor when TCP flows have different RTTs.]

[Figure 11: Average queueing delay versus RTT factor when TCP flows have different RTTs.]

[Figure 12: Multiple congested links topology.]

[Figure 13: Fairness index versus number of congested links.]

[Figure 14: Throughput versus number of congested links.]

[Figure 15: Average queueing delay versus number of congested links.]