
Study of Active Queue Management Algorithms: Towards Stable Queues and High Link Utilization

Mengke Li, Huili Wang
Computer Science and Engineering, University of Nebraska Lincoln, Lincoln, NE 66588-0115
{mli, hwang}@cse.unl.edu

Abstract

The focus of this work is to study the behavior of various queue management algorithms: RED (Random Early Detection), SRED (Stabilized RED), and BLUE. The performance metrics for the comparison are queue size, drop probability, and link utilization. The simulations are done using NS-2. The results show that, unlike RED, which uses the queue length as the indicator of the severity of congestion, SRED and BLUE use packet loss and link idle events to manage congestion. As a result, SRED and BLUE achieve significantly better performance in terms of packet loss rate and buffer size requirements. Finally, we report on SBQ (Stochastic Blue Queue management), a newer queue management scheme that enforces fairness among a large number of flows.

Keywords: Congestion Control, SFQ, RED, SRED, BLUE, packet loss rate, buffer size, fairness

1. Introduction

As Internet bandwidth increases, more users share each link. It is important to adopt mechanisms that avoid packet loss, since all the resources consumed in transmitting a packet are wasted if the packet is lost. TCP (Transmission Control Protocol) congestion control has therefore been used over the last decade to adaptively control the rates of individual connections sharing IP (Internet Protocol) network links. The drop-tail queue has been widely used as the straightforward solution for queue management. However, TCP still experiences high packet loss, even though it also uses techniques such as congestion avoidance, slow start, fast retransmit, and fast recovery. This has led to much research on dynamic queue management.
The basic idea of dynamic queue management is to implicitly or explicitly notify sources to decrease their transmission rates before the queue overflows, in the hope that this coordination between sources and network will eliminate sustained packet loss. A typical dynamic queue management scheme is RED (Random Early Detection), which was recommended by the IETF for deployment in IP routers and is supported by many routers. It is now widely believed that a RED-controlled queue performs better than a drop-tail queue. However, the inherent design of RED, which uses the queue length as the indicator of the severity of congestion, makes it difficult to parameterize RED queues to give good performance under different network scenarios. In contrast to RED, SRED and BLUE use packet loss and link idle events to manage congestion. The basic idea of SRED and BLUE is to maintain a single probability with which packets are marked when they are queued. The probability is adjusted

according to the utilization of the link. The simulation results show better performance in packet loss rate and buffer size management.

The rest of the report is organized as follows. Section 2 describes the different queue management algorithms: RED, SRED, and BLUE. Section 3 presents the simulation results for the RED, SRED, and BLUE queue algorithms using NS-2. Section 4 summarizes the dynamic queue algorithms and reports on other approaches. Finally, Section 5 concludes and outlines future work.

2. Related Work

In this section we focus on RED, SRED, BLUE, and SFQ, briefly explaining each in its own subsection. The aim of this work is to compare these typical dynamic queuing algorithms rather than to exhaustively review the existing ones; the descriptions here are used in the performance comparison.

2.1 RED Algorithm

RED (Random Early Detection) was proposed by Floyd and Jacobson for congestion avoidance in packet-switched networks. The idea is simple: the gateway notifies the sources of incipient congestion by dropping arriving packets. The RED mechanism monitors the average queue size of each output queue with buffer size K. Whenever a packet arrives, RED computes the average queue size and decides whether to mark the packet by comparing the average queue size with two defined thresholds. If RED decides to mark the packet, it discards it. The source will wait for a timeout or duplicate ACKs to confirm the packet loss, and stop sending too many packets into the network. RED defines two thresholds, the minimum threshold MIN_th and the maximum threshold MAX_th. Let the current average queue size be Q. If Q is less than MIN_th, the packet is not marked. If Q is greater than MAX_th, the packet is always marked. If Q is between MIN_th and MAX_th, the packet is marked with probability p_m, which varies linearly as Q moves from MIN_th to MAX_th.
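The threshold rule just described can be sketched as follows. This is an illustrative Python sketch, not the ns-2 implementation; the function name is my own, and the parameter values are the defaults used later in Table 1.

```python
# Illustrative sketch (not the ns-2 code) of RED's threshold-based marking
# rule, using the default parameter values from Table 1.
MIN_TH = 50    # minimum threshold, packets
MAX_TH = 100   # maximum threshold, packets
MAX_P = 0.1    # maximum marking probability

def red_base_mark_prob(avg_q):
    """Base marking probability for an average queue size avg_q."""
    if avg_q < MIN_TH:
        return 0.0     # below MIN_th: never mark
    if avg_q >= MAX_TH:
        return 1.0     # at or above MAX_th: always mark
    # between the thresholds: rises linearly from 0 to MAX_p
    return MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)
```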
The final marking probability also increases slowly with Count, the number of packets enqueued since the last marked packet, starting from 0. The maximum probability to drop a packet is MAX_p, so the base packet-marking probability varies from 0 to MAX_p:

    p_a = MAX_p * (Q - MIN_th) / (MAX_th - MIN_th)

The final marking probability is then:

    p_m = p_a / (1 - Count * p_a)

The algorithm is as follows:

    for each arriving packet PCK:
        calculate the average queue size Q
        if MIN_th <= Q < MAX_th:
            calculate the probability p_m
            mark PCK with probability p_m
        else if Q >= MAX_th:
            mark PCK

There is an option for the RED gateway to measure the queue in bytes rather than in packets. With this option on, the average queue size accurately reflects the average delay at the gateway. The algorithm above then needs to be modified so that the probability that a packet is marked is proportional to the packet size in bytes:

    p_a = MAX_p * (Q - MIN_th) / (MAX_th - MIN_th)
    p_a = p_a * PacketSize / MaximumPacketSize
    p_m = p_a / (1 - Count * p_a)

From the above description of RED, it is easy to conclude that the effectiveness of RED relies on properly setting the following parameters: MIN_th, MAX_th, and MAX_p.

2.2 SRED Algorithm

SRED (Stabilized RED) derives from RED with some additional features. Like RED, SRED discards packets with a load-dependent probability when they arrive at the queue, even when the buffer is not full. In addition, SRED can stabilize its buffer occupancy at a level independent of the number of active connections, over a wide range of load levels, by statistically estimating the number of active flows. This estimate is obtained without collecting or analyzing state information on individual flows. Its goal is to identify flows that are taking more than their fair share of bandwidth, and to allocate a fair share of bandwidth to all flows, without incurring too much computation.

Whenever a packet arrives at the queue, SRED performs a comparison test and declares a hit only when two packets come from the same flow. A simple way to do the comparison is to compare the arriving packet with a packet still in the queue. However, searching the whole queue is

O(n), where n is the size of the queue, and therefore time-consuming. Instead, SRED uses a Zombie List as the reference for comparison. The zombie list can be thought of as a small list of M recently seen flows, with two extra pieces of information per flow: a Count and a timestamp. It starts out empty. While the zombie list is not full, SRED adds each arriving packet's flow identifier to the list, setting the Count of that zombie to zero and its timestamp to the arrival time of the packet. Once the zombie list is full, SRED compares each arriving packet with a randomly selected zombie in the list. If the two match, SRED declares a hit and increases the Count of that zombie by one. If they do not match, SRED declares a miss and, with probability p, replaces the chosen zombie with the arriving packet's flow identifier, setting the Count to 0 and the timestamp to the arrival time at the buffer.

SRED maintains an estimate P(t) of the hit frequency around the time of the arrival of the t-th packet at the buffer. Let

    Hit(t) = 1 if the t-th packet caused a hit, 0 otherwise
    P(t) = (1 - α) * P(t-1) + α * Hit(t), where 0 < α < 1

SRED uses 1/P(t) as an estimate of the effective number of active flows shortly before the arrival of packet t: when there are exactly N flows with arrival probabilities π_i, i = 1..N, the hit frequency approaches the sum of the π_i², which equals 1/N when all flows send at equal rates. To reduce comparison overhead, SRED may update P(t) only on a fraction of arrivals.

The drop probability P_zap for a packet arriving at a queue of length q is:

    P_zap = P_sred(q) * min(1, 1/(256 * P(t))²)

where

    P_sred(q) = P_max           if B/3 <= q < B
              = 0.25 * P_max    if B/6 <= q < B/3
              = 0               otherwise

and B is the buffer size. Note that for 0 <= P(t) < 1/256 the min(·) term equals 1, so P_zap = P_sred(q). In full SRED, the drop probability is modified to also take hits into account:

    P_zap = P_sred(q) * min(1, 1/(256 * P(t))²) * (1 + Hit(t)/P(t))

2.3 BLUE Algorithm

The key idea behind BLUE is to perform queue management based directly on packet loss and link utilization rather than on the instantaneous or average queue length. BLUE is thus fundamentally different from the RED and SRED queue management algorithms.
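Before moving on, the SRED mechanics above can be made concrete with a minimal Python sketch of the zombie list, the hit-frequency estimate P(t), and the simple-SRED drop probability. This is my own illustrative code, not the router implementation; the class and method names, and the choice α = 1/M, are assumptions.

```python
import random

# Minimal illustrative sketch of SRED (not the actual router code).
# Class/method names and the choice alpha = 1/M are assumptions.
class Sred:
    def __init__(self, m=1000, p_refresh=0.25, p_max=0.15, buf=450):
        self.m = m                  # zombie list capacity M
        self.p_refresh = p_refresh  # overwrite probability on a miss
        self.p_max = p_max          # maximum drop probability
        self.buf = buf              # buffer size B, packets
        self.alpha = 1.0 / m        # smoothing weight for P(t)
        self.zombies = []           # entries: [flow_id, count, timestamp]
        self.p_hit = 0.0            # P(t), running hit-frequency estimate

    def on_arrival(self, flow_id, now):
        """Update the zombie list and P(t); return Hit(t) for this packet."""
        if len(self.zombies) < self.m:
            self.zombies.append([flow_id, 0, now])
            hit = 0
        else:
            i = random.randrange(self.m)
            if self.zombies[i][0] == flow_id:
                self.zombies[i][1] += 1          # hit: same flow seen again
                hit = 1
            else:
                hit = 0
                if random.random() < self.p_refresh:
                    self.zombies[i] = [flow_id, 0, now]
        self.p_hit = (1 - self.alpha) * self.p_hit + self.alpha * hit
        return hit

    def p_sred(self, q):
        """Queue-length component of the drop probability."""
        if q >= self.buf / 3:
            return self.p_max
        if q >= self.buf / 6:
            return 0.25 * self.p_max
        return 0.0

    def p_zap(self, q):
        """Simple-SRED drop probability at queue length q."""
        if self.p_hit < 1.0 / 256:       # min(.) term is 1 in this regime
            return self.p_sred(q)
        return self.p_sred(q) * min(1.0, 1.0 / (256 * self.p_hit) ** 2)
```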
BLUE maintains a single probability p_m, which it uses to mark (or drop) packets when they are queued. If the queue is continually dropping packets due to buffer overflow, BLUE increments the marking probability, increasing the rate at which it sends back

congestion notification. Conversely, BLUE decreases its marking probability if the queue is empty or the link is idle. The BLUE algorithm is as follows:

    upon packet loss (or Q_len > L) event:
        if ((now - last_update) > freeze_time):
            p_m = p_m + d1
            last_update = now

    upon link idle event:
        if ((now - last_update) > freeze_time):
            p_m = p_m - d2
            last_update = now

To control how quickly p_m changes over time, BLUE uses the parameters freeze_time, d1, and d2. freeze_time determines the minimum time interval between two successive updates of p_m. d1 and d2 determine the amounts by which p_m is incremented when the queue overflows and decremented when the link is idle. By weighting heavily against packet loss (d1 larger than d2), BLUE can react quickly to a substantial increase in traffic load. BLUE is thus a very simple algorithm, but its performance also depends on proper parameter settings.

3. Experiments and Evaluation

In this section we first describe the network configuration and the NS-2 simulation model, then the series of experiments performed, and finally the evaluation results. The NS simulator is a large software package with various versions. We first used the ns-1.4b installation set up by Dr. Byrav Ramamurthy on the CSE952 account, and then installed the ns-2.19b-snapshot version to integrate our own programs. We would like to thank Dr. Byrav Ramamurthy; the CSE account manager, Dr. Scott Chaffin, who increased our disk and file quotas; and the maintainers of the NS resources at ISI.

Figure 2 shows the NS-2 network model, a simple bottleneck configuration with two routers and a number of subnet nodes. Each subnet has a number of TCP and UDP sources. This configuration can represent the interconnection of LANs to WANs, or dial-up users accessing an ISP network through WANs. All simulations run for 100 seconds. The shared bottleneck link is 10 Mbps with a 100 ms delay (see Figure 2).
The TCP source subnets use 100 Mbps links with 2 ms delay.

Figure 2: Network topology for the queuing study. (Sources s1-s10 connect to Router1 over 100 Mbps, 2 ms links; the Router1-Router2 bottleneck link, 10 Mbps with 100 ms delay, runs Drop-Tail, RED, SRED, BLUE, or SFQ; sinks d1-d10 connect to Router2 over 100 Mbps, 5 ms links.)

3.1 Random Early Detection

In this scenario, the traffic consists of TCP traffic with 10 sources and 10 sinks. The RED parameters are set according to the following table:

Table 1: RED parameter values
    Parameter                          Default value
    Queue size                         100 packets
    Minimum threshold (min_th)         50 packets
    Maximum threshold (max_th)         100 packets
    Maximum value for p_b (max_p)      0.1
    Queue weight (w_q)                 0.002
    Average queue size in bytes        FALSE

The TCP source traffic window size is 511. By tracing the simulation we obtain the queue size, the average queue size, and the packet drop probability over time. The bandwidth can also be estimated by multiplying the average queue size by the packet size.

Figure 3. The queue size and average queue size of the RED queue (with the minimum and maximum thresholds marked).

Figure 4. The final packet-drop probability of the RED queue.

Figure 5. Bandwidth of the RED queue simulation.

3.2 Stabilized RED

In this scenario, the traffic again consists of TCP traffic with 10 sources and 10 sinks. The SRED parameters are set according to the following table:

Table 2: SRED parameter values
    Parameter                                     Default value
    Number of zombies in the zombie list (M)      1000
    Maximum drop probability (P_max)              0.15
    Refresh probability for the zombie list (p)   0.25
    Maximum value for p_b (max_p)                 0.1

By tracing the simulation we obtain the queue size, the average queue size, and the packet drop probability over time. The bandwidth is computed by summing the bytes received at the SRED queue in each 0.1-second interval. The bandwidth of the bottleneck link is presented in Figure 6; the average bandwidth ranges from about 0.27 Mbps to 0.31 Mbps.
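The per-interval byte counting just described (implemented by the awk script "get" in the appendix) can be sketched in Python. The function name is my own; the assumed trace layout (event type in field 1, timestamp in field 2, packet size in field 6) follows the standard ns-2 trace format, and the result is megabits per bin, mirroring the appendix script.

```python
# Sketch of the bandwidth computation described above: sum the bytes of
# received ("r") events in 0.1 s bins, reported as megabits per bin.
# Assumes ns-2 trace lines with the event type in field 1, the timestamp
# in field 2, and the packet size in field 6.
def mbits_per_bin(trace_lines, bin_width=0.1):
    """Return (bin_end_time, megabits) pairs from ns-2 trace lines."""
    bins = []
    last, total_bytes = 0.0, 0
    for line in trace_lines:
        fields = line.split()
        if not fields or fields[0] != "r":
            continue                      # only count receive events
        t, size = float(fields[1]), int(fields[5])
        total_bytes += size
        if t - last > bin_width:
            bins.append((t, total_bytes * 8 / 1e6))   # bytes -> megabits
            total_bytes = 0
            last = t
    return bins
```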

The packet drop rate of the SRED queuing algorithm is presented in Figure 7. The packet loss rate is about a quarter of that of the RED algorithm, and SRED reaches a stable packet drop rate quickly.

Figure 7: Packet drop rate of SRED.

3.3 BLUE

In this scenario, the traffic again consists of TCP traffic with 10 sources and 10 sinks. The BLUE parameters are set according to the following table:

Table 3: BLUE parameter values
    Parameter                             Default value
    Initial drop probability              0.05
    Freeze time period                    0.01
    Increase in drop probability (d1)     0.00025
    Decrease in drop probability (d2)     0.000025
    Buffer size (B)                       450 packets

The bandwidth of the BLUE queuing algorithm is presented in Figure 8. It is very similar to that of SRED and higher than that of RED; the average bandwidth of BLUE ranges from about 0.28 Mbps to 0.35 Mbps. The packet drop rate of the BLUE queuing algorithm is presented in Figure 9. BLUE achieves a very low packet drop rate, but it takes a very long time to reach this low, stable rate.
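The slow convergence follows directly from BLUE's update rules: p_m moves by at most d1 (or d2) once per freeze_time interval. A minimal Python sketch using the Table 3 values (the class and method names are my own assumptions, not ns-2 code):

```python
# Illustrative sketch of BLUE's marking-probability updates (Section 2.3),
# using the Table 3 parameter values. Names are assumptions, not ns-2 code.
FREEZE_TIME = 0.01   # minimum interval between updates, seconds
D1 = 0.00025         # increment on packet loss / queue over threshold
D2 = 0.000025        # decrement on link idle

class Blue:
    def __init__(self, p_m=0.05):
        self.p_m = p_m               # marking probability
        self.last_update = -1.0      # allow an immediate first update

    def on_congestion(self, now):
        """Packet loss or queue length above L: raise p_m by d1."""
        if now - self.last_update > FREEZE_TIME:
            self.p_m = min(1.0, self.p_m + D1)
            self.last_update = now

    def on_link_idle(self, now):
        """Link idle: lower p_m by d2."""
        if now - self.last_update > FREEZE_TIME:
            self.p_m = max(0.0, self.p_m - D2)
            self.last_update = now
```

With d1 = 0.00025 and updates rate-limited by freeze_time, raising p_m from the initial 0.05 to 0.1 takes at least 200 updates, which is consistent with the long settling time observed in the simulation.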

Figure 9. Packet drop rate of BLUE.

4. Performance Evaluation

In this section we evaluate the performance of RED, SRED, and BLUE, and give comparison results based on the evaluation. The TCP sources are based on a TCP-Vegas implementation. The TCP connections are modeled as greedy FTP connections: they always have data to send as long as their congestion windows permit. The maximum segment size (MSS) for TCP is 536 bytes. All simulations use TCP traffic sources over the same network. However, the performance of each queuing algorithm depends on its parameter configuration, which makes a fair comparison difficult; to be fair, we set most of the parameter values in each algorithm to the values recommended in the original papers.

The simulation results are presented in the figures. Figure 5 shows the bandwidth used by RED, Figure 6 that of SRED, and Figure 8 that of BLUE. The bandwidth usage of the BLUE and SRED queue managements is clearly better than that of RED: the average bandwidth of RED ranges from about 0.1 Mbps to 0.3 Mbps, while that of SRED ranges from about 0.2 Mbps to 0.4 Mbps and that of BLUE from about 0.28 Mbps to 0.35 Mbps. Figure 4 shows the packet loss rate of RED, Figure 7 that of SRED, and Figure 9 that of BLUE.

5. Conclusions and Future Work

From the simulation results, we can clearly see that the RED algorithm does not perform as well as the SRED and BLUE algorithms in a heavily loaded network. In addition, RED has difficulty stabilizing the queue size, and configuring RED's parameters is another challenging task under different network scenarios and over a wide range of load levels. BLUE also has difficulty stabilizing the queue size, needing some start-up time to reach a well-tuned state. To solve this problem, other researchers have proposed an enhanced BLUE algorithm, the Stochastic BLUE (also called fair-blue) queuing algorithm, which we would like to study and simulate in the future. This work also does not cover fairness, another important property of queuing algorithms; future work can therefore focus on the fairness issue as well.

References

[1] V. Jacobson, "Congestion Avoidance and Control," Proc. ACM SIGCOMM '88, Aug. 1988, pp. 314-329.
[2] W. Richard Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms," RFC 2001, Jan. 1997.
[3] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, 1(4):397-413, August 1993.
[4] T. J. Ott, T. V. Lakshman, and L. H. Wong, "SRED: Stabilized RED," Proc. IEEE INFOCOM '99, New York, NY, March 21-25, 1999, pp. 1346-1355.
[5] W. Feng, D. D. Kandlur, D. Saha, and K. G. Shin, "BLUE: A New Class of Active Queue Management Algorithms," Technical Report CSE-TR-387-99, Dept. of EECS, University of Michigan, April 1999.
[6] Chengyu Zhu, Oliver W. W. Yang, James Aweya, Michel Ouellette, and Delfin Y. Montuno, "A Comparison of Active Queue Management Algorithms Using OPNET Modeler," best paper award at OPNET 2001.

Appendix
1. Tcl script file for RED: red.tcl (including the awk post-processing calls)

# Mengke Li, RED
# Simulate the RED queue to obtain the average queue size,
# the packet mark probability, and the bandwidth.
set ns [new Simulator]

# Define the RED parameters
Queue/RED set thresh_ 50
Queue/RED set maxthresh_ 100

Queue/RED set mean_pktsize_ 500
Queue/RED set q_weight_ 0.002
Queue/RED set linterm_ 30
Queue/RED set drop_rand_ true

#
# Create a simple 22-node topology:
#
#  s1                                         d1
#    \                                       /
#     \ 100Mb,2ms   10Mb,100ms    100Mb,5ms /
# ...  r1 --------------------------- r2  ...
#     / 100Mb,2ms                100Mb,5ms  \
#    /                                       \
#  s10                                        d10
#

# Set up the two RED-queue routers
set node_(r1) [$ns node]
set node_(r2) [$ns node]

# Number of source and sink nodes
set nodenum 10

# Simulation time
set finish_time 100

# Create node_(s0)..node_(s9) as sources and node_(d0)..node_(d9) as sinks
for {set i 0} {$i < $nodenum} {incr i} {
    set node_(s$i) [$ns node]
    set node_(d$i) [$ns node]
}

# Create the links between the sources/sinks and the routers
for {set i 0} {$i < $nodenum} {incr i} {
    $ns duplex-link $node_(s$i) $node_(r1) 100Mb 2ms DropTail
    $ns duplex-link $node_(d$i) $node_(r2) 100Mb 5ms DropTail
}

# Create the RED bottleneck link
$ns duplex-link $node_(r1) $node_(r2) 10Mb 100ms RED

# Set the queue size
$ns queue-limit $node_(r1) $node_(r2) 100
$ns queue-limit $node_(r2) $node_(r1) 100
$ns duplex-link-op $node_(r1) $node_(r2) queuePos 0
$ns duplex-link-op $node_(r2) $node_(r1) queuePos 0

# Create TCP traffic, each source with one attached FTP agent
for {set i 0} {$i < $nodenum} {incr i} {
    set tcp($i) [$ns create-connection TCP $node_(s$i) TCPSink $node_(d$i) $i]
    $tcp($i) set window_ 127
    set ftp($i) [$tcp($i) attach-source FTP]
}

# Set up tracing of the RED queue
set redq [[$ns link $node_(r1) $node_(r2)] queue]
set tchan_ [open red_all.q w]
$redq trace curq_
$redq trace ave_
$redq trace prob1_
$redq attach $tchan_

# Start each FTP source at a random time
set rng [new RNG]
for {set i 0} {$i < $nodenum} {incr i} {
    set start_time [$rng uniform 0 1]
    $ns at $start_time "$ftp($i) start"
}
$ns at $finish_time "finish"

# Define the 'finish' procedure (including post-simulation processing)
proc finish {} {
    global tchan_

    # awk program that extracts the queue size and average queue size
    # into red_q.q and red_q.a
    set queuesize {
        {
            if ($1 == "Q" && NF > 2)
                print $2, $3 >> "red_q.q";
            else if ($1 == "a" && NF > 2)
                print $2, $3 >> "red_q.a";
        }
    }

    # awk program that extracts the packet drop probability into red_d
    set LostRate {
        {
            if ($1 == "p")
                print $2, $3 >> "red_d";
        }
    }

    # awk program that computes the bandwidth into red_b
    set GetBandwidth {
        {
            if ($1 == "a" && NF > 2)
                print $2, $3 * 500 * 8.0 / 1000000 >> "red_b";
        }
    }

    # Data file for the xgraph bandwidth plot
    set bd [open red.bd w]
    puts $bd "TitleText: Bandwidth of RED"
    exec rm -f red_b
    exec touch red_b
    exec awk $GetBandwidth red_all.q
    puts $bd \"Bandwidth
    exec cat red_b >@ $bd
    close $bd
    exec xgraph -bb -tk -x "Time" -y "Bandwidth (Mbps)" red.bd -geometry 800x400 &

    # Data file for the xgraph queue size plot
    set f [open red.queue w]
    puts $f "TitleText: Queue Size of RED"
    if { [info exists tchan_] } {
        close $tchan_
    }
    exec rm -f red_q.q red_q.a red_d
    exec touch red_q.a red_q.q red_d
    exec awk $queuesize red_all.q
    puts $f \"Current_Queue_Size
    exec cat red_q.q >@ $f
    puts $f \n\"Average_Queue_Size
    exec cat red_q.a >@ $f
    close $f
    exec xgraph -bb -tk -x "Time" -y "Queue Size" red.queue -geometry 800x400 &

    # Data file for the xgraph packet drop probability plot
    set f1 [open red_d.q w]
    puts $f1 "TitleText: Packet Drop Probability of RED"
    puts $f1 \"Packet_Drop_Probability
    exec awk $LostRate red_all.q
    exec cat red_d >@ $f1
    close $f1
    exec xgraph -bb -tk -x "Time" -y "Packet Drop Probability" red_d.q -geometry 800x400 &

    exit 0
}

$ns run

2. Implementation of SRED: mainly based on the reference code from http://home.lanl.gov/sunil
3. Tcl script file for SRED: sred.tcl, similar to red.tcl
4. Implementation of BLUE: mainly based on the reference code from http://home.lanl.gov/sunil
5. Tcl script file for BLUE: blue.tcl, similar to red.tcl
6. AWK script to compute the bandwidth from the raw trace data of SRED: get

# Mengke Li
# Convert the raw trace data into formatted bandwidth data
BEGIN {
    system("rm -f sred_q")
    system("rm -f sred_q.all")
    system("rm -f sred_qq")
    if (ARGC != 2)
        print "No input file"
    else {
        input = ARGV[1]
        print input
        system("awk '{print $1, $2, $6}' " input " > sred_q.all")
        last = 0
        total = 0
        while ((getline data < "sred_q.all") > 0) {
            split(data, anarray, " ")
            indicator = anarray[1]
            time = anarray[2]
            packet = anarray[3]
            if (indicator == "r") {
                total = total + packet
                if (time - last > 0.1) {
                    total = total * 8 / 1000000
                    print time, total > "sred_q"
                    total = 0
                    last = time
                }
            }
        }
        close("sred_q")
    }
}

7. AWK script to compute the packet loss rate from the raw trace data of SRED: getdd, similar to get
8. AWK script to compute the bandwidth from the raw trace data of BLUE: get, similar to SRED's get
9. AWK script to compute the packet loss rate from the raw trace data of BLUE: getdd, similar to SRED's get

Note: Because the NS BLUE and SRED implementations have problems binding a LossMonitor agent to a node, we use the awk scripts above to obtain the formatted data.