Active Queue Management in MANETs: A Primer on Network Performance Optimization


Active Queue Management in MANETs: A Primer on Network Performance Optimization

1. Shallu Bedi, 2. Gagangeet Singh Aujla, 3. Sahil Vashist
1. Research Scholar, Department of CSE, Chandigarh Engineering College, Mohali
2, 3. Department of CSE, Chandigarh Engineering College, Mohali

Abstract
Congestion control and queue management are indispensable in MANETs, and as groundwork for research in this domain a review of Active Queue Management (AQM) techniques together with the most important network parameters is presented. A framework can then be devised to generate scenarios for various network configurations. Networks are becoming increasingly complicated in terms of the traffic they carry: alongside the simple data of earlier years they now carry multimedia, voice and secured encrypted data. The heterogeneous nature of this traffic can create problems for traffic management on backbone links. Multimedia traffic mostly flows over UDP, while traffic such as HTTPS flows over TCP. To analyze traffic management we must therefore understand the behavior of the various AQM techniques on the different traffic classes, viz. TCP and UDP. In this paper a multi-class network was analyzed for both TCP and UDP traffic with respect to network performance parameters such as throughput, packet loss ratio and average end-to-end delay under various AQM techniques. Four AQM techniques were tested for both TCP and UDP traffic classes. For UDP flows the fragment size of the packets generated at the source was also varied, along with network parameters such as bandwidth, delay and channel error rate. The analysis tries to identify the optimum AQM technique for the different classes of traffic under various conditions, and the results can be used to design networks carefully on a requirement basis.
In the case of TCP, SFQ was observed to perform best, as it employs a fair queuing algorithm to handle packet flows on a link with simultaneous sessions. For UDP, on the other hand, Drop Tail, RED and REM performed best in different scenarios, since their queue management depends only on the link state and the congestion, not on the traffic flow. Keywords: RED, REM, SFQ, Drop Tail, MANET.

1. INTRODUCTION
1.1 MOBILE AD-HOC NETWORK
A MANET is a collection of mobile nodes that operates without the required intervention of any centralized access point. It is a temporarily formed network which is created, operated and managed by the nodes themselves. In MANETs the nodes are wireless and battery powered. A MANET can be considered an autonomic system, as it is self-configuring, self-healing, self-organizing and self-protecting. No fixed routers are available in these networks. Internet connectivity lets users benefit both from the mobility offered by mobile ad hoc networks and from the connectivity provided by the Internet. A MANET can be a standalone network or attached to a larger network, including the Internet. In networks of this type all nodes can communicate freely with every other node, and the nodes are independent of each other. Examples are P2P and multihop connected networks. These networks have various advantages in terms of self-reconfiguration and adaptability to highly variable mobile characteristics such as the transmission conditions. The nodes in such a network behave as routers which discover and maintain routes to the other nodes in the network. Route creation depends only on nodes forwarding traffic on behalf of other nodes, as shown in Figure 1. A destination node that is not within range of the source node communicates with the help of intermediate nodes.
There are various applications, as shown in Figure 2, where MANETs are useful, such as data exchange in local groups and during emergencies such as earthquakes, where rescue teams use the ad-hoc networking concept to send information about the environment and the victims. MANETs also provide ubiquitous computing capability and access to information independent of location. They are helpful in disaster and military applications, and business people use ad-hoc networks to exchange information anywhere and at any time without needing to know whether any infrastructure will be available. MANETs also play a vital role in multimedia applications such as audio and video, another interesting domain. Kiess, Wolfgang and Martin Mauve [1].

Figure 1- Creation of Route by Nodes

Volume 4, Issue 2, March April 2015 Page 181

Figure 2- Applications of MANETs

Until a few years ago only a small number of industry and research groups were working on multimedia technologies. Today the growing interest in the technical community is having a significant impact on multimedia system architectures, algorithms and technology, and the field of multimedia communications is moving forward at a fast pace.

1.2 PERFORMANCE ISSUES IN MANETS
MANETs have always faced the constraint of being short of resources in terms of processing power, available bandwidth and other network parameters. The backbone link, i.e. the main router-to-router link, suffers the biggest scarcity of resources. Guan et al. [8]. As the network parameters change on the link, it becomes important to handle queuing on the main link, and many Active Queue Management techniques are available for this. Miskovic et al. [13]. The major performance issues of a MANET are throughput and packet loss, because node mobility is exceptionally high. Routing table updates are very frequent, which causes a resource-constrained network to perform poorly; another issue is the packet loss rate due to uneven resource distribution among the mobile nodes. MANETs have always faced the problems of poor battery life and limited processing power, which cause dynamic switching among mobile nodes with redundant paths. On redundant paths, data loss occurs in the form of queuing, buffering and packet-forwarding losses and delays at intermediate routing nodes. A viable approach to minimizing loss and increasing throughput is to implement state-of-the-art AQMs in the network.

1.3 ACTIVE QUEUE MANAGEMENT
In Internet routers, active queue management (AQM) is a technique that consists of dropping packets, or marking them with explicit congestion notification (ECN), before a router's queue is full. An Internet router typically maintains a set of queues, one per interface, that hold packets scheduled to go out on that interface.
Historically, such queues use a drop-tail discipline: a packet is put onto the queue if the queue is shorter than its maximum size (measured in packets or in bytes), and dropped otherwise. Chung and Claypool [4], Ke et al. [9]. Active queue disciplines drop or mark packets before the queue is full. Typically, they operate by maintaining one or more drop/mark probabilities, and probabilistically dropping or marking packets even when the queue is short. Drop-tail queues have a tendency to penalize bursty flows and to cause global synchronization between flows. By dropping packets probabilistically, AQM disciplines typically avoid both of these issues. By providing endpoints with congestion indication before the queue is full, AQM disciplines are able to maintain a shorter queue length than drop-tail queues, which combats bufferbloat and reduces network latency. Dana and Malekloo [5]. MANETs present many challenges, especially when real-time traffic must be supported with Quality of Service (QoS) guarantees. Providing QoS for real-time traffic over IP-based networks is still an open issue, because existing active queue management schemes have been designed for TCP-compatible traffic. MANETs present the worst-case scenario for QoS guarantees due to their distinct characteristics, such as contention from multiple users (when using ) and limited bandwidth. The objective of this paper is to evaluate various AQM techniques in order to identify, comparatively, the best queue management schemes for different resource-constrained networks. Feng et al. [6], Fountanas [7].

1.4 ACTIVE QUEUE MANAGEMENT TECHNIQUES
Networks have evolved to be more complex and sophisticated in nature, but resource constraints are always present. Owing to the high price of bandwidth, network resources limit performance, and the non-availability of state-of-the-art hardware and infrastructure limits it further. Ram and Manoj [16].
Some of the parameters that cause congestion in a network, especially on the backbone (router-to-router) links, are low bandwidth, high delay and channel error rates. Congestion on the link causes packets to queue up at a router, and when queuing grows beyond its permissible limits packet drops occur, which drastically bring down network performance. The combative techniques used to handle the queue on a router-to-router link are known as Active Queue Management techniques or schemes. AQM techniques used in MANETs include:
DropTail
RED (Random Early Detection)
REM (Random Exponential Marking)
SFQ (Stochastic Fair Queuing)
FRED (Flow Random Early Drop)
RED-PD (Random Early Detection with Preferential Dropping)
SRED (Stabilized Random Early Detection)
BLUE
AVQ (Adaptive Virtual Queue)
In this paper we discuss the most popular and common AQM techniques, SFQ, RED, REM and DropTail, in detail.
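All of the disciplines listed above share one shape: a per-packet admission decision made at enqueue time. As a baseline for comparison, the drop-tail discipline can be sketched in a few lines of Python (the class and method names are illustrative, not from any cited tool):

```python
from collections import deque

class DropTailQueue:
    """Baseline discipline: admit while there is room, drop otherwise."""
    def __init__(self, capacity):
        self.capacity = capacity      # maximum queue length in packets
        self.buffer = deque()

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            return False              # queue full: drop the arriving packet (the "tail")
        self.buffer.append(packet)
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None

q = DropTailQueue(capacity=3)
admitted = [q.enqueue(p) for p in range(5)]
print(admitted)   # [True, True, True, False, False]
```

The active disciplines discussed below replace only the admission test; the FIFO buffer itself stays the same.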

A. DROP TAIL
Drop tail is a simple queue mechanism used by routers to decide when packets should be dropped. In this mechanism, shown in Figure 3, each packet is treated identically, and when the queue is filled to its maximum capacity the newly arriving packets are dropped until the queue has sufficient space to accept incoming traffic. The drawback of DropTail is that when the queue is full the router starts to discard all extra packets, thus dropping the tail of the traffic, which is where the mechanism gets its name. The loss of packets (datagrams) causes the sender to enter slow start, which shrinks its congestion window and thus decreases throughput.

Figure 3- Main principle of Drop Tail

Drop Tail Algorithm
The drop tail algorithm can be followed step by step with the help of a flow chart:
If the number of packets arriving at a node > channel bandwidth
  Then check the buffer size:
    If buffer space >= packet size
      Then add the packet to the queue
      Else drop the packet
If the number of packets arriving at a node < channel bandwidth
  Then the packet enters the channel

B. RANDOM EARLY DETECTION
Random Early Detection (RED), Lin and Morris [12], is a congestion avoidance queuing mechanism (as opposed to a congestion administration mechanism) that is potentially useful, particularly in high-speed transit networks. Sally Floyd and Van Jacobson proposed it in various papers in the early 1990s. It is an active queue management mechanism. It operates on the average queue size and drops packets on the basis of statistical information. If the buffer is empty, all incoming packets are accepted. As the queue size increases, the probability of discarding a packet also increases. When the buffer is full the probability reaches 1 and all incoming packets are dropped. The advantage of RED is that it is able to avoid global synchronization of TCP flows, preserves high throughput as well as low delay, and attains fairness over multiple TCP connections. It is the most common mechanism for stopping congestive collapse. The main limitation of RED is that when the queue in the router starts to fill, a small percentage of packets are discarded. This is deliberate: it prompts TCP sources to decrease their window sizes and hence throttle back the data rate. This can cause low rates of packet loss in Voice over IP streams, and there have been reported incidents in which a series of routers applied RED at the same time, resulting in bursts of packet loss. Kwon and Fahmy [11], Ott et al. [14].

RED Algorithm
The general RED algorithm can be presented as follows in Figure 4, Neha and Abhinav Bhandari [22]:
If (Minth <= Avg < Maxth)
  Calculate the probability Pa and, with probability Pa, mark/drop the arriving packet
Else if (Maxth <= Avg)
  Mark/drop the arriving packet
Else
  Do not mark/drop the packet

Figure 4- Working of RED algorithm

RED was designed with the objectives to (1) minimize packet loss and queuing delay, (2) avoid global synchronization of sources, (3) maintain high link utilization and (4) remove biases against bursty sources.
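The RED decision rule above can be sketched briefly in Python; the parameter values (wq, minth, maxth, maxp) are chosen for illustration, not taken from the paper:

```python
import random

class REDQueue:
    """Sketch of RED: drop probabilistically based on an EWMA of the queue length."""
    def __init__(self, minth=5, maxth=15, maxp=0.1, wq=0.002):
        self.minth, self.maxth = minth, maxth
        self.maxp = maxp      # drop probability reached at maxth
        self.wq = wq          # EWMA weight for the average queue size
        self.avg = 0.0
        self.queue = []

    def enqueue(self, packet):
        # update the exponentially weighted moving average of the queue size
        self.avg = (1 - self.wq) * self.avg + self.wq * len(self.queue)
        if self.avg >= self.maxth:
            return False                       # forced drop/mark
        if self.avg >= self.minth:
            # Pa grows linearly from 0 at minth to maxp at maxth
            pa = self.maxp * (self.avg - self.minth) / (self.maxth - self.minth)
            if random.random() < pa:
                return False                   # early probabilistic drop/mark
        self.queue.append(packet)
        return True
```

The full algorithm additionally counts packets since the last drop and treats idle periods specially; those refinements are omitted in this sketch.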

C. RANDOM EXPONENTIAL MARKING
Random Exponential Marking (REM) is an attractive adaptive queue management algorithm. It uses a quantity known as price to measure congestion in a network. REM can achieve high utilization, small queue length and low buffer overflow probability. Many works have used control theory to establish the stability conditions of REM without considering feedback delay. Recently, sufficient conditions for local stability of REM have been provided for sources with a uniform one- or two-step feedback delay; nevertheless, no work had been done for the case of arbitrary uniform delay. The authors propose a continuous-time model to generalize the local stability condition for REM in a multilink, multisource network with arbitrary uniform feedback delay. Kwon and Fahmy [11], Victor et al. [20].

D. STOCHASTIC FAIR QUEUING
Fair Queuing is a queuing mechanism that allows multiple packet flows to share the link capacity comparatively fairly. Routers keep multiple queues on each output line, one per user. Whenever the line becomes idle, the router scans the queues round-robin and takes the first packet of the next queue. FQ also ensures maximum throughput of the network; for more efficiency a weighted queue mechanism can be used. This queuing mechanism is based on the fair queuing algorithm proposed by John Nagle. Because it is impractical to have one queue for each conversation, SFQ uses a hashing algorithm which divides the traffic over a limited number of queues. It is less efficient than other queue mechanisms, but it also requires less calculation while being almost perfectly fair. It is called "stochastic" because it does not actually assign a queue to every session; instead it divides traffic over a restricted, fairly large number of FIFO queues using the hashing algorithm. Paul E. McKenney [23].
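Before the detailed enqueue/dequeue steps that follow, the core SFQ idea — hashing flows onto a fixed set of FIFO queues served round-robin — can be sketched as below (the flow-key format and class names are illustrative assumptions):

```python
from collections import deque
import itertools

class SFQSketch:
    """Sketch of stochastic fairness queuing: hash flows onto a fixed set of FIFOs."""
    def __init__(self, nqueues=8, perturbation=0):
        self.queues = [deque() for _ in range(nqueues)]
        self.perturbation = perturbation           # changed periodically to reshuffle hash collisions
        self.rr = itertools.cycle(range(nqueues))  # round-robin service pointer

    def index(self, flow_key):
        # the hash of (flow identifier, perturbation) selects the queue
        return hash((flow_key, self.perturbation)) % len(self.queues)

    def enqueue(self, flow_key, packet):
        self.queues[self.index(flow_key)].append(packet)

    def dequeue(self):
        # serve the non-empty queues in round-robin order
        for _ in range(len(self.queues)):
            q = self.queues[next(self.rr)]
            if q:
                return q.popleft()
        return None

sfq = SFQSketch()
sfq.enqueue(("10.0.0.1", "10.0.0.2", 80), "pkt-A")
sfq.enqueue(("10.0.0.3", "10.0.0.4", 443), "pkt-B")
print(sfq.dequeue() is not None)   # True
```

The full algorithm below additionally tracks queue lengths with NELs (next-equal lists) so that, under memory pressure, packets are discarded from the longest queue; that bookkeeping is omitted here.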
SFQ Algorithm
Dequeue a Packet
The following algorithm removes a packet from a stochastic fairness queue:
1. If currently switching to a new perturbation of the hash function and the active list is empty, complete the switch (start outputting from the new queue).
2. If the active list is empty, exit (this implies that the entire SFQ is empty).
3. Output a packet from the queue pointed to by the round-robin pointer.
4. Advance the round-robin pointer.
5. Delete the queue from the NEL it was in. If this NEL is now empty and the maximum-size pointer points to this NEL, decrement the maximum-size pointer (this implies that we just output from the longest queue).
6. If the queue still contains packets, add it to the next-smaller NEL; otherwise delete it from the active list.

Enqueue a Packet
The following algorithm adds a packet to a stochastic fairness queue if possible, or discards the packet if not:
1. Compute the hash function to obtain the queue index.
2. If the queue indexed by the hash function is full, or if the buffer pool is exhausted and this is the longest queue, discard the packet and exit.
3. If the buffer pool is exhausted, discard a packet from the longest queue as follows:
a. Locate the first queue on the NEL referenced by the maximum-size pointer.
b. Discard the packet at the head of this queue.
c. Remove the queue from the NEL, if this is the only queue on this NEL, and decrement the maximum-size pointer.
d. If the queue still contains packets, add it to the next-smaller NEL; otherwise delete it from the active list.
4. If the queue indexed by the hash function is empty, add it to the active list; otherwise, delete it from its NEL.
5. Add the packet to the queue.
6. Add the queue to the appropriate NEL, incrementing the maximum-size pointer if necessary.

Some of the mainstream active queue management schemes are derived from the basic RED technique. They are briefly described as under.

E.
FLOW RANDOM EARLY DROP
Flow Random Early Drop (FRED) is a modified version of RED which uses per-active-flow accounting to make different dropping decisions for connections with different bandwidth usages. The implementation of flow and traffic control has resulted in greater control over bandwidth allocation and flow control on individual channels. The main goal of FRED is to provide different dropping strategies for different kinds of flows. Two parameters are introduced in FRED, minq and maxq, which are the minimum and maximum numbers of packets that each flow is allowed to buffer. An advantage of FRED is that it makes its dropping decisions based on the flow control and allocation of the connection on the channel. D. Lin and R. Morris [24].

F. RANDOM EARLY DETECTION WITH PREFERENTIAL DROPPING
Another probabilistic approach to queuing derived from RED is RED-PD. It uses several lists containing the drop history of consecutive intervals of time. RED-PD is only active if there is not enough bandwidth to provide sufficient service to all flows. In the case of congestion, flows that use more than their fair share of the bandwidth are cut back in service to a target bandwidth by packet dropping. Ratul Mahajan and Sally Floyd [25].

G. STABILIZED RANDOM EARLY DETECTION
In contrast to normal RED, which focuses on estimating the average queue size, the most important value in SRED, introduced by Ott et al. [14], is an estimate of the number of active flows. The drop probability of a newly arriving packet is stated by the following two formulas:

p_sred(QC) = Pmax,   if B/3 <= QC < B
           = Pmax/4, if B/6 <= QC < B/3
           = 0,      if 0 <= QC < B/6

p_zap = p_sred(QC) * min(1, 1/(256 * Pest(t)^2)) * (1 + Hit(t)/Pest(t))

In the formulas, B is the total queue size, Pmax the maximum dropping/marking probability, QC the current queue length, Pest(t) a factor estimating the number of active flows, and Hit(t) either 1 or 0, depending on whether the current packet had a match in the zombie list or not. SRED focuses mainly on the active flows in the queue instead of the average queue size, which is why it is better suited to the dynamic topological changes that are frequent in the nature of MANETs.

H. BLUE
BLUE is an active queue management algorithm that manages congestion control using packet loss and link utilization history instead of queue occupancy. BLUE maintains a single probability, Pm, with which it marks (or drops) packets. This effectively allows BLUE to learn the correct rate at which it needs to send back congestion notifications or drop packets. Wu-chang Feng, Dilip D. Kandlur, Debanjan Saha and Kang G. Shin [26].

I. ADAPTIVE VIRTUAL QUEUE
The Adaptive Virtual Queue algorithm (AVQ) [KS04] was proposed by Kunniyur and Srikant to achieve stability of the queue length. The maximum size of the virtual queue is adapted as follows:

Vmax = α(γ · Qcurrent − λ)

where λ is the arrival rate at the link, α the smoothing parameter and γ the desired utilization of the link. AVQ keeps delay and packet loss small while link utilization stays high, and it tries to maximize the sum of the utility functions of the individual users.

2. LITERATURE REVIEW
In Internet routers, active queue management (AQM) is a technique that consists of dropping or ECN-marking packets before a router's queue is full. An Internet router typically maintains a set of queues, one per interface, that hold packets scheduled to go out on that interface. Historically, such queues use a drop-tail discipline: a packet is put onto the queue if the queue is shorter than its maximum size (measured in packets or bytes), and dropped otherwise. Chung & Claypool [4], Ke [9]. Active queue disciplines drop or mark packets before the queue is full.
By maintaining one or more drop/mark probabilities and dropping or marking packets even when the queue is short, AQM disciplines avoid the tendency of drop-tail queues to penalize bursty flows and to cause global synchronization, and they maintain shorter queues, which combats bufferbloat and reduces latency. Dana and Malekloo [5]. For MANETs, providing QoS for real-time traffic remains an open issue, since existing AQM schemes were designed for TCP-compatible traffic, and contention between multiple users and limited bandwidth make MANETs the worst case for QoS guarantees. Feng et al. [6], Fountanas [7]. For REM, control-theoretic studies have established stability conditions, first ignoring feedback delay and more recently for uniform one- or two-step feedback delays, but not for arbitrary uniform delay.
The authors propose a continuous-time model to generalize the local stability condition for REM in a multilink, multisource network with arbitrary uniform feedback delay. Kwon and Fahmy [11], Victor et al. [20]. Resource constraints remain ever-present in networks: the high price of bandwidth and the non-availability of state-of-the-art hardware and infrastructure limit performance. Ram and Manoj [16]. Different communication channels such as Ethernet, coaxial, serial and fiber-optic cables react differently to noise, fading, distortion, EMI and synchronization problems; a channel's inherent response to the factors causing errors is known as its channel error rate, and it is essential to know how routing protocols and AQM techniques behave over physical channels with different inherent error rates. Barlow [2].

3. REVIEW OF NETWORK PARAMETERS FOR RESEARCH
Another important aspect of analyzing network performance is carefully selecting the network parameters against which to observe the effect of AQMs. There are numerous other network parameters, such as jitter, MTU and reliability, but in this paper we discuss the following ones.

3.1 BANDWIDTH
In computer networking and computer science, bandwidth (network bandwidth, data bandwidth or digital bandwidth) is a measurement of the bit rate of available or consumed data communication resources, expressed in bits per second. Bandwidth sometimes denotes the net bit rate, channel capacity or maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network.
The reason for this usage is that, according to Hartley's law, the maximum data rate of a physical communication link is proportional to its bandwidth in hertz, which is sometimes called frequency bandwidth, spectral bandwidth, RF bandwidth, signal bandwidth or analog bandwidth. Bandwidth in bit/s may also refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e. the average rate of successful data transfer through a communication path. This sense applies to concepts and technologies such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth caps and bandwidth allocation (for example the bandwidth allocation protocol and dynamic bandwidth allocation). A bit stream's bandwidth is proportional to the average consumed signal bandwidth in hertz (the average spectral bandwidth of the analog signal representing the bit stream) during a studied time interval.

3.2 DELAY
Network delay is an important design and performance characteristic of a computer or telecommunications network. The delay of a network specifies how long it takes for a bit of data to travel across the network from one node or endpoint to another. It is typically measured in multiples or fractions of seconds, and may differ slightly depending on the location of the specific pair of communicating nodes. Usually both the maximum and average delays are measured, and the delay is divided into several parts:
Processing delay - the time routers take to process the packet header
Queuing delay - the time the packet spends in routing queues
Transmission delay - the time it takes to push the packet's bits onto the link
Propagation delay - the time for a signal to reach its destination
There is a certain minimum level of delay that will be experienced due to the time it takes to transmit a packet serially through a link. On top of this comes a more variable level of delay due to network congestion. IP network delays can range from just a few milliseconds to several hundred milliseconds. In a network based on packet switching, processing delay is the time it takes routers to process the packet header. Processing delay is a key component in network delay.
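The four components listed above add up to the per-hop nodal delay. A small illustrative calculation (the numbers are made up for the example, not taken from the paper's simulations):

```python
def nodal_delay(proc_s, queue_s, packet_bits, rate_bps, dist_m, prop_mps):
    """Per-hop delay as the sum of the four components listed above (seconds)."""
    transmission = packet_bits / rate_bps   # time to push the bits onto the link
    propagation = dist_m / prop_mps         # time for the signal to cross the link
    return proc_s + queue_s + transmission + propagation

# Example: 1500-byte packet, 10 Mbit/s link, 2 km of copper at ~2e8 m/s
d = nodal_delay(proc_s=20e-6, queue_s=0, packet_bits=1500 * 8,
                rate_bps=10e6, dist_m=2000, prop_mps=2e8)
print(round(d * 1e6))  # 1230 (microseconds), dominated by transmission delay
```

Note how, on this slow link, transmission delay (1200 µs) dwarfs propagation (10 µs); on a long fast link the proportions reverse.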
During processing of a packet, routers may check for bit-level errors that occurred during transmission as well as determine the packet's next destination. Processing delays in high-speed routers are typically on the order of microseconds or less. After this nodal processing the router directs the packet to the queue, where further delay can occur (queuing delay). In the past, processing delay was ignored as insignificant compared with the other forms of network delay; however, in some systems it can be quite large, especially where routers perform complex encryption algorithms and examine or modify packet content. Deep packet inspection, done by some networks to examine packet content for security, legal or other reasons, can cause very large delays and is therefore only done at selected inspection points. Routers performing network address translation also have higher than normal processing delay, because they need to examine and modify both incoming and outgoing packets. Transmission delay is a function of the packet's length and has nothing to do with the distance between the two nodes. It is proportional to the packet's length in bits and is given by the following formula:

DT = N / R

where DT is the transmission delay, N is the number of bits, and R is the rate of transmission (say, in bits per second). Most packet-switched networks use store-and-forward transmission at the input of the link: a switch using store-and-forward transmission receives (saves) the entire packet into a buffer and checks it for CRC errors or other problems before sending the first bit of the packet onto the outbound link. Thus store-and-forward packet switches introduce a store-and-forward delay at the input of each link along the packet's route. In computer networks, propagation delay is the amount of time it takes for the head of the signal to travel from the sender to the receiver.
It can be computed as the ratio between the link length and the propagation speed over the specific medium: the propagation delay is equal to d / s, where d is the distance and s is the wave propagation speed. In wireless communication s = c, the speed of light; in copper wire, s generally ranges from 0.59c to 0.77c. This delay is a major obstacle in the development of high-speed computers and is called the interconnect bottleneck in IC systems.

3.3 CHANNEL ERROR RATE
Different communication channels such as Ethernet cables, coaxial cables, serial cables and fiber-optic cables react differently to noise, fading, distortion, EMI and synchronization problems. A channel's inherent response to the factors causing errors is known as its channel error rate. Fiber-optic cables have the lowest error rates, whereas traditional serial cables and coaxial cables are more prone to errors. It is essential to know the behavior of routing protocols and Active Queue Management techniques for different physical channels with their inherent channel error rates. Barlow [2]. In simulation, since the environment is virtual, errors are specified through commands. The commands for specifying the channel error rate on a link between two nodes, say n2 and n3, are:

# Set error model on link n2 to n3.
set loss_module [new ErrorModel]
$loss_module set rate_ 0.2
$loss_module ranvar [new RandomVariable/Uniform]
$loss_module drop-target [new Agent/Null]
$ns lossmodel $loss_module $n2 $n3

The comparative analysis of the Active Queue Management techniques is shown in Table 1.

Table 1- Comparison of active queue management techniques with performance parameters

4. RESEARCH GAP
The impact of heterogeneous traffic in a mobile ad-hoc network leads to problems in performance, quality of service (QoS) and security. Setting quality of service and security aside, performance can be improved relatively easily by optimizing the network parameters and the queue management technique. The optimization of a network for different traffic classes is a never-ending process. Research in this area shows an improvement in performance from implementing state-of-the-art active queue management techniques such as SFQ, RED and REM. Optimizing these queue management techniques by varying different network parameters such as bandwidth, delay, channel error rate, jitter, fragment size and maximum transmission unit (MTU) can yield unexpected performance improvements. Testing these active queue management techniques across these varying network parameters becomes a challenging task that grows rapidly in terms of the simulations to be performed. The aim is therefore to establish the optimal configuration for maximum throughput, packet loss ratio and average end-to-end delay. The frame of mind behind the problem formulation is to present an optimal solution, result or recommendation for MANET users, so that they can achieve maximum network performance by using the various active queue management techniques.

5. CONCLUSION AND FUTURE SCOPE
Since its formal introduction to IP networks in 1993 as a viable complementary approach to congestion control, there has been a steady stream of research output on Active Queue Management (AQM). This survey attempts to trace the trajectory of AQM research from its inception with the first algorithm, Random Early Detection (RED), to current times.
In this survey we discuss the general attributes of AQM schemes and the design approaches taken, such as heuristic, control-theoretic and deterministic optimization. Of particular interest is the role of AQM in QoS provisioning in the wireless domain. For each AQM scheme, a brief guide to the literature is presented.

REFERENCES
[1] Kiess, Wolfgang and Martin Mauve, "A survey on real-world implementations of mobile ad-hoc networks", Ad Hoc Networks 5, 2007, pp.
[2] Barlow, D., Robbins, A.D., Rubin, P.H., Stallman, R. and Oostrum, P.V. The AWK Manual, Edition 1.0, Wiley Publishing, Inc.
[3] Bhaskar Reddy, T.B. and Ali, A. Performance Comparison of Active Queue Management Techniques, Indian Journal of Computer Science, Department of Computer Science and Technology, S.K. University, Anantapur, Vol. 4, No. 12, pp.
[4] Chung, J. and Claypool, M. Analysis of Active Queue Management, Computer Science Department, Worcester Polytechnic Institute, Worcester, USA.
[5] Dana, A. and Malekloo, A. Performance Comparison between Active and Passive Queue Management, International Journal of Computer Science, Vol. 7, Issue 3, No. 5, pp.
[6] Feng, W., Kandlur, D., Saha, D. and Shin, K. BLUE: A New Class of Active Queue Management Algorithms, In Proceedings of the 11th International Workshop on Network and Operating System Support for Digital Audio and Video, USA, pp.
[7] Fountanas, L. Active Queue Management Mechanisms for Real-Time Traffic in MANETs, Thesis, Naval Postgraduate School, Monterey, California.
[8] Guan, L., Woodward, M.E. and Awan, I.U. Performance Analysis of Active Queue Management Scheme for Bursty and Correlated Multi-Class Traffic, Performance Challenges for Efficient Next Generation Networks, Beijing University of Posts and Telecommunications Press, pp.
[9] Ke, C.H., Shieh, C.K., Hwang, W.S. and Ziviani, A. An Evaluation Framework for More Realistic Simulations of MPEG Video Transmission, Journal of Information Science and Engineering, Vol. 24, No. 2, pp.
[10] Koo, J., Ahn, S.
and Chung, J., "Performance Analysis of Active Queue Management Schemes for IP Network", Computational Science-ICCS, Lecture Notes in Computer Science, Vol. 3036.

Volume 4, Issue 2, March April 2015 Page 187

[11] Kwon, M. and Fahmy, S., "A Comparison of Load-based and Queue-based Active Queue Management Algorithms", Proceedings of SPIE ITCom, Orlando.
[12] Lin, D. and Morris, R., "Dynamics of Random Early Detection", Proceedings of ACM SIGCOMM, Cannes, France.
[13] Mišković, S., Petrović, G. and Trajković, L., "Implementation and Performance Analysis of Active Queue Management Mechanisms", Proceedings of Telecommunications in Modern Satellite, Cable and Broadcasting Services, Serbia, Vol. 2.
[14] Ott, T.J., Lakshman, T.V. and Wong, L.H., "Stabilized RED", Proceedings of IEEE INFOCOM, New York, USA.
[15] Postel, J.B., 1981, "Transmission Control Protocol", SRI Network Information Center, Menlo Park, CA.
[16] Ram, C.S.M. and Manoj, B.S., Ad Hoc Wireless Networks: Architecture and Protocols, Prentice Hall.
[17] "REM: Active Queue Management", IEEE Network, May/June 2001.
[18] Sharma, N., "Analysis of security requirements in wireless networks and mobile ad-hoc", GESJ: Computer Science and Telecommunications, Vol. 5, p. 28.
[19] NS Simulator for Beginners, Lecture Notes, Universidad de Los Andes, Mérida / Sophia-Antipolis, France.
[20] Athuraliya, S., Li, V.H., Low, S.H. and Yin, Q., "REM: Active Queue Management", IEEE Network.
[21] Wang, J., NS-2 Tutorial Exercise, Multimedia Networking Group, Department of Computer Science, UVA.
[22] Neha and Bhandari, A., "RED: A High Link Utilization and Fair Algorithm", International Journal of Computer Applications Technology and Research, Vol. 3, Issue 7, 2014.
[23] McKenney, P.E., "Stochastic Fairness Queuing", SRI International, Menlo Park, CA.
[24] Lin, D. and Morris, R., "Dynamics of Random Early Detection", Proceedings of SIGCOMM.
[25] Mahajan, R. and Floyd, S., "Controlling High Bandwidth Flows at the Congested Router", ICSI Tech Report.
[26] Feng, W., Kandlur, D.D., Saha, D. and Shin, K.G., "BLUE: A New Class of Active Queue Management Algorithms",
Technical Report CSETR, University of Michigan.

AUTHOR

Shallu Bedi received the B.Tech degree in Computer Science Engineering from Doaba Women's Institute of Engineering & Technology, Kharar, in 2013 and is now pursuing the M.Tech degree in Computer Science Engineering at Chandigarh Engineering College, Landran, Mohali. Her research interests are in networking.