The QoS of the Edge Router Based on DiffServ

Zhang Nan 1, Mao Pengxuan 1, Xiao Yang 1, Kiseon Kim 2
1 Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
2 Dept. of Information and Communications, Gwangju Institute of Science and Technology, Gwangju 500-712, Korea

Abstract: Real-time multimedia communication over the Internet poses a further challenge to Quality of Service (QoS), and increasingly high QoS is demanded. As a new framework for providing QoS, Differentiated Services (DiffServ) is undergoing a rapid standardization process at the IETF. DiffServ not only offers classified levels of service but also provides guaranteed QoS to a certain extent; to do so, it must be properly configured. In the traditional DiffServ mechanism, the edge router uses a classifier to mark the different traffic streams, and the core router then drops or forwards packets according to these markings using different packet-drop mechanisms. When multiple edge routers or other core routers transmit packets at high speed to a single core router, that core router becomes a bandwidth bottleneck. The most effective solution is for the edge router to take over the packet-drop mechanism. This paper proposes a Modified Edge Router Mechanism in which the edge router performs the marking, dropping and forwarding of hybrid traffic streams based on DiffServ within a given bandwidth, while the core router only forwards packets and never drops them. Simulations in ns2 show that the modified mechanism ensures the QoS of high-priority traffic and simplifies the core router; it is a valid method for relieving congestion at the core router.
Key words: DiffServ; edge router; mixed flows; QoS

I. INTRODUCTION
With the development of the Internet, the rapidly increasing number of users and the variety of real-time multimedia services put forward new challenges to IP QoS.
The existing Internet provides only best-effort service, which cannot satisfy the different QoS requirements of different users and different services, so how to provide differentiated QoS has become a research hotspot worldwide [1]. UDP is commonly used in the transport layer for better real-time support, because it is connectionless and has little overhead. UDP traffic therefore grows rapidly and unfairly seizes bandwidth from TCP traffic; it can occupy almost all the bandwidth of a link, seriously degrading TCP performance and inducing network congestion or even breakdown [2]. We also know that most current Internet applications, such as WWW, e-mail and FTP, are carried by TCP. When network congestion or resource competition emerges, how to avoid congestion among hybrid TCP and UDP traffic is therefore the key problem of IP QoS.

This project is supported by the National Natural Science Foundation of China (No. 60572093), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20050004016), the NSFC-KOSEF Joint Research Project of China and Korea, and the CDSN, GIST.

156 2009.7
TECHNIQUE & APPLICATION 技 术 与 应 用

The IETF proposes two kinds of IP QoS architecture: IntServ and DiffServ [3]. DiffServ possesses good scalability and is well suited to providing QoS on large-scale backbone networks [4]. In recent years many papers have brought forward router-based methods for congestion control [5]. Reference [4] proposes using DiffServ to relieve core router congestion; it suits the case in which a single edge router is connected to a single core router, but not the case in which many edge routers are connected to one core router. Reference [6] introduces probability-based priority packet marking and dropping mechanisms, which achieve good throughput, but the packet loss rate is not assured. This paper proposes a modified edge router mechanism that lets the core router drop few packets and control the bandwidth of each router connected to it. If the connected router is an edge router, it applies the modified edge router mechanism directly; if it is another core router, it assigns bandwidth to the routers upstream of it. Ultimately the edge router takes the place of the core router in controlling the packet drop rate, which simplifies the core router and avoids congestion. The edge router uses a grouped marking algorithm and a grouped dropping algorithm based on DiffServ, so that traffic of different priorities receives different services at the edge of the network. This mechanism ensures that high-priority TCP and UDP traffic obtains low packet loss and high throughput. It not only resolves congestion but also guarantees higher QoS performance for the higher-priority hybrid traffic.

The rest of the paper is organized as follows. The traditional DiffServ router mechanism is given in Section II.
In Section III the modified DiffServ router mechanism is introduced. Analysis and simulation results are given in Section IV. Finally, conclusions are drawn in Section V.

II. TRADITIONAL DIFFSERV ROUTER MECHANISM
The key to IP QoS lies in the packet scheduling and traffic control algorithms. Traditional DiffServ realizes the traffic conditioning mechanism in the edge router and the packet dropping mechanism in the core (interior) router.

A. The marker mechanism of the edge router
The edge router consists of a classifier and a conditioner, which keep per-stream state information, adjust the streams entering and leaving the DiffServ domain according to the configured rules, condition the streams to the previously agreed Traffic Conditioning Agreement (TCA), mark the DSCP value in the packet header, and aggregate packets by class. Figure 1 shows the logical structure of the edge router. IP packets are classified by the classifier, the meter measures the packet rate, and the DSCP is marked according to the result. Packets that do not conform to the traffic profile are discarded. Finally, packets are forwarded to the next router through the queue scheduling mechanism.

B. Drop mechanism of the core router
The core router uses different packet-drop mechanisms to forward or drop packets. Unlike Tail Drop, RED is an AQM scheme that detects congestion early and provides early feedback to the sources, with the aim of keeping the queue length small; it reduces bursts and avoids global synchronization. RED uses the average queue length (Avglen) as the parameter that decides whether the congestion avoidance mechanism is triggered, and it must be recomputed whenever a packet arrives [7]:

Fig. 1 The logical structure of the edge router
Avglen = (1 - w_q) * Avglen + w_q * q    (1)

where q is the instantaneous queue length and w_q is the averaging weight. The RED algorithm can then be expressed as:

p_drop = 0                                              if Avglen < min_th
p_drop = max_p (Avglen - min_th)/(max_th - min_th)      if min_th <= Avglen < max_th
p_drop = 1                                              if Avglen >= max_th    (2)

where min_th, max_th and max_p are configurable parameters.

To implement the AF PHB, DiffServ uses RED with weighted parameters (WRED). The AF PHB has four AF classes, and each packet within an AF class is marked with one of three drop precedences, DP0, DP1 and DP2, conventionally called the three colors green, yellow and red. WRED [8] computes a new average queue length over all arriving packets of these colors, and assigns the packets of each color their own threshold parameters and drop probability. The threshold schemes are: (a) interleaved; (b) fully overlapping; (c) partly overlapping.

C. The drawback of the traditional mechanism
In Figure 2, the core router indicated by the arrow is linked with many edge routers and several other core routers, so congestion appears there easily. Under the traditional mechanism the core router drops packets heavily while the edge routers and other core routers keep sending constantly; its performance degrades, the QoS of the network becomes worse and worse, the core router cannot assign bandwidth to each traffic stream directly, and the network becomes very complicated. How to simplify the network and improve its QoS is therefore the most important task.

Fig. 2 The structure of the core router

III. MODIFIED EDGE ROUTER CONTROL MECHANISM
This paper centralizes the marker mechanism and the drop mechanism at the edge router; the core router no longer drops packets, which simplifies it. This method can effectively relieve congestion at the core router.

A. Modified edge router mechanism
(1) Policy
A policy must be established between the source router and the end router; all the data flows between that source and end router can be regarded as a single stream. Different services have their own policer type, meter type and initial code point.
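The averaging and drop rule of equations (1) and (2) in Section II can be sketched in Python. This is a minimal illustration, not the paper's implementation; the parameter values (w_q, min_th, max_th, max_p) are illustrative defaults, not values from the paper.

```python
class RedQueue:
    """Minimal RED sketch following equations (1) and (2)."""

    def __init__(self, w_q=0.002, min_th=5, max_th=15, max_p=0.1):
        self.w_q = w_q        # averaging weight in eq. (1)
        self.min_th = min_th  # lower threshold (packets)
        self.max_th = max_th  # upper threshold (packets)
        self.max_p = max_p    # maximum early-drop probability
        self.avglen = 0.0     # average queue length

    def on_arrival(self, instant_len):
        # Eq. (1): exponentially weighted moving average of the queue
        # length, updated on every packet arrival.
        self.avglen = (1 - self.w_q) * self.avglen + self.w_q * instant_len
        return self.drop_probability()

    def drop_probability(self):
        # Eq. (2): no early drop below min_th, a linear ramp up to max_p
        # between the thresholds, and forced drop at or above max_th.
        if self.avglen < self.min_th:
            return 0.0
        if self.avglen >= self.max_th:
            return 1.0
        return self.max_p * (self.avglen - self.min_th) / (self.max_th - self.min_th)

q = RedQueue()
# A sustained instantaneous queue of 10 packets pulls the average into
# the linear region between min_th and max_th.
for _ in range(2000):
    p = q.on_arrival(10)
```

Because the average moves slowly (w_q = 0.002 here), a short burst barely raises Avglen, while a sustained backlog steadily increases the early-drop probability; this is what lets RED absorb bursts yet still signal persistent congestion.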
When a packet arrives at the edge router, the router first examines which service it belongs to, activates the meter to update all state variables, activates the policer to mark the packet according to the overall state of the traffic, and then queues the packet accordingly. There are five policy types:

TSW2CMPolicer (two colors): applies one CIR and two drop precedences.
TSW3CMPolicer (three colors): applies one CIR, one PIR and three drop precedences.
TokenBucketPolicer: applies one CIR, one CBS and two drop precedences.
srTCMPolicer (single rate, three colors): applies one CIR, one CBS and one EBS, selecting among three precedences.
trTCMPolicer (two rates, three colors): applies one CIR, one CBS, one PIR and one PBS, selecting among three precedences.

The first three policies are suitable for hybrid TCP and UDP traffic streams, because traffic exceeding the CIR is simply remarked to the lower precedence.

(2) Classified mark mechanism
A single link can apply several physical queues
which have several virtual queues, and each virtual queue has its own set of parameters. Which physical and virtual queue a packet is placed in is decided by its code point, and the scheduler decides which queue is served. This paper chooses Weighted Round Robin (WRR) scheduling:

set qe1tor [[$ns link $e1 $r0] queue]
set qrtoe2 [[$ns link $r0 $e2] queue]
$qe1tor set numQueues_ 3
$qe1tor setNumPrec 2
$qe1tor setSchedularMode WRR
$qe1tor addQueueWeights 0 6
$qe1tor addQueueWeights 1 3
$qe1tor addQueueWeights 2 1

Different parameter settings of addPHBEntry and addPolicerEntry control the precedence of each queue and the Committed Information Rate (CIR). Most importantly, a lower precedence is configured for the Best Effort (BE) traffic, and when a stream exceeds its CIR it is remarked to that lower precedence, so the high-priority streams remain guaranteed. When mixed TCP and UDP streams are transmitted simultaneously, lower-priority UDP streams will, by the very nature of UDP, invade and occupy the bandwidth of higher-priority TCP streams, so this interference must be prevented. This paper places all TCP streams in physical queues separate from those of the UDP streams, with the streams further separated into different virtual queues; all UDP traffic is kept in the same physical queue, or in different physical queues carrying no TCP traffic, again in different virtual queues. This can be realized by changing the policy entries.

(3) Drop mechanism
When the arrival rate of the streams exceeds the service rate of the queue scheduler, congestion appears at the edge router. The effective solution is to drop packets from the lower-precedence streams according to the WRED weight values, and we can adjust the weight values to meet the actual demand. This mechanism guarantees the performance of high-priority flows, whether they are TCP or UDP streams.
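The precedence-based dropping described above can be sketched as a WRED-style table of per-precedence thresholds, in which lower-precedence packets meet tighter thresholds and are therefore dropped first. The threshold values below are invented for illustration and are not the parameters used in the simulation.

```python
# WRED-style early drop: each drop precedence keeps its own RED thresholds.
# (min_th, max_th, max_p) per precedence; precedence 0 is the highest
# priority and is dropped last. Values are illustrative only.
import random

WRED_PARAMS = {
    0: (40, 60, 0.02),  # high priority: drop late and rarely
    1: (20, 40, 0.10),  # medium priority
    2: (5, 20, 0.20),   # low priority / best effort: drop early
}

def drop_probability(avglen, precedence):
    min_th, max_th, max_p = WRED_PARAMS[precedence]
    if avglen < min_th:
        return 0.0
    if avglen >= max_th:
        return 1.0
    return max_p * (avglen - min_th) / (max_th - min_th)

def should_drop(avglen, precedence, rng=random.random):
    # Bernoulli drop decision for one arriving packet.
    return rng() < drop_probability(avglen, precedence)

# At the same average queue length, lower precedence suffers first:
# at avglen = 30, precedence 0 is never early-dropped, precedence 1 is
# dropped with probability 0.05, and precedence 2 is always dropped.
```

Shifting the relative thresholds between precedences is the knob the text refers to when it says the weight values can be adjusted to meet the actual demand.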
By triggering the congestion avoidance mechanism, feedback signals are provided to the transport protocols and the queue length is adjusted. RIO-C, one mode of the WRED mechanism, is used. Through this drop mechanism the edge router controls congestion, and the data streams will not be dropped as they pass through the core router, provided the bandwidth between the core router and the downstream routers is unchanged.

B. Core router mechanism
The core router forwards packets according to their PHB values. If several edge routers and other core routers are linked to a core router, as in Fig. 2, the core router feeds back the assignable bandwidth to each connected edge router and core router, and each upstream core router in turn feeds back the assignable bandwidth to the edge routers connected to it. Each edge router then assigns its bandwidth to its services by priority. The total bandwidth of all the edge routers equals the bandwidth of the core router directly connected to them:

B_core = sum_i B_edge,i    (3)

When high-speed streams arrive at the core router, no packets are dropped and no congestion occurs. We thereby solve the bottleneck problem of the core router and simplify its duty, increasing the utilization of the link bandwidth and improving QoS performance.

IV. SIMULATION
This paper validates our mechanism using the performance parameters of throughput, packet loss rate, jitter and delay, based on NS2. Different priority orders of the hybrid traffics are tried in order to test the robustness of our method. Lower-priority traffic should perform worse than higher-priority traffic, and the highest-priority traffic should perform best of all. We give the simple case in which several edge routers are linked to one core router; the bandwidth of each edge router is adjusted by the information from the core router. The simple topology in Figure 3 is sufficient to demonstrate our method.

Fig. 3 The topology structure of the simulation

Nodes 0, 1 and 2 are sending terminals that transmit TCP or UDP traffic streams; Node 3 is the edge router, Node 4 is the core router, Node 5 is an edge router, and Node 6 is the receiving terminal. The link bandwidth is 10 Mb between the sending nodes and the edge router, 10 Mb between the edge router and the core router, and likewise 10 Mb between the core router and the second edge router. The link delay is 5 ms and the packet size is 1000 bytes. This paper chooses TSW2CMPolicer with two colors: one CIR and two drop precedences, with traffic exceeding the CIR remarked to the lower precedence. First, we choose the traffic priority order UDP2, TCP, UDP1. With the built-in support of NS2, we only need to set the weight values to 6, 3 and 1 respectively, with a CIR of 1000000. The simulation time is 30 s.

Figure 4 shows the throughput of each traffic. The highest-priority traffic UDP2 has the maximal throughput, while UDP1, with the lowest priority, has the least; the throughput accords with the priority order above.

Fig. 4 The throughput of UDP2, TCP and UDP1

Figure 5 shows the packet loss rate of all traffic flows. The drop rate of UDP2 is the lowest and that of TCP is the highest; broadly, the packet loss ratio runs counter to the priority order.

Fig. 5 Loss packet ratio of the hybrid traffics

Jitter and delay are also important indexes of QoS performance. Figures 6 (a), (b) and (c) show the jitter results of each flow respectively.

Fig. 6 The jitter of each traffic stream: (a) UDP2; (b) TCP; (c) UDP1
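As a rough sanity check on the throughput ordering reported in Figure 4, the long-run bandwidth shares implied by the WRR weights 6, 3 and 1 on the 10 Mb bottleneck can be computed directly. This back-of-the-envelope sketch assumes all three queues stay backlogged and ignores CIR remarking and TCP congestion control, so the measured throughputs will deviate from it.

```python
# Expected WRR bandwidth shares for the simulated 10 Mb bottleneck link.
LINK_BW = 10_000_000  # bits/s
weights = {"UDP2": 6, "TCP": 3, "UDP1": 1}  # WRR queue weights from the setup

total = sum(weights.values())
shares = {flow: LINK_BW * w / total for flow, w in weights.items()}
# UDP2 -> 6 Mb/s, TCP -> 3 Mb/s, UDP1 -> 1 Mb/s: the same ordering as
# the throughput curves in Figure 4.
```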
Figures 7 (a), (b) and (c) show the delay of each traffic. From the results we can see that the jitter of every traffic is less than 0.05 s, a good value even though the jitters do not follow the priority order. The delay results of all traffics are also within a good bound.
Fig. 7 The delay of each traffic: (a) UDP2; (b) TCP; (c) UDP1
Fig. 8 The throughput of the reordered traffics
Fig. 9 The drop packet ratio of the reordered traffics
Fig. 10 The delay of the reordered priority: (a) TCP; (b) UDP2; (c) UDP1
Fig. 11 The jitter of the reordered streams: (a) UDP2; (b) UDP1; (c) TCP

From all the performance indexes we can draw the conclusion that the highest-precedence service, UDP2, performs better than the second-precedence service, TCP, and TCP performs better than UDP1, while jitter and delay remain in a reasonable range. High-precedence services promptly receive preferential treatment, and the mechanism is suitable for multiple traffics. To test the robustness of the proposed mechanism, we alter the priority order of the mixed flows to TCP, UDP2, UDP1. Figure 8 shows the
reordered throughput of all traffics, and Figure 9 shows the reordered packet loss rate. We obtain results similar to those above: the highest-precedence service performs best and the lowest-precedence service performs worst. The jitter and delay of the reordered priorities are within appropriate bounds, except that the delay of UDP2 is slightly higher. Figure 10 shows the delay of each reordered traffic, and Figure 11 shows that the jitters of the reordered traffics are all small and within seemly bounds. The simulation results lead to the conclusion that, on the whole, higher-priority traffic obtains better performance and lower-priority traffic worse. The modified edge router not only solves the congestion problem but also delivers higher QoS to the high-priority streams of hybrid TCP and UDP traffic.

V. CONCLUSION
The proposed modified edge router mechanism complicates the edge router but simplifies the core router, which is consistent with the DiffServ philosophy. Especially in a complicated network with many core routers, the method can solve the congestion problem of all core routers. If a core router connects to edge routers, they use the modified edge router mechanism directly; if it connects to another core router, it assigns bandwidth to the upstream routers. Ultimately the edge router takes the place of the core router in controlling the packet drop rate. By this method we can control the congestion caused by high-speed streams at the core router and improve the QoS performance of high-priority traffics. Hybrid TCP and UDP flows with different precedences gain corresponding services, and the interference of UDP with TCP is overcome. The simulation results validate our modified mechanism. Combining DiffServ with MPLS to improve transmission speed and quality is left for further study.
References
[1] WU Chun-ming, JIANG Ming. SBio: a new AQM algorithm for DiffServ network. Journal on Communications, Vol. 26, No. 6, 2005, pp. 130-136.
[2] ZHOU Xi-hong. TCP/UDP traffic congestion problem. Journal of Xi'an University of Science and Technology, Vol. 26, No. 2, 2006, pp. 253-255.
[3] Hossam Hassanein, Jian Zhao. Supporting Service Differentiation Through End-to-End QoS Routing. ISCC '04, Vol. 02. IEEE Computer Society: Washington, USA, 2004, pp. 864-869.
[4] BAO Hui, ZHAO Sheng-gang, HUAN Xia. Queuing Algorithm Based on DiffServ Model. Computer Engineering, Vol. 34, No. 20, 2008, pp. 130-132.
[5] CHEN Xin-nian. Analysis and Comparison of the NS2-Based Router Algorithms: Droptail and RED. Computer Engineering, Vol. 29, No. 6, 2007, pp. 24-28.
[6] YE Xiao-guo, WANG Ru-chuan, WANG Shao-di. A Congestion Control Algorithm for Layered Multicast Based on Differentiated Services. Journal of Software, Vol. 17, No. 7, 2006, pp. 1609-1616.
[7] ZHANG Dan-qing, LI Ming-shi, XU Jian-dong. Research on DiffServ Performance Evaluation by Simulation. Journal of System Simulation, Vol. 16, No. 12, 2004, pp. 2880-2883.
[8] Bandchsl A, Tartarelli S, Orlandi F, Sato S, Kobayashi K, Pan H. Configuration of DiffServ Routers for High-Speed Links. High Performance Switching and Routing, 2002, pp. 172-177.