Improving NGN with QoS Strategies

Marcel C. Castro, Tatiana B. Pereira, Thiago L. Resende
CPqD Telecom & IT Solutions
Campinas, S.P., Brazil
E-mail: {mcastro; tatibp; tresende}@cpqd.com.br

Abstract

The convergence of voice, video and data applications is a reality in Next Generation Networks (NGNs). In a best effort scenario, this convergence forces applications with different requirements (such as file transfers, video on demand and voice over IP) to share the same infrastructure and receive equal treatment. As a result, strategies for implementing differentiated services deserve study. Starting from a best effort scenario, the present work investigates a variety of QoS solutions (DiffServ, RSVP and MPLS-TE) and the improvement resulting from each one. Delay and jitter results expose the quality provided by each strategy.

Introduction

Internet growth places ever greater demand on new services that offer information and communication in an integrated way, with mobility, security and quality. In other words, what is occurring can be called service convergence: applications with different requirements (such as file transfers, video on demand and voice over IP) sharing the same infrastructure and consequently experiencing the same treatment in a best effort network.

In contrast to best effort networks, Next Generation Networks (NGNs) intend to support a variety of communication services (data, video and voice) seamlessly, taking their differentiated requirements into account. For this reason it is very important that quality of service (QoS) mechanisms be adopted. The basic idea of QoS is to implement mechanisms that create differentiated service classes, prioritizing the more critical/important ones. Technologies/protocols like IntServ, DiffServ and MPLS-TE are known to distinguish services and improve the performance of certain service classes. But understanding the mechanisms inside each of these QoS strategies and deciding on a particular one is not an easy task. The OPNET simulator is an important tool for experimenting with QoS strategies, understanding their requirements and comparing performance results. In this context, the present work experiments with and analyzes different QoS strategies over an example IP network under defined FTP, video and voice traffic. OPNET simulation results of delay and jitter for each strategy are presented and compared.

This paper is organized as follows. First we present the Best Effort scenario: its topology, traffic load and particular router interface configurations. From this scenario three other scenarios were created in order to implement differentiated service treatments. The explanation of these three QoS solutions and their particular configuration requirements is presented in the QoS Scenarios section. The Results section is dedicated to presenting the simulation results and comparing the improvement of each strategy. The last section notes the conclusions.

Best Effort Scenario

In this section we present the best effort scenario that is the base for all simulations used to test the various QoS strategies. All simulated scenarios (including the ones presented in the QoS Scenarios section) used the same topology, application loads and node configurations (except for the particular QoS configurations related to each specific strategy, mentioned in the appropriate section).

The network topology illustrated in Figure 1 represents the lab scenario adopted for simulation. All links are full duplex. Links between routers 3600C, 3600A and 7200A are 7 Mbps. Links between routers 3600C, 3600B and 7200A are 5 Mbps. Links between servers/workstations and switches are 10BaseT. Links between switches and routers are 100BaseT.

Figure 1: Network Topology

Three kinds of applications (FTP, video and voice) were introduced according to the parameters presented in Tables 1, 2 and 3.
FTP traffic flows only from Server_FTP_1 to Client_FTP_1 and from Server_FTP_2 to Client_FTP_2 (both clients are 100% GET). Video traffic flows only from video_called_1 to video_calling_1 and from video_called_2 to video_calling_2. Finally, voice traffic flows between voice_called_1 and voice_calling_1 and between voice_called_2 and voice_calling_2 (both bidirectional).

Command Mix (Get/Total): 100%
Inter-Request Time (sec): exponential(1)
File Size (bytes): pareto(83333.33, 1.5)
Table 1: FTP Application Parameters Set

Frame Interarrival Time (sec): constant(0.1)
Frame Size (bytes): exponential(15625)
Table 2: Video Application Parameters Set
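As a sanity check on the offered load, the Table 1 distributions can be sampled directly. The sketch below assumes OPNET's pareto(a, b) takes a location a and shape b (so the mean file size is b/(b-1) x a, about 250 kB) and uses only Python's standard library:

```python
import random

def ftp_workload(n_requests, seed=0):
    """Sample an FTP workload per Table 1: exponential(1) inter-request
    times (mean 1 s) and pareto(83333.33, 1.5) file sizes, read here as
    (location, shape)."""
    rng = random.Random(seed)
    requests = []
    for _ in range(n_requests):
        gap = rng.expovariate(1.0)              # mean 1 second between GETs
        # random.paretovariate(shape) has location 1, so scale by the location
        size = 83333.33 * rng.paretovariate(1.5)
        requests.append((gap, size))
    return requests

reqs = ftp_workload(10_000)
mean_size = sum(s for _, s in reqs) / len(reqs)  # analytic mean: 1.5/0.5 * 83333.33
```

The heavy tail of the Pareto file sizes is what makes the FTP traffic bursty, which matters later when FIFO and WFQ behavior are compared.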
Silence Length (sec): exponential(0.65)
Talk Spurt Length (sec): exponential(0.352)
Encoder Scheme: GSM (silence)
Voice Frames per Packet: 1
Table 3: Voice Application Parameters Set

In the Best Effort scenario, routers implement the FIFO queuing scheme using 5 MByte buffer interfaces (Table 4).

Buffer Size (Bytes): 5 MBytes
Queuing Scheme: FIFO
Queuing Profile: FIFO Profile
Table 4: Router Interfaces Configuration

OPNET simulation results of the Best Effort and QoS scenarios are presented in the Simulation Results section.

QoS Scenarios

Traditionally, the Internet provides only best effort service. Traffic is processed as quickly as possible and there is no guarantee of timeliness or delivery. On the other hand, Internet growth and the flourishing of e-commerce have made QoS more necessary. Basically, there are two driving forces for QoS. First, companies that do business on the Web need QoS for better delivery of their content and/or to attract more customers to their services. Second, ISPs need value-added services in their networks to increase revenue. Differentiated services (DiffServ) and integrated services (IntServ) are two models for providing QoS in the Internet. The essence of DiffServ is to divide traffic into classes (according to their requirements) and give them differentiated treatment [1]. IntServ basically reserves resources (link bandwidth and buffer space) for each individual flow so that quality of service can be guaranteed. The RSVP protocol is commonly used for reservation. Traffic Engineering (TE) is used to provide the agreed upon quality of service to a customer. Traffic engineering routes traffic across a network based on the resources the traffic flow requires and the resources available in the network.
It is feasible to use the TE approach to develop a performance service model for critical applications, in order to make the best use of the network infrastructure and resources, and to use the explicit routing feature offered by MPLS to facilitate this.

DiffServ

The essence of DiffServ [2] is to divide traffic into multiple classes and treat them according to their set priority. The IPv4 header contains a one-byte type of service (TOS) field, whose meaning was previously defined in [3]. Applications can set the leftmost three bits of the TOS byte to indicate the IP precedence for discarding packets. However, the choices are limited (2^3 = 8 classes, where the two highest-priority ones are reserved for network control). DiffServ renames the TOS octet as the differentiated services codepoint (DSCP) [4] and uses it to indicate the forwarding treatment a packet should receive. Consequently, DiffServ standardizes a number of per-hop behavior (PHB) groups. Using different classification, policing, shaping and scheduling rules, several classes of services can be provided.

A DiffServ domain meets the service level agreement (SLA) between a user and the service provider. Inside this domain the internal nodes forward packets based on their codepoint values. Each codepoint value is mapped to a supported per-hop behavior (PHB). Packets are classified at the boundaries of the network; the classification decision is made based on information contained in the packet header. Expedited forwarding, assured forwarding, class-selector and default are the four types of PHBs. Expedited forwarding is the priority PHB; it can be used to build a low loss, low latency, low jitter, assured bandwidth, end-to-end service through the DiffServ domain. Assured forwarding (AF) offers different levels of forwarding assurances for IP packets. There are four AF classes (AF1x, AF2x, AF3x and AF4x). Inside each AF class it is possible to specify three drop precedence levels.
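The AF codepoints used in the DiffServ scenario follow a simple rule from RFC 2597: for class AFxy the 6-bit DSCP places the class x in the top three bits and the drop precedence y in the next two, i.e. dscp = 8x + 2y. A minimal sketch:

```python
def af_dscp(af_class, drop_precedence):
    """6-bit DSCP for assured forwarding class AFxy (RFC 2597):
    dscp = 8*x + 2*y, with x the class and y the drop precedence."""
    if not (1 <= af_class <= 4 and 1 <= drop_precedence <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return (af_class << 3) | (drop_precedence << 1)

# The three codepoints the DiffServ scenario maps applications to:
AF31, AF21, AF11 = af_dscp(3, 1), af_dscp(2, 1), af_dscp(1, 1)  # 26, 18, 10
```

Note that higher AF class numbers do not imply discard priority; within one class, higher drop precedence means the packet is discarded earlier under congestion.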
The drop precedence of a packet determines its relative importance in case of congestion. A packet with a low drop precedence value has priority of protection, while packets with higher values are discarded first. The class-selector PHB was created to be compatible with IP precedence. Its format is xxx000, where xxx maps the first three bits of the ToS field. For the default PHB value there is no special treatment accorded to the packet; it is equivalent to best effort.

The DiffServ scenario is an evolution of the Best Effort scenario. Basically, the following configurations were necessary to implement it: the voice application was associated to AF31, the video application to AF21 and the FTP application to AF11, and the router interfaces were set to implement the WFQ queuing scheme using a DSCP Based profile.

RSVP

The Resource Reservation Protocol (RSVP) [5] is a network layer signaling protocol that allows applications to reserve network resources for unicast and multicast data flows. RSVP was proposed to support the QoS requirements of real-time applications, providing a kind of control over the end-to-end packet delay. With RSVP, the application source (the sender) transmits a Path message along the routed path to the unicast or multicast destination (the receiver). The purpose of the Path message is twofold: to mark the routed path between the sender and the receiver and to collect information about the QoS viability of each router along the path. Upon receiving the Path message, the destination host or hosts can determine what services the network can support (for example, guaranteed service or controlled-load service) and then generate an RSVP reservation (Resv) message. The Resv message contains traffic and QoS objects that are processed by the traffic control component of each router as it follows the path upstream toward the sender. If the router has sufficient capacity, resources along the path back toward the receiver are reserved for that flow.
If resources are not available, RSVP error messages are generated and returned to the receiver.
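The per-hop admission logic of the Path/Resv exchange can be sketched as follows. This is a deliberate simplification, not OPNET's model: a single reservation style, a static route, and hypothetical link names taken from the lab topology:

```python
def rsvp_reserve(path_links, capacity, reserved, rate):
    """Sketch of RSVP admission along a path: the Path message has already
    recorded the route; the Resv message travels upstream and each hop
    admits the flow only if residual capacity covers the requested rate.
    `capacity` and `reserved` map link name -> bits/sec. On failure nothing
    is reserved (a ResvErr would reach the receiver)."""
    for link in path_links:
        if capacity[link] - reserved[link] < rate:
            return False                       # ResvErr toward the receiver
    for link in reversed(path_links):          # upstream: receiver -> sender
        reserved[link] += rate
    return True

# Hypothetical 5 Mbps route from the topology, video flow of Table 5's order
links = ["3600C-3600B", "3600B-7200A"]
capacity = {l: 5_000_000 for l in links}
reserved = {l: 0 for l in links}
ok = rsvp_reserve(links, capacity, reserved, 178_887 * 8)  # bytes/s -> bits/s
```

Refreshing (resending Path/Resv periodically) is what keeps this per-flow state alive, as described next.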
The per-flow reservation state maintained by each router will be deleted unless RSVP Path and Resv messages are periodically sent by the sender and receivers, respectively.

The RSVP scenario is an evolution of the Best Effort scenario. As explained before, every RSVP reservation starts from the sender, which has an application profile associated to it. If we intend that a particular application (inside this profile) make use of RSVP improvements, the following must be done: the application must be set to enable RSVP messages, and an RSVP profile must be associated to its inbound and/or outbound flows. The RSVP profiles associated to inbound and outbound flows must be created inside the IP QoS Definition object (RSVP Profiles attribute), defining rate thresholds, reservation styles, RSVP flow descriptions, sender lists and retry policies. The RSVP flows mentioned in the RSVP flow description attributes must be defined inside the IP QoS Definition object (RSVP Flow Specification attribute), characterizing the allocated bandwidth and buffer. Our RSVP scenario tried RSVP improvements for the voice and video applications, creating an RSVP profile and an RSVP flow description for each of these two applications. The voice and video RSVP profiles used rate thresholds of 100 and 10,000 bytes/sec and Wildcard and Fixed Filter reservation styles, respectively. The voice and video RSVP flows required bandwidth and buffer reservations of 2,600 bytes/sec and 500,000 bytes for voice, and 178,887 bytes/sec and 500,000 bytes for video. Furthermore, RSVP was enabled on the end-points and router interfaces.

MPLS and Traffic Engineering

Multi-Protocol Label Switching (MPLS) [6] is an emerging switching technology that provides a high performance method for forwarding packets. MPLS integrates the performance and traffic engineering capabilities of layer 2 with the scalability and flexibility of layer 3 routing. With MPLS, the layer 3 lookup is done only at the ingress edge router, where a short fixed-length label is assigned to the packet. Labels have local meaning. The Label Distribution Protocol (LDP) is used to distribute label information between the routers in the MPLS domain. The label assigned to a particular packet represents the Forwarding Equivalence Class (FEC) to which the packet is assigned. A FEC can represent a source/destination address, an input/output interface, a type of service, a protocol type, etc. Packets belonging to the same FEC get equal treatment. The edge routers of an MPLS domain are known as Label Edge Routers (LERs). The other routers inside the MPLS domain are known as Label Switch Routers (LSRs). At each LSR, packets are switched based on labels. The egress LSR removes the label before forwarding the IP packet outside the MPLS network. A Label Switch Path (LSP) is the set of LSRs that packets belonging to a certain FEC travel through in order to reach their destination. LSPs are derived from Interior Gateway Protocol (IGP) routing information and are always unidirectional: return traffic is not required to take the same LSP. An LSP may diverge from the IGP shortest path. MPLS also allows a hierarchy of labels, known as the label stack.

Traffic Engineering (TE) [7] deals with the performance of a network in supporting its customers and their QoS needs. The focus of MPLS TE is traffic measurement and control. Traffic control deals with operations to ensure the required bandwidth across the network. When we configure a dynamic TE tunnel between two LERs, the head-end LER scans through all the available LSPs to find a path with sufficient bandwidth. Whenever a link/node along the tunnel path goes down, the LER scans through the next available paths for the required bandwidth and switches the path automatically. This makes MPLS TE more efficient at handling high priority traffic.
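The head-end's scan for a path with sufficient bandwidth can be sketched as a constrained path selection. The link names below are hypothetical (mirroring the 7 Mbps and 5 Mbps routes of the topology), and real implementations run CSPF over a TE link-state database rather than a flat list:

```python
def pick_lsp(paths, available_bw, demand):
    """Sketch of dynamic TE tunnel setup: scan candidate paths (fewest
    hops first) and pick the first whose every link still offers the
    tunnel's required bandwidth. `paths` is a list of link-name lists,
    `available_bw` maps link name -> bits/sec."""
    for path in sorted(paths, key=len):
        if all(available_bw[link] >= demand for link in path):
            return path
    return None                                # no path can host the tunnel

paths = [["3600C-3600A", "3600A-7200A"],       # 7 Mbps route
         ["3600C-3600B", "3600B-7200A"]]       # 5 Mbps route
bw = {"3600C-3600A": 7_000_000, "3600A-7200A": 7_000_000,
      "3600C-3600B": 5_000_000, "3600B-7200A": 5_000_000}
best = pick_lsp(paths, bw, 6_000_000)
```

Rerunning the same scan after a failure (with the dead links removed from `available_bw`) is exactly the automatic reroute behavior described above.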
Another MPLS TE capability is that, when there are multiple TE tunnels between two LERs and the tunnel carrying the traffic goes down, the traffic is automatically mapped onto the next available tunnel through a different path. In an MPLS-TE enabled network, whenever a packet arrives from the connected IP network at the edge router, the edge router (LER) maps the packet to a unique FEC, appends an appropriate label to the packet and forwards it through a predefined LSP.

The following was necessary to enable MPLS-TE in our scenario. Routers 3600C and 7200A were enabled to act as LERs, and 3600A and 3600B as LSRs, providing MPLS at the interfaces that connect them. Three FECs were created, one for each application (FTP, video and voice), based on the destination addresses of the clients (Client_FTP_1, Client_FTP_2, video_calling_1, video_calling_2, voice_calling_1 and voice_calling_2). Three traffic trunks (Table 5) were specified based on the applications' traffic characteristics. Three TE tunnels (LSPs) were configured from router 3600C to 7200A through three dynamic LSP instances. Minimum bandwidth was the TE parameter used, and the values of 5 Mbits/sec, 3 Mbits/sec and 50 Kbits/sec were defined based on the traffic generated by the FTP, video and voice applications respectively; these values correspond to the ones used in the traffic trunk specifications. Finally, a mapping was configured between incoming interface, FEC, traffic trunk and LSP for each application at the edge router (3600C).

Table 5: Traffic Trunk Configuration. The three trunks (Trunk_FTP, Trunk_Video and Trunk_Voice) each specify a Maximum Bit Rate (bps), an Average Bit Rate Profile (bps) and a Maximum Burst Size (bits); the Out of Profile Action is Transmit Unchanged for all three.
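The edge-router mapping configured above (destination address to FEC to LSP) can be sketched as a lookup table. The FEC and LSP names here are illustrative, not the actual OPNET attribute values:

```python
# FEC -> destination sets, mirroring the scenario's three per-application FECs
FEC_TABLE = {
    "FEC_FTP":   {"Client_FTP_1", "Client_FTP_2"},
    "FEC_Video": {"video_calling_1", "video_calling_2"},
    "FEC_Voice": {"voice_calling_1", "voice_calling_2"},
}
LSP_FOR_FEC = {"FEC_FTP": "LSP_FTP", "FEC_Video": "LSP_Video",
               "FEC_Voice": "LSP_Voice"}

def classify(dest):
    """Sketch of the ingress LER step: map a packet's destination to a FEC
    and forward it on that FEC's predefined LSP (the label push is elided)."""
    for fec, dests in FEC_TABLE.items():
        if dest in dests:
            return fec, LSP_FOR_FEC[fec]
    return None, None                          # unlabeled: plain IP forwarding
```

Because each application gets its own FEC and tunnel, its traffic is pinned to an LSP with reserved minimum bandwidth instead of competing on the single IGP shortest path.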
Simulation Results

The simulation results presented in this section compare the behavior of FTP download response time, video conferencing packet end-to-end delay, VoIP packet end-to-end delay, video conferencing jitter and VoIP jitter for each of the previously exposed scenarios: Best Effort, DiffServ, RSVP and MPLS-TE.

As expected, the best effort scenario simulation showed that mixing different service types (like FTP, video and VoIP) causes TCP traffic (like FTP) to be damaged by UDP traffic (like video and VoIP) because of the TCP flow control mechanism.

Figure 2: FTP Download Response Time
Figure 3: Video Conferencing end-to-end delay
Figure 4: VoIP end-to-end delay

From Figures 2-4 we can see that the FTP download response time, video conferencing packet end-to-end delay and VoIP packet end-to-end delay are greatest using the simple Best Effort scenario, getting better as classification is done (DiffServ scenario), bandwidth and buffer reservations are guaranteed (RSVP scenario) and traffic engineering is implemented (MPLS-TE scenario). The observation that all traffic types (FTP, video and voice) have their results improved as we go through the sequence Best Effort, DiffServ, RSVP and MPLS-TE can be explained by the fact that a WFQ queuing scheme is used in the DiffServ and RSVP scenarios and traffic engineering is implemented in the MPLS-TE scenario.

Both video and FTP traffic generate bursty traffic at high loads. For this reason, in the Best Effort scenario, where the FIFO queuing scheme is used, video bursts damage FTP traffic forwarding just as FTP bursts damage video traffic forwarding. In other words, there is no protection among the applications. In the DiffServ and RSVP scenarios, using the WFQ queuing scheme [1], a queue is created for each traffic type (FTP, video and VoIP). These three queues are served through a round robin mechanism in a fair way, both prioritizing the VoIP and video traffic and sharing the network resources with the FTP traffic (the lower priority one). So, even though there was no packet loss in the Best Effort scenario, the delays of all traffic types improve using WFQ because each queue is served with protection: WFQ ensures that one queue is not damaged by another, and a service time proportional to its priority is guaranteed for each queue.

In the MPLS-TE scenario, the traffic engineering that is implemented allows links not used before to be explored, increasing the overall network throughput and consequently yielding the best delay results among the scenarios. Furthermore, the jitter results (Figures 5 and 6) show that adopting QoS strategies implies lower delay variation for both video and voice critical applications.

Figure 5: Video Conferencing Jitter
Figure 6: VoIP Jitter
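The protection WFQ gives each class can be illustrated with a weight-proportional round robin, a common approximation of WFQ. The queue names, packet sizes and byte weights below are illustrative, not the scenario's configured values:

```python
from collections import deque

def wrr_schedule(queues, weights, rounds):
    """Sketch of weight-proportional service: each round, queue `name` may
    send up to weights[name] bytes, so a bursty queue cannot starve the
    others. `queues` maps name -> deque of (packet_id, size_bytes)."""
    order = []
    for _ in range(rounds):
        for name, q in queues.items():
            budget = weights[name]
            while q and budget >= q[0][1]:
                pkt_id, size = q.popleft()
                budget -= size
                order.append(pkt_id)
    return order

queues = {"voice": deque([("v1", 100), ("v2", 100)]),
          "ftp":   deque([("f1", 1500), ("f2", 1500), ("f3", 1500)])}
weights = {"voice": 200, "ftp": 1500}          # bytes served per round
order = wrr_schedule(queues, weights, 3)
```

Even with an FTP backlog, the voice packets are served in the very first round, which is the "one queue is not damaged by another" property described above.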
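For reference, jitter here means delay variation. The standard RTP estimator (RFC 3550) computes it as a smoothed running average of transit-time differences between consecutive packets; OPNET's jitter statistic may be defined differently, so this is a generic sketch:

```python
def rfc3550_jitter(transit_times):
    """Interarrival jitter as RFC 3550 estimates it: for each consecutive
    packet pair with transit-time difference D, update J += (|D| - J) / 16.
    `transit_times` are per-packet network transit times in seconds."""
    j = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        j += (abs(cur - prev) - j) / 16.0
    return j
```

A perfectly steady delay gives zero jitter; the gain of 1/16 keeps the estimate smooth, so a single delayed packet moves it only slightly.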
No packet dropping occurred in any scenario, and consequently the application throughput rates were about the same regardless of the strategy used.

Conclusion

This work presents a sequence of QoS strategy simulations and analyses, useful for evaluating the requirements and improvements brought by each scenario. From the scenarios discussed and the simulations done, it is possible to draw conclusions about the planning requirements of each strategy's implementation and the advantages of moving beyond best effort scenarios.

References

[1] A. Tanenbaum, Computer Networks, third edition, Prentice Hall PTR, 1996.
[2] S. Blake, D. Black, M. Carlson, E. Davies: An Architecture for Differentiated Services. RFC 2475, December 1998.
[3] K. Nichols, V. Jacobson, and L. Zhang, "A Two-bit Differentiated Services Architecture for the Internet", Work in Progress, ftp://ftp.ee.lbl.gov/papers/dsarch.pdf
[4] F. Baker, D. Black, S. Blake, and K. Nichols: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers. RFC 2474, December 1998.
[5] R. Braden, L. Zhang, S. Berson, S. Herzog: Resource Reservation Protocol, RFC 2205, September 1997.
[6] E. Rosen, A. Viswanathan, R. Callon: Multiprotocol Label Switching Architecture, RFC 3031, January 2001.
[7] J. Boyle, V. Gill, A. Hannan, D. Cooper, D. Awduche, B. Christian: Applicability Statement for Traffic Engineering with MPLS, RFC 3346, August 2002.