Sources: Chapter 6 of Computer Networking: A Top-Down Approach Featuring the Internet, by Kurose and Ross




Multimedia Communication / Multimedia Systems (Module 5, Lesson 3)

Summary: Beyond Best-Effort; Motivating QoS; Quality of Service (QoS); Scheduling and Policing. Sources: Chapter 6 of Computer Networking: A Top-Down Approach Featuring the Internet, by Kurose and Ross.

Beyond Best-Effort

Several important architectural components have been proposed to take the Internet into the realm of supporting quality of service (QoS). To illustrate these proposals we will use the following network: suppose that two application packet flows originate on hosts 1 and 2 on one LAN and are destined for hosts 3 and 4 on another LAN. The routers on the two LANs, R1 and R2, are connected by a 1.5 Mbps link. Let's assume the LAN speeds are significantly higher than 1.5 Mbps, and focus on the output queue of router R1; it is here that packet delay and packet loss will occur if the aggregate sending rate of hosts 1 and 2 exceeds 1.5 Mbps.

[Figure: hosts 1 and 2 feed router R1, whose output-interface queue drains over the 1.5 Mbps link to router R2, which serves hosts 3 and 4.]

Scenario 1: A 1 Mbps Audio App and an FTP Transfer

A 1 Mbps audio application (for example, a CD-quality audio call) shares the 1.5 Mbps link between R1 and R2 with an FTP application that is transferring a file from host 2 to host 4. In the best-effort Internet, the audio and FTP packets are mixed in the output queue at R1 and (typically) transmitted in first-in-first-out (FIFO) order. A burst of packets from the FTP source could potentially fill up the queue, causing audio packets to be excessively delayed or lost due to buffer overflow at R1. How should we solve this potential problem?

Scenario 1 (contd.)

Given that the FTP application does not have time constraints, our intuition might be to give strict priority to audio packets at R1. Under a strict-priority scheduling discipline, an audio packet in the R1 output buffer would always be transmitted before any FTP packet in that buffer. The link from R1 to R2 would look like a dedicated 1.5 Mbps link to the audio traffic, with FTP traffic using the R1-to-R2 link only when no audio traffic is queued. In order for R1 to distinguish between the audio and FTP packets in its queue, each packet must be marked as belonging to one of these two "classes" of traffic; the Type-of-Service (ToS) field in IPv4 can be used for this.

Principle 1: Packet marking allows a router to distinguish among packets belonging to different classes of traffic.

Scenario 2: Scenario 1 with High-Priority FTP

Suppose now that the FTP user has purchased "platinum service" Internet access from its ISP, while the audio user has purchased cheap, low-budget service. Should the cheap user's audio packets be given priority over FTP packets in this case? Arguably not. It would seem more reasonable to distinguish packets on the basis of the sender's IP address. More generally, we see that it is necessary for a router to classify packets according to some criteria.

Principle 1 (modified): Packet classification allows a router to distinguish among packets belonging to different classes of traffic.

Scenario 3: A Misbehaving Audio App and an FTP Transfer

Suppose the router knows it should give priority to packets from the 1 Mbps audio application. Since the outgoing link speed is 1.5 Mbps, even though the FTP packets receive lower priority, they will still, on average, receive 0.5 Mbps of transmission service. But what happens if the audio application starts sending packets at a rate of 1.5 Mbps or higher (either maliciously or due to an error in the application)? In this case, the FTP packets will starve. Ideally, one wants a degree of isolation among flows, in order to protect one flow from another misbehaving flow.

Principle 2: It is desirable to provide a degree of isolation among traffic flows, so that one flow is not adversely affected by another misbehaving flow.

Scenario 3 (contd.)

With strict enforcement of the link-level allocation of bandwidth, a flow can use only the amount of bandwidth that has been allocated; it cannot utilize bandwidth that is not currently being used by the other applications. It is desirable to use bandwidth as efficiently as possible, allowing one flow to use another flow's unused bandwidth at any given point in time.

Principle 3: While providing isolation among flows, it is desirable to use resources (for example, link bandwidth and buffers) as efficiently as possible.

Scenario 4: Two 1 Mbps Audio Apps over an Overloaded 1.5 Mbps Link

The combined data rate of the two flows (2 Mbps) exceeds the link capacity. Even with classification and marking (Principle 1), isolation of flows (Principle 2), and sharing of unused bandwidth (Principle 3, though here there is no unused bandwidth to share), this is clearly a losing proposition. Implicit in the need to provide a guaranteed QoS to a flow is the need for the flow to declare its QoS requirements. This process of having a flow declare its QoS requirements, and then having the network either accept the flow (at the required QoS) or block the flow, is referred to as the call admission process.

Principle 4: A call admission process is needed in which flows declare their QoS requirements and are then either admitted to the network (at the required QoS) or blocked from the network (if the required QoS cannot be provided by the network).

Providing QoS Support: Summary

QoS for networked applications rests on packet classification, isolation, high resource utilization, scheduling and policing, and call admission.

Scheduling and Policing

We have seen the principles (policies) that govern the support of QoS; now we turn to the mechanisms that accomplish this. Scheduling mechanisms at the link include:

Scheduling Mechanism 1: First-come-first-served (FCFS/FIFO) queuing.
Scheduling Mechanism 2: Priority queuing.
Scheduling Mechanism 3: Round robin (RR) and weighted fair queuing (WFQ).
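The strict-priority discipline of Scenario 1 can be sketched in a few lines. This is a minimal illustration, not a router implementation: the queue contents and packet names are hypothetical, and a heap ordered by (priority, arrival order) stands in for the per-class output buffers.

```python
import heapq

def priority_schedule(packets):
    """Transmit queued packets strictly by class priority (0 = highest).

    packets: list of (priority, arrival_order, name) tuples already in
    the output queue. Returns packet names in transmission order.
    """
    heap = list(packets)
    heapq.heapify(heap)  # heap order: priority first, then arrival order
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

# Audio (priority 0) always leaves before FTP (priority 1),
# regardless of arrival order within the queue.
queue = [(1, 0, "ftp1"), (0, 1, "audio1"), (1, 2, "ftp2"), (0, 3, "audio2")]
print(priority_schedule(queue))  # ['audio1', 'audio2', 'ftp1', 'ftp2']
```

Note how this also illustrates the starvation problem of Scenario 3: as long as priority-0 packets keep arriving, priority-1 packets never reach the head of the heap.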

WFQ

A generalized abstraction of round-robin queuing that has found considerable use in QoS architectures is the so-called weighted fair queuing (WFQ) discipline. WFQ differs from round robin in that each class may receive a differential amount of service in any interval of time. Each class i is assigned a weight, w_i. During any interval of time in which there are class-i packets to send, class i is guaranteed to receive a fraction of service equal to w_i / (Σ_j w_j). Even in the worst case, when all classes have queued packets, class i is still guaranteed to receive that fraction of the bandwidth. Thus, for a link with transmission rate R, class i will always achieve a throughput of at least R · w_i / (Σ_j w_j).

Policing

We can identify three important policing criteria, each differing from the others according to the time scale over which the packet flow is policed:

Average rate: The network may wish to limit the long-term average rate (packets per time interval) at which a flow's packets can be sent into the network. A crucial issue here is the interval of time over which the average rate is policed. A flow whose average rate is limited to 100 packets per second is more constrained than a source limited to 6,000 packets per minute, even though both have the same average rate over a long enough interval of time.

Peak rate: While the average-rate constraint limits the amount of traffic that can be sent into the network over a relatively long period of time, a peak-rate constraint limits the maximum number of packets that can be sent over a shorter period of time.

Burst size: The network may also wish to limit the maximum number of packets (the "burst" of packets) that can be sent into the network over an extremely short interval of time; that is, the number of packets that can be sent into the network essentially instantaneously.
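The WFQ throughput guarantee R · w_i / (Σ_j w_j) is easy to compute. A minimal sketch, with a hypothetical 1.5 Mbps link and hypothetical weights 3, 1, 1 chosen purely for illustration:

```python
def wfq_min_throughput(R, weights, i):
    """Minimum guaranteed throughput for class i under WFQ on a link of
    rate R: R * w_i / sum(w_j), regardless of what other classes send."""
    return R * weights[i] / sum(weights)

R = 1.5e6            # link rate in bps (the 1.5 Mbps link from the scenarios)
weights = [3, 1, 1]  # hypothetical per-class weights
for i in range(len(weights)):
    mbps = wfq_min_throughput(R, weights, i) / 1e6
    print(f"class {i}: at least {mbps:.2f} Mbps")
```

With these weights, class 0 is guaranteed 3/5 of the link (0.90 Mbps) and classes 1 and 2 each get at least 1/5 (0.30 Mbps); any bandwidth an idle class leaves unused is redistributed in proportion to the remaining weights, which is how WFQ satisfies Principle 3.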
Policing: The Leaky Bucket

The leaky bucket mechanism is an abstraction that can be used to characterize these policing limits. A leaky bucket (r, b) consists of a bucket that can hold up to b tokens. Tokens are added to the bucket as follows: new tokens, which may potentially be added to the bucket, are generated at a rate of r tokens per second. If the bucket holds fewer than b tokens when a token is generated, the newly generated token is added to the bucket; otherwise the newly generated token is ignored, and the bucket remains full with b tokens.
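The token-accumulation rule above can be sketched as a small class. This is a simplified model, not production code: time is passed in explicitly to keep the example deterministic, and the rate and depth values below are hypothetical.

```python
class TokenBucket:
    """Leaky/token bucket (r, b): tokens accrue at r per second, capped
    at b; sending one packet consumes one token."""

    def __init__(self, r, b):
        self.r = r          # token generation rate (tokens/sec)
        self.b = b          # bucket depth (max tokens)
        self.tokens = b     # start with a full bucket
        self.last = 0.0     # time of the previous update

    def allow(self, now):
        # Accrue tokens for the elapsed time; excess tokens are ignored
        # once the bucket is full (the cap at b).
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True     # packet conforms: consume one token
        return False        # bucket empty: packet must wait for a token

bucket = TokenBucket(r=2, b=3)   # hypothetical: 2 tokens/sec, burst of 3
# A burst of 4 packets at t=0: the first 3 pass, the 4th finds no token.
print([bucket.allow(0.0) for _ in range(4)])  # [True, True, True, False]
# Half a second later, one new token has accrued.
print(bucket.allow(0.5))                      # True
```

The run shows both limits at once: the depth b bounds the instantaneous burst, while the rate r governs how quickly sending capacity is replenished afterward.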

The Leaky Bucket (contd.)

How is the leaky bucket concept used in policing? Suppose that before a packet is transmitted into the network, it must first remove a token from the token bucket. If the token bucket is empty, the packet must wait for a token. Let us now consider how this behavior polices a traffic flow. Because there can be at most b tokens in the bucket, the maximum burst size for a leaky-bucket-policed flow is b packets. Because the token generation rate is r, the maximum number of packets that can enter the network in any interval of time of length t is r·t + b. Thus, the token generation rate r serves to limit the long-term average rate at which packets can enter the network.

Leaky Bucket + WFQ => Provable Maximum Delay in Queue

Consider a router output that multiplexes n flows, each policed by a leaky bucket with parameters (b_i, r_i) for i = 1, ..., n, using WFQ scheduling. Each flow i is guaranteed to receive a share of the link bandwidth equal to at least R · w_i / (Σ_j w_j), where R is the transmission rate of the link in packets/sec.

Leaky Bucket + WFQ (contd.)

Suppose that flow 1's token bucket is initially full. A burst of b_1 packets then arrives at the leaky bucket policer for flow 1. These packets consume all of the tokens (without waiting) and then join the WFQ waiting area for flow 1. Since these b_1 packets are served at a rate of at least R · w_1 / (Σ_j w_j) packets/sec, the last of these packets will have a maximum delay, d_max, until its transmission is completed, where:

    d_max = b_1 / (R · w_1 / (Σ_j w_j))

That is, if there are b_1 packets in the queue and packets are being serviced (removed) from the queue at a rate of at least R · w_1 / (Σ_j w_j) packets per second, then the time until the last packet is transmitted cannot exceed b_1 / (R · w_1 / (Σ_j w_j)).
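The delay bound is a one-line calculation. A sketch with hypothetical numbers (a burst of 10 packets, a 1000 packet/sec link, and flow 1 holding weight 1 out of a total weight of 4):

```python
def wfq_max_delay(b_i, R, weights, i):
    """Worst-case queueing delay for a flow policed by an (r_i, b_i)
    leaky bucket and served by WFQ: d_max = b_i / (R * w_i / sum(w_j))."""
    guaranteed_rate = R * weights[i] / sum(weights)  # packets/sec
    return b_i / guaranteed_rate

# Flow 1: burst b_1 = 10 packets, link R = 1000 packets/sec, w_1 = 1 of 4.
# Guaranteed service rate = 1000 * 1/4 = 250 packets/sec.
print(wfq_max_delay(b_i=10, R=1000, weights=[1, 3], i=0))  # 0.04 (seconds)
```

Note how the bound couples the two mechanisms: the leaky bucket caps how many packets can pile up (b_1), and WFQ guarantees the minimum rate at which they drain, so their ratio bounds the delay of the last packet in the burst.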