H.323 Traffic Characterization Test Plan Draft Paul Schopis, pschopis@itecohio.org




I. Introduction

Recent attempts at providing Quality of Service in the Internet2 community have focused primarily on Expedited Forwarding as described in RFC 2598 [1] and in several draft documents at various stages of the IETF approval process [2,3]. These documents, and the network behaviors they describe, make certain assumptions about an application's network requirements. Additionally, the EF Per Hop Behavior is described in strictly relativistic terms. For example, the simple expression

    D(i) - E(i) <= S*R/MTU

is used to describe a PHB [2], where D(i) is the departure rate and E(i) is the ingress rate. Stated simply, no queuing will occur as long as the departure rate is greater than the arrival rate. This expression appears to be a variation on Little's formula [4], which states that as long as the offered load does not exceed a system's capacity to process arriving traffic, no queuing delay will occur.

Which queuing strategy is appropriate for a given application is not well understood. For example, tests have shown a linear relationship between aggregate traffic load and burstiness when a strict priority queue is used to reduce latency [5]. Attributes such as this may be detrimental to real-time video if loads are sufficiently large. H.323 video is an interesting protocol in that it was developed for a best-effort network and exhibits certain properties such as backing off the video channel transmission rate when less data is needed for accurate video updates. In order to enable a QoS strategy that truly helps H.323 in both performance and reliability, ITEC-Ohio and ITEC-NC are collaborating on H.323 traffic characterization tests as a first step in developing a QoS strategy for Megaconference 3.

II. Definitions

For purposes of clarity we adopt the following definitions from the ITU Y.1541 draft standard.
IPTD - IP packet transfer delay, commonly referred to as latency.
IPDV - IP packet delay variation.
IPLR - IP packet loss ratio.
IPER - IP packet error ratio.
IPPT - IP packet throughput.
IPOT - Octet-based IP packet throughput.

III. Bounds

Y.1541 [6] provides the following bounds for what it defines as Class 0 service. Class 0 service approximates the IETF EF definition, but with hard bounds as opposed to relative treatment. Although we are going to test for H.323 video bounds via network emulation, we are aware of the following draft recommendations.

IPTD - Upper bound on the mean IPTD: 150ms.

IPDV - Defined as IPTD max - IPTD min = IPDV in Appendix II of Y.1541 [6]. This intuitively seems sufficient, since we are mostly interested in the width of the jitter window. The defined bound is 50ms.
IPLR - Upper bound on packet loss probability: 1x10^-3, derived for layer 3 from known layer 2 (ATM) bounds.
IPER - Upper bound: 1x10^-4.

The tests will be of two types. First, we will employ network emulation tools to determine performance bounds for H.323 clients and MCUs. The tests will include cascaded MCUs under load to simulate as nearly as possible the MCU topology that will be required for Megaconference 3. We want to find the performance bounds for latency, loss and jitter; as a starting point we will use the values for the appropriate metrics from the ITU Y.1541 draft standard. Second, we wish to capture the traffic streams and be able to reproduce them at will in the lab, so that we may experiment with various queuing strategies and other QoS mechanisms. It is our hypothesis that we must understand the characteristics of H.323 video network traffic to improve our probability of success. We will also characterize the H.323 traffic at the network injection point, particularly with regard to IPDV and burstiness, for both clients and MCUs.

IV. Bounds Test Equipment Verification

A: Baseline test of software and hardware configuration.

A1. Test Objective: Verify test mechanisms for the bounds test.
A2. Entry Criteria: None.
A3. Test Setup:
1. Configure a Linux platform with Nistnet software for network performance emulation.
2. Connect each Ethernet port on the Linux platform to a Smartbits, e.g. one as a transmit and one as a receive port. See Figure 1.

Figure 1: Linux platform (ports E0 and E1) connected to the Smartbits.
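The load and packet-size pairs used in the baseline test translate directly into a packet rate and a utilization figure. A minimal sketch of that arithmetic (not part of the original plan; it uses raw data bits only, so the result will differ from the plan's quoted load percentage once Ethernet framing overhead is included):

```python
# Sketch: packet rate and raw utilization for a load/packet-size pair in test A.
def offered_load(rate_bps: float, packet_bytes: int, link_bps: float = 10e6):
    """Return (packets per second, raw link utilization) for a target data rate."""
    pps = rate_bps / (packet_bytes * 8)
    return pps, rate_bps / link_bps

pps, util = offered_load(384_000, 128)   # ~384k data bits in 128-byte packets
print(round(pps), round(util * 100, 2))  # 375 packets/s at 3.84% raw utilization
```

The same helper can generate the test-point table for the 15%, 50%, 75-80% and 90-100% load repetitions by solving for rate_bps instead.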

A4. Test Execution: Basic performance
1. Verify the Linux platform is routing correctly via the route command.
2. Send traffic at 5.38% load (~384k data bits) of the 10M Ethernet interface.
3. Set the packet size to 128 bytes.
A5. Expected Result: It is expected that all packets will be routed correctly.
A6. Repeat the test with the following values for load: 15%, 50%, 75-80%, 90-100%. Repeat the test with the following packet sizes: 1000 and 1500 bytes. Note: The tester must use some discretion in choosing value pairs for this test. The intention is to find the failure modes of the test platform.
A7. Test Execution: Nistnet reliability
1. Configure Nistnet to drop 5% of traffic.
2. Send traffic from the Smartbits into E0 and route it out E1 back to the Smartbits.
A8. Expected Result: It is expected that Nistnet will drop the specified traffic.
A9. Repeat the test using drops of 1%, 0.1%, 0.01%, etc.
A10. Repeat the testing procedure for latency using 50ms, 100ms, and 500ms delay. Repeat the testing procedure for jitter using 100ms delay and 11.5 sigma with no correlation, and work downward to find approximately 10ms steps in the jitter window. Note: If tests show a significant delta between expected and observed results, the tester should collect enough data points to try to construct a functional relationship between the metrics set and those observed.

B: Point-to-point H.323 client bounds test.

B1. Test Objective: Test performance bounds for loss, latency and jitter.
B2. Entry Criteria: Test A has been performed and there is at least a consistent functional basis for data interpretation.
B3. Test Setup: Connect two H.323 clients back to back through the network emulation device (Nistnet). See Figure 2.

Figure 2: Two H.323 clients connected through Nistnet (ports E0 and E1).
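When verifying the Nistnet drop rates in A7-A9, the observed drop count over a finite sample will fluctuate around the configured rate. A quick acceptance check (a sketch under an assumed independent-loss binomial model, not part of the plan) is to compare the count against the binomial mean plus or minus a few standard deviations:

```python
import math

def drop_bounds(n_packets: int, drop_rate: float, k_sigma: float = 3.0):
    """Expected drop count and a +/- k-sigma acceptance window (binomial model)."""
    mean = n_packets * drop_rate
    sigma = math.sqrt(n_packets * drop_rate * (1 - drop_rate))
    return mean, max(0.0, mean - k_sigma * sigma), mean + k_sigma * sigma

# 100,000 packets at a configured 0.1% drop rate:
mean, lo, hi = drop_bounds(100_000, 0.001)
print(round(mean), round(lo, 1), round(hi, 1))  # 100 expected; roughly 70-130 accepted
```

At the smallest configured rates (0.01%) the window is wide relative to the mean, so long capture runs are needed before a delta between expected and observed results is significant.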

B4. Test Execution:
1. Configure the Nistnet platform to increase latency by 10ms.
2. Establish a call between the clients.
3. Observe call quality.
4. Repeat calls, increasing latency by 10ms each iteration, until call quality becomes unusable.
B5. Expected Results: Call quality will start to become unusable in the 200-400ms range.
B6. Test Execution:
1. Configure the Nistnet platform to increase drop starting at 0.01% up to 2.25%. The following data points for drop will be tested: 0.01%, 0.1%, 0.25%, 0.5%, 0.75%, 1%, 1.25%, 1.5%, 1.75%, 2.0%, 2.25%.
2. Establish a call between the clients.
3. Observe call quality.
B7. Expected Results: Packet loss will affect quality at ~1x10^-4.
B8. Test Execution:
1. Configure the Nistnet platform to increase jitter by 10ms.
2. Establish a call between the clients.
3. Observe call quality.
4. Repeat calls, increasing jitter by 10ms each iteration, until call quality becomes unusable.
B9. Expected Results: Jitter will affect quality when it is greater than 50ms.

C: Point-to-point H.323 client to MCU bounds test.

Figure 3: H.323 client connected to an H.323 MCU through Nistnet (ports E0 and E1).

C1. Test Objective: Test performance bounds for latency between an MCU and a client.
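The ~1x10^-4 loss threshold makes intuitive sense once packet loss is translated into frame damage: an H.323 video frame typically spans several RTP packets, and losing any one of them can corrupt the whole frame. A sketch of that relationship, assuming independent losses and an illustrative packets-per-frame count (both assumptions, not figures from this plan):

```python
def frame_damage_prob(pkt_loss: float, pkts_per_frame: int) -> float:
    """P(at least one packet of a frame is lost), assuming independent losses."""
    return 1 - (1 - pkt_loss) ** pkts_per_frame

# Assume ~10 packets per video frame (illustrative only):
for p in (1e-4, 1e-3, 1e-2):
    print(p, frame_damage_prob(p, 10))
```

Under these assumptions, a 1% packet loss rate damages nearly one frame in ten, while 1x10^-4 damages roughly one frame in a thousand, which is consistent with quality degrading well before the largest drop values in the test sweep.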

C2. Entry Criteria: Tests A and B have been successfully completed.
C3. Test Setup: Connect an H.323 client back to back to an MCU through the network emulation device (Nistnet).
C4. Test Execution:
1. Configure the Nistnet platform to increase latency by 10ms.
2. Establish a call between the client and the MCU.
3. Observe call quality.
4. Repeat calls, increasing latency by 10ms each iteration, until call quality becomes unusable.
C5. Expected Results: Call quality will start to become unusable in the 200-400ms range.
C6. Test Objective: Test performance bounds for loss between an MCU and a client. The entry criteria are that tests A, B and C1 have been completed.
C7. Test Setup: Connect an H.323 client back to back to an MCU through the network emulation device (Nistnet).
C8. Test Execution:
1. Configure the Nistnet platform to increase drop starting at 0.01% up to 2.25%. The following data points for drop will be tested: 0.01%, 0.1%, 0.5%, 0.75%, 1%, 1.25%, 1.5%, 1.75%, 2.0%, 2.25%.
2. Establish a call between the client and the MCU.
3. Observe call quality.
C9. Expected Results: Packet loss will affect quality at ~1x10^-4.
C10. Test Objective: Test performance bounds for jitter between an MCU and a client.
C11. Test Setup: Connect an H.323 client back to back to an MCU through the network emulation device (Nistnet).
C12. Test Execution:
1. Configure the Nistnet platform to increase jitter by 10ms.
2. Establish a call between the client and the MCU.
3. Observe call quality.
4. Repeat calls, increasing jitter by 10ms each iteration, until call quality becomes unusable.

C13. Expected Results: Jitter will affect quality when it is greater than 50ms.

D: Switch Propagation Delay Test

D1. Test Objective: Determine the propagation delay introduced by the switch that will be used for cascading MCUs in subsequent tests.
D2. Entry Criteria: Test A has been completed and the Nistnet emulator is functioning acceptably.
D3. Test Setup: Plug two 10/100 ports of a Cisco 2924XL into the Smartbits. See Figure 4.

Figure 4: Cisco 2924XL connected to the Smartbits.

D4. Test Execution:
1. Configure the ports on the 2924XL and the ports on the Smartbits to match the transmit and receive speeds and duplex settings of the MCUs that will be used in test E.
2. Pass traffic at approximately 384Kbps from one Smartbits port to the other.
3. Note the delay in the traffic.
D5. Expected Results: It is expected that the total propagation delay will be 10ms or less.

E: One-Way Network Failure Via MCU Test

E1. Objective: Examine the effects of latency when an MCU is in the path and the network problem is one-way.
E2. Entry Criteria: Tests A through D have been completed.
E3. Test Setup:
1. Plug one client directly into the network emulation device.
2. Plug the Cisco 2924XL into the second port of the Nistnet device. Note: each Nistnet port is a different subnet.

3. Plug the MCU and the other client into the Cisco switch.
4. Configure a common conference on the MCU and register it with the onboard GateKeeper.
5. Register both clients with the GateKeeper.

Figure 5: NIC client on VLAN 1 connected through the Nistnet emulator to VLAN 2, which holds the MCU and the appliance client.

E4. Test Execution:
1. Configure the Nistnet platform to increase latency by 10ms.
2. Establish a call between the clients through the MCU at 768Kbps.
3. Observe call quality.
4. Repeat calls, increasing latency by 10ms each iteration, until call quality becomes unusable.
E5. Expected Results: Call quality will start to become unusable in the 200-400ms range.
E6. Objective: Examine the effects of loss when an MCU is in the path and the network problem is one-way. Additionally, since the errors are introduced between the appliance client and the MCU, while the NIC client is on the same subnet as the MCU, we wish to evaluate the MCU's role as a replay buffer.
E7. Test Setup: Same as E3.
E8. Expected Results: Packet loss will affect quality at ~1x10^-4.
E9. Test Execution:

1. Configure the Nistnet platform to increase drop starting at 0.01% up to 2.25%. The following data points for drop will be tested: 0.01%, 0.1%, 0.5%, 0.75%, 1%, 1.25%, 1.5%, 1.75%, 2.0%, 2.25%.
2. Establish a call between the clients via the MCU at 768Kbps.
3. Observe call quality.
E10. Objective: Examine the effects of jitter when an MCU is in the path and the network problem is one-way.
E11. Test Setup: Same as E3.
E12. Test Execution:
1. Configure the Nistnet platform to increase jitter by 10ms.
2. Establish a call between the clients via the MCU at 768Kbps.
3. Observe call quality.
4. Repeat calls, increasing jitter by 10ms each iteration, until call quality becomes unusable.
E13. Expected Results: Jitter will affect quality when it is greater than 50ms.

F: Two-Way Network Failure Via MCU Test

F1. Objective: Examine the effects of latency when the clients are connected via an MCU and the network problem is two-way.
F2. Entry Criteria: Tests A through E have been completed.
F3. Test Setup: Same as test E3.
F4. Test Execution:
1. Configure the Nistnet platform to increase latency by 10ms.
2. Establish a call between the clients through the MCU at 768Kbps.
3. Observe call quality.
4. Repeat calls, increasing latency by 10ms each iteration, until call quality becomes unusable.
F5. Expected Results: Call quality will start to become unusable in the 200-400ms range.
F6. Objective: Examine the effects of loss when an MCU is in the path and the network problem is two-way.

F7. Test Setup: Same as test E3.
F8. Test Execution:
1. Configure the Nistnet platform to increase drop starting at 0.01% up to 2.25%. The following data points for drop will be tested: 0.01%, 0.1%, 0.5%, 0.75%, 1%, 1.5%, 1.75%, 2.0%, 2.25%.
2. Establish a call between the clients via the MCU at 768Kbps.
3. Observe call quality.
F9. Expected Results: Packet loss will affect quality at ~1x10^-4.
F10. Objective: Examine the effects of jitter when an MCU is in the path and the network problem is two-way.
F11. Test Setup: Same as test E3.
F12. Test Execution:
1. Configure the Nistnet platform to increase jitter by 10ms.
2. Establish a call between the clients via the MCU at 768Kbps.
3. Observe call quality.
4. Repeat calls, increasing jitter by 10ms each iteration, until call quality becomes unusable.
F13. Expected Results: Jitter will affect quality when it is greater than 50ms.

G: One-Way Cascaded MCU Test

G1. Objective: Examine the effects of latency when cascaded MCUs are in the path and the network problem is one-way.
G2. Test Setup:
1. Plug each client directly into the Cisco 2924XL.
2. Create two separate VLANs so that each client is on a separate subnet.
3. Plug each VLAN of the Cisco 2924XL into the ports of the Nistnet device. Note: each Nistnet port is a different subnet.
4. Plug the MCUs into the Cisco 2924XL so that each client and MCU pair are on different subnets and must be routed via the Nistnet router.
5. Configure a common conference on one MCU and register it with the onboard GateKeeper. Register the other MCU with the same GateKeeper.
6. Register both clients with the GateKeeper.
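In the cascaded topology, each MCU hop and the switch consume part of the one-way latency budget before any emulated network delay is added. A sketch of that budget arithmetic; the 150ms budget and the <=10ms switch delay come from the bounds and test D above, but the per-MCU processing delay is purely an assumed illustrative figure:

```python
# Illustrative delay-budget sketch for the cascaded topology. The per-MCU
# delay below is an assumption for illustration, not a measured value.
def remaining_network_budget(budget_ms: float, mcu_delay_ms: float,
                             n_mcus: int, switch_delay_ms: float) -> float:
    """One-way delay left for the emulated network path."""
    return budget_ms - n_mcus * mcu_delay_ms - switch_delay_ms

# Y.1541 Class 0 mean-IPTD bound of 150ms, two cascaded MCUs at an assumed
# 30ms each, and the <=10ms switch delay expected in test D:
print(remaining_network_budget(150, 30, 2, 10))  # prints 80 (ms)
```

This kind of accounting helps interpret why the cascaded tests may hit the unusable-quality threshold at lower emulated latencies than the point-to-point tests.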

Figure 6: Each client and its MCU on a separate VLAN of the switch, routed through the Nistnet emulator.

G3. Test Execution:
1. Configure the Nistnet platform to increase latency by 10ms.
2. Establish a call between the clients through the MCUs at 768Kbps.
3. Observe call quality.
4. Repeat calls, increasing latency by 10ms each iteration, until call quality becomes unusable.
G4. Expected Results: Call quality will start to become unusable in the 200-400ms range.
G5. Objective: Examine the effects of loss when cascaded MCUs are in the path and the network problem is one-way. Additionally, since the errors are introduced between the appliance client and the MCU, while the NIC client is on the same subnet as its MCU, we wish to evaluate the MCU's role as a replay buffer.
G6. Test Setup: Same as G2.
G7. Test Execution:
1. Configure the Nistnet platform to increase drop starting at 0.01% up to 2.25%. The following data points for drop will be tested: 0.01%, 0.1%, 0.5%, 0.75%, 1%, 1.25%, 1.5%, 1.75%, 2.0%, 2.25%.
2. Establish a call between the clients via the MCUs at 768Kbps.
3. Observe call quality.
G8. Expected Results: Call quality will deteriorate at ~1x10^-4.
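The jitter tests step delay variation in 10ms increments; at the receiver, a dejitter (playout) buffer converts that variation into either added latency or late-packet loss, which is why quality is expected to degrade once the jitter window exceeds the 50ms IPDV bound. A sketch under an assumed fixed-window playout model (the delay samples are synthetic, not measurements):

```python
# Sketch: given per-packet one-way delays (ms), set the playout deadline at
# the minimum delay plus a fixed window, and count packets arriving too late.
def late_fraction(delays_ms, window_ms: float) -> float:
    deadline = min(delays_ms) + window_ms  # playout deadline for every packet
    late = sum(1 for d in delays_ms if d > deadline)
    return late / len(delays_ms)

delays = [40, 45, 52, 48, 95, 44, 60, 41, 43, 47]  # synthetic sample (ms)
print(late_fraction(delays, 50))  # the 95ms packet misses a 90ms deadline
```

Packets that miss the deadline are effectively lost, so large IPDV shows up as loss-like quality damage even when no packets are actually dropped.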

G9. Objective: Examine the effects of jitter when cascaded MCUs are in the path and the network problem is one-way.
G10. Test Setup: Same as G2.
G11. Test Execution:
1. Configure the Nistnet platform to increase jitter by 10ms.
2. Establish a call between the clients via the MCUs at 768Kbps.
3. Observe call quality.
4. Repeat calls, increasing jitter by 10ms each iteration, until call quality becomes unusable.
G12. Expected Results: Jitter will affect quality when it is greater than 50ms.

H: Two-Way Cascaded MCU Test

H1. Objective: Examine the effects of latency when cascaded MCUs are in the path and the network problem is two-way.
H2. Test Setup:
1. Plug each client directly into the Cisco 2924XL.
2. Create two separate VLANs so that each client is on a separate subnet.
3. Plug each VLAN of the Cisco 2924XL into the ports of the Nistnet device. Note: each Nistnet port is a different subnet.
4. Plug the MCUs into the Cisco 2924XL so that each client and MCU pair are on different subnets and must be routed via the Nistnet router.
5. Configure a common conference on one MCU and register it with the onboard GateKeeper. Register the other MCU with the same GateKeeper.
6. Register both clients with the GateKeeper.
H3. Test Execution:
1. Configure the Nistnet platform to increase latency by 10ms.
2. Establish a call between the clients through the MCUs at 768Kbps.
3. Observe call quality.
4. Repeat calls, increasing latency by 10ms each iteration, until call quality becomes unusable.
H4. Expected Results: Call quality will start to become unusable in the 200-400ms range.
H5. Objective: Examine the effects of loss when cascaded MCUs are in the path and the network problem is two-way. Additionally, since the errors are introduced between the appliance client and the MCU, while the NIC client is on the same subnet as its MCU, we wish to evaluate the MCU's role as a replay buffer.
H6. Test Setup: Same as H2.
H7. Test Execution:
1. Configure the Nistnet platform to increase drop starting at 0.01% up to 2.25%. The following data points for drop will be tested: 0.01%, 0.1%, 0.5%, 0.75%, 1%, 1.25%, 1.5%, 1.75%, 2.0%, 2.25%.
2. Establish a call between the clients via the MCUs at 768Kbps.
3. Observe call quality.
H8. Expected Results: Call quality will deteriorate at ~1x10^-4.
H9. Objective: Examine the effects of jitter when cascaded MCUs are in the path and the network problem is two-way.
H10. Test Setup: Same as H2.
H11. Test Execution:
1. Configure the Nistnet platform to increase jitter by 10ms.
2. Establish a call between the clients via the MCUs at 768Kbps.
3. Observe call quality.
4. Repeat calls, increasing jitter by 10ms each iteration, until call quality becomes unusable.
H12. Expected Results: Jitter will affect quality when it is greater than 50ms.

I: Traffic Characterization Test

I1. Test Objective: Characterize traffic in terms of IPDV and burstiness.
I2. Entry Criteria: All previous tests have been successfully completed.
I3. Test Setup:
1. Connect two H.323 clients, one MCU and the Smartbits to the Cisco switch.
I4. Test Execution:
1. Set the switch to snoop one H.323 client and pass that traffic to the Smartbits.

2. Set the Smartbits to trigger on the source address of the snooped client plus the additional criteria in item 4.
3. Connect both clients via a call.
4. Repeat the call three times, each time changing the secondary trigger to match:
a. the control channel (TCP stream);
b. the video channel (port number);
c. the audio channel (port number).
5. Observe IPDV and burstiness.
6. Repeat all steps for the MCU.
I5. Expected Results: No expectation, except that the traffic should be no worse than the bounds established by the other tests.

Bibliography

[1] V. Jacobson, K. Nichols, and K. Poduri. RFC 2598: An Expedited Forwarding PHB. June 1999.
[2] G. Armitage, A. Casati, J. Crowcroft, J. Halpern, B. Kumar, J. Schnizlein. <draft-ietf-diffserv-efresolve-00.txt>: A revised expression of the Expedited Forwarding PHB. November 2000.
[3] V. Jacobson, K. Nichols, and K. Poduri. <draft-ietf-diffserv-pdb-vw-00.txt>: The Virtual Wire Per-Domain Behavior. July 2000.
[4] Donald Gross and Carl M. Harris. Fundamentals of Queueing Theory, Third Edition. John Wiley & Sons, New York, 1998. pp. 10-13.
[5] Tiziana Ferrari. End-to-end performance analysis with traffic aggregation. Computer Networks, Volume 34, Number 6, December 2000. pp. 905-914. Elsevier, Amsterdam.
[6] ITU-T Study Group 13. Revised draft Recommendation Y.1541, "Internet protocol communication service - IP Performance and Availability Objectives and Allocations". November 2000.