TCP over ATM Performance in NASA NREN and CTI




TCP over ATM Performance in NASA NREN and CTI
1996 ARPA Workshop on Wide Area ATM Performance
Agatino Sciuto, Computer Sciences Corporation
email: agatino.sciuto@gsfc.nasa.gov
NASCOM 7/22/96

Background
- A group at NASCOM (NASA Communications) has been tasked to show that an ATM network can support NASCOM's voice, video and data requirements.
- The NILE (NASCOM Internetworking Laboratory Environment) is where new ideas and technologies are tested before NASCOM offers them to its customers.

Purpose
Find solutions that improve TCP performance for IP data in a high-speed ATM network with large delay (RTT > 30 ms). The decision was made not to rate-limit the workstations; as a result, any traffic generated by TCP was admitted into the network as dictated by the TCP parameters.

Tests
Extended testing was done to optimize TCP parameters:
- Different ATM services (ABR, VBR and UBR)
- IP over ATM using standards RFC 1483 and RFC 1577, through EBnet routers
- TCP performance over the CTI and NREN networks from GSFC to JPL
- Single and multiple TCP streams

NASCOM TCP Performance Summary
The TCP performance test effort can be divided into:

In-lab results
- TCP parameter optimization
- Single TCP stream: using a combination of RFC 1323 and parameter optimization, moved IP data over ATM at 91% utilization (33.46 Mbps) on a DS-3 link. Using RFC 1483 and VBR service, obtained 29.67 Mbps over a simulated 60 ms RTT DS-3 link.
- Multiple TCP streams using UBR: obtained 34 Mbps on each of three receivers concurrently, for a sender throughput of 102 Mbps and 73% utilization on an ATM link.

CTI results (GSFC to JPL)
- Using ABR (between ATM switches), obtained 90% utilization (17.34 Mbps) on a 20 Mbps contract.

NREN
- Tested performance of a single TCP stream using UBR EPD service in an ATM cloud.
- Compared performance of an FDDI-ATM configuration to an ATM-only configuration.
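The 91% figure above can be sanity-checked against the ATM payload capacity of a DS-3. A quick back-of-the-envelope calculation (the PLCP framing rate and per-cell payload below are standard ATM values we assume here; they are not stated on the slides):

```python
# DS-3 PLCP framing carries 12 ATM cells every 125 us -> 96,000 cells/s,
# and each 53-byte cell carries 48 bytes of payload.
cells_per_sec = 12 * 8000
payload_mbps = cells_per_sec * 48 * 8 / 1e6   # cell-payload capacity of a DS-3
utilization = 33.46 / payload_mbps            # the slide's measured 33.46 Mbps
print(round(payload_mbps, 2), round(utilization, 2))  # 36.86 0.91
```

The measured 33.46 Mbps is about 91% of the roughly 36.9 Mbps of cell payload a DS-3 can carry, which is consistent with the utilization quoted on the slide.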

Hardware and Software Used for the TCP Performance Tests
Single-CPU Sparc 20 workstations running:
- SOLARIS 2.3
- SUNCONSULT TCP-LFN version 2.0 (Sun's implementation of RFC 1323)
Installed network adapters:
- Network Peripherals FDDI adapter running the latest driver software; max. TCP speed reached with a large window: 67.93 Mbps
- FORE SBA-200E ATM NIC running ForeThought 4.0; max. TCP speed reached with a large window: 97.3 Mbps
Cisco 7010 routers were used with an ATM Interface Processor (AIP) and an FDDI card; the software used was IOS 11.0.8.
FORE ASX-200BX ATM switches supporting UBR EPD, also running ForeThought 4.0 software.
StrataCom BPX for the CTI tests; it offered a proprietary implementation of ABR.

Baseline In-Lab Tests
TCP parameter optimization
Verified the formula: Window size (bytes) = BW (bytes/sec) * RTT (sec). This formula is stated in RFC 1323.
Workstation and network adapter (FDDI and ATM) baseline tests.
Proof of concept for the WAN tests: a channel simulator added 60 ms of delay to the RTT. Results from this controlled environment are used as the baseline for the WAN tests.
- Single TCP stream: measures the throughput of one TCP connection between two similarly configured workstations.
- Multiple (three) TCP streams: the sender sends three different TCP streams to three receivers, using three VCs within a VP. This test has not yet been tried on NREN.
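The window-sizing rule above can be sketched numerically (the ~36.8 Mbps DS-3 payload figure used below is our assumption for illustration, not a value from the slides):

```python
def window_bytes(bandwidth_bps, rtt_s):
    # Window size (bytes) = BW (bytes/sec) * RTT (sec), per RFC 1323 guidance
    return round(bandwidth_bps / 8 * rtt_s)

# A ~36.8 Mbps DS-3 payload path with the simulated 60 ms RTT:
print(window_bytes(36.8e6, 0.060))  # 276000 bytes
```

A bandwidth-delay product of roughly 276 KB is well past TCP's pre-RFC-1323 limit of 64 KB, which is why the window-scaling extension was needed; a 262144-byte window on the same path caps throughput at about 262144 * 8 / 0.060 ≈ 35 Mbps, consistent with the 33.46 Mbps measured.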

In-Lab Multiple TCP Stream Test
[Test configuration diagram: a source workstation on FDDI behind a router, an ATM channel simulator and ARP server in the middle, and a router, FDDI hub, and three receiver workstations (Rec 1, Rec 2, Rec 3) on the far side.]

In-Lab Multiple TCP Stream Test (cont.)
Results

TCP window  APDU     Physical   RTT       Throughput in Mbps
(bytes)     (bytes)  path       (msec)    R1     R2     R3     Aggregate
262144      4096     FDDI-ATM   60        25.13  22.82  22.07  70.00
524250      4096     FDDI-ATM   60        31.05  24.28   7.54  62.87
262144      4096     ATM        60        20.70  20.38  19.07  60.15
262144      8192     ATM        60        22.63  21.03  19.93  63.59
524250      4096     ATM        60        23.63  23.42  15.79  62.84
524250      8192     ATM        60        31.79  31.41  31.05  94.25

Comments
The data suggests that bandwidth allocation among the three TCP connections is fair under optimum conditions (no fragmentation and a window size appropriate to the bandwidth obtained). The data also shows that the ATM adapter can take advantage of the larger window while the FDDI adapter cannot. This test will be performed over NREN soon.
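The fairness observation can be quantified with Jain's fairness index, computed over the three per-receiver rates (our illustration; the slides do not use this metric):

```python
def jain_fairness(rates):
    # Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 means perfectly fair
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

best = [31.79, 31.41, 31.05]   # 524250-byte window, 8192-byte APDU, ATM path
worst = [31.05, 24.28, 7.54]   # 524250-byte window over the FDDI-ATM path
print(round(jain_fairness(best), 3), round(jain_fairness(worst), 3))  # 1.0 0.818
```

The best ATM row is essentially perfectly fair, while the large-window FDDI-ATM row, where one receiver collapsed to 7.54 Mbps, scores noticeably lower.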

CTI Test
[Test configuration: workstation -> DS-3 -> ATM cloud or channel simulator -> DS-3 -> workstation]
CTI results (GSFC to JPL): using ABR (between ATM switches), obtained 90% utilization (17.34 Mbps) on a 20 Mbps contract.
Sample results for DS-3 with 60 ms RTT delay provided by a channel simulator:

Service type   Throughput in Mbps
ABR            28.71
VBR            29.67

CTI Test Comments
This test was performed using one PVC on a VBR contract with a PCR of 20 Mbps and an SCR of 10 Mbps. The ABR PVC had a minimum cell rate of 1 Mbps and a PCR of 20 Mbps. The throughput obtained varied with time, and on some occasions it sat at about the minimum cell rate. This was a good lesson for us.
On the VBR test simulated in the lab, the most important lesson was the inability of TCP to slow down enough to avoid cell drops at the ATM switch. To achieve good performance, the Bandwidth * Delay = Window size formula needs to be matched exactly to the current network conditions. This makes VBR service undesirable for data transfer, because any change in either RTT or available bandwidth will reduce TCP throughput considerably.
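The sensitivity to changed network conditions can be illustrated with the simple one-window-per-RTT throughput ceiling (a rough model we assume for illustration; it ignores slow start, loss recovery, and cell-level effects):

```python
def tcp_ceiling_mbps(window_bytes, rtt_s, link_mbps):
    # Steady state: at most one window in flight per RTT, capped by the link rate
    return min(window_bytes * 8 / rtt_s / 1e6, link_mbps)

# A window tuned for a 60 ms path, then the same window after RTT grows to 90 ms:
print(round(tcp_ceiling_mbps(262144, 0.060, 36.8), 2))  # 34.95
print(round(tcp_ceiling_mbps(262144, 0.090, 36.8), 2))  # 23.3
```

A 50% increase in RTT cuts the achievable rate by a third; conversely, a window larger than the bandwidth-delay product overruns the contract and provokes the cell drops described above.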

NREN TCP Test
Purpose: assuming an FDDI LAN and an ATM WAN, find the bottleneck and show what actions can be taken to improve its performance. Compare FDDI-ATM to ATM-only performance.
Test: single TCP streams using FDDI-ATM and ATM-only paths.
Procedure: using ttcp, send 350 Mbytes of data from one workstation to another. In every case the sending workstation controls the receiver's parameter settings using UNIX scripts. A single script changes the TCP parameters and reports TCP statistics such as retransmissions and throughput results.
Comments: the results show the inability of FDDI to take advantage of the increased window size, while ATM can.
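The window tuning that the scripts applied per run can be sketched with a modern socket API (a Python sketch, not the original UNIX scripts; buffer requests above 64 KB only take full effect where RFC 1323 window scaling is available, and the kernel may clamp the requested size):

```python
import socket

def make_tuned_socket(window):
    # Request large send/receive buffers, analogous to the ttcp window settings;
    # the kernel may adjust the granted size, so read it back to verify.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, window)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, window)
    return s

s = make_tuned_socket(262144)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF) > 0)  # True
s.close()
```

Reading the value back matters because a request the OS cannot honor silently leaves the connection window-limited, exactly the failure mode the FDDI-ATM results exhibit.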

Single TCP Stream Test
Test configuration: a loopback VC was created at JPL. The ATM switches were configured to send the bulk data through the NREN ATM service to JPL while the acks were looped within the lab.
[Configuration diagram: workstation -> FDDI -> router -> NREN ATM service to JPL -> router -> FDDI -> workstation]
Sample results:

TCP window size  APDU size  Physical path  RTT      Throughput
(bytes)          (bytes)    ws to ws       (msec)   (Mbps)
262144           4096       FDDI-ATM       66       30.34
262144           8192       FDDI-ATM       66       30.69
524250           4096       FDDI-ATM       66       12.84
524250           8192       FDDI-ATM       66       20.08
262144           4096       ATM            66       31.73
262144           8192       ATM            66       31.73
524250           4096       ATM            66       61.88
524250           8192       ATM            66       61.88

Conclusion
When using VBR, the window size has to match the bandwidth-delay product; increasing the initial and maximum burst sizes does not improve TCP performance if the network conditions change.
ABR adjusts more easily to network changes; however, its performance can be reduced to its minimum cell rate at times, and it is not widely implemented yet.
A large per-VC buffer on the ATM switch plays a major role in improving TCP performance.
UBR with EPD fairly allocated bandwidth among three TCP streams.
The FDDI-ATM tests showed the inability of that configuration to take advantage of the increased window size.

Future Tests
We are working to increase the number of NASA sites involved in these tests. Multiple TCP stream tests are imminent on NREN between two sites. Tests of applications that use TCP (for example, ftp) are also under consideration. Proposals of cooperation with other testing groups are welcomed by NASCOM.