TCP AND ATM IN WIDE AREA NETWORKS
Benjamin J. Ewy, Joseph B. Evans, Victor S. Frost, Gary J. Minden
Telecommunications & Information Sciences Laboratory
Department of Electrical Engineering & Computer Science
University of Kansas, Lawrence, KS 66045-2228
evans@eecs.ukans.edu
Gigabit Networking Workshop '94, 12 June 1994
CONTENTS
- INTRODUCTION: MAGIC Network; Overview of Results
- EXPERIMENTS: Experiments 1 through 6
- EXTRAPOLATION: Buffer Size Effects; TCP Processing Bounds
- CONCLUSION
INTRODUCTION
MAGIC Network
- 2.4 Gb/s wide-area network (~1000 kilometers)
- support for 622 Mb/s (OC-12c) and 155 Mb/s (OC-3c) circuits
- hosts at KU, BCBL, Sprint, and EDC used for the tests
[Map: 2.4 Gb/s SONET/ATM backbone linking the Minnesota Supercomputer Center (Minneapolis, MN), EROS Data Center (Sioux Falls, SD), Command & Control Battle Lab (Ft. Leavenworth), the University of Kansas campus network (Lawrence, KS), and US Sprint (Kansas City)]
Overview of Results
- default TCP/IP performance over an ATM WAN is poor
- switch buffer overflow is caused by bandwidth mismatches and by multiple sources multiplexing
- is TCP rate control not working in this regime?
- TCP windows must be large enough for the WAN bandwidth-delay product
- solutions exist at both the ATM layer and the TCP layer
EXPERIMENTS
Experiment 1
question: is WAN performance limited by the TCP window size?
experiment: DEC Alpha with a DEC OTTO OC-3c interface to a second DEC Alpha over a 600 km link with an 8.8 ms round-trip delay
results:
    TCP window size     0.5 kB  1 kB  2 kB  4 kB  8 kB  16 kB  32 kB  64 kB  128 kB
    Throughput (Mb/s)   0.47    0.93  1.8   3.7   7.4   14.9   29.8   59.6   119
comments:
- consistent with the theoretical limits caused by latency (see the sketch below)
- pacing not needed because there is no rate mismatch on this path
- large windows are necessary for acceptable throughput
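The latency bound behind these numbers is that at most one TCP window can be in flight per round trip, so throughput is at most window/RTT. A minimal sketch, assuming only the 8.8 ms RTT quoted above (the function name and loop are illustrative, not from the original):

    # Latency bound for Experiment 1: throughput <= window / RTT,
    # since TCP can send at most one window per round trip.
    RTT = 8.8e-3  # seconds, the measured round-trip delay on the 600 km link

    def latency_bound_mbps(window_bytes):
        """Maximum TCP throughput in Mb/s for a given window over this RTT."""
        return window_bytes * 8 / RTT / 1e6

    for kb in (0.5, 1, 2, 4, 8, 16, 32, 64, 128):
        print(f"{kb:6g} kB window -> {latency_bound_mbps(kb * 1024):6.1f} Mb/s")

Running this reproduces the measured column almost exactly (64 kB gives 59.6 Mb/s, 128 kB gives 119.2 Mb/s), which is why the comments call these results latency-limited.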
Experiment 2
questions: will a high-bandwidth TCP source overrun ATM switch buffers at points of bandwidth mismatch? is this improved by pacing?
experiment:
- Alpha (OC-3c) in Lawrence, Kansas to a SPARC-10 (TAXI) in South Dakota (600 km): a single host sending to a single host
- Alpha: DEC OTTO card; SPARC-10: Fore Systems 100 Mb/s TAXI; switches: Fore Systems ASX-100
- 128 kB TCP windows, 64 kB write buffers
- ATM pacing at 70 Mb/s (see the pacing sketch below)
results:
    No Pacing    Pacing
    0.87 Mb/s    68.20 Mb/s
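Pacing here means cell-level shaping at the ATM interface. The OTTO's actual pacing interface is not described in the original, so the following is only the arithmetic of what a 70 Mb/s pace (or the 35 Mb/s per-source pace of Experiment 3) implies:

    # Cell-spacing arithmetic for ATM-layer pacing (illustrative only; the
    # DEC OTTO's real pacing controls are not shown in the original).
    CELL_BITS = 53 * 8  # one ATM cell, 48-byte payload plus 5-byte header

    def cell_interval_us(rate_mbps):
        """Inter-cell departure interval in microseconds at a target cell rate."""
        return CELL_BITS / (rate_mbps * 1e6) * 1e6

    print(cell_interval_us(70.0))  # ~6.06 us between cells, Experiment 2
    print(cell_interval_us(35.0))  # ~12.1 us per source, Experiment 3

Spacing cells at roughly the bottleneck rate (the 100 Mb/s TAXI link, minus overhead) keeps the OC-3c source from flooding the ASX-100's small buffers, which is what turns 0.87 Mb/s into 68.20 Mb/s.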
Experiment 3
questions: will multiple high-bandwidth TCP sources overrun ATM switch buffers at multiplexing points? is this improved by pacing?
experiment:
- two Alphas (OC-3c) in Lawrence, Kansas to a SPARC-10 (TAXI) in South Dakota (600 km): two hosts sending to a third host
- Alphas: DEC OTTO cards; SPARC-10: Fore Systems 100 Mb/s TAXI; switches: Fore Systems ASX-100
- 128 kB TCP windows, 64 kB write buffers
- ATM pacing at 35 Mb/s for each source (see the pacing sketch under Experiment 2)
results:
    No Pacing    Pacing
    1.66 Mb/s    52.36 Mb/s
Experiment 4
questions: interoperability? packet losses? pacing effects?
experiment:
- scenario a: two SPARC-10s (TAXI) in Kansas and South Dakota to an SGI Onyx (TAXI) in Kansas City; Fore interfaces only
- scenario b: two Alphas (OC-3c) in Lawrence, Kansas to an SGI Onyx (TAXI) in Kansas City; two hosts supporting pacing to a third host
results:
                  No Pacing     Pacing
    scenario a    46.71 Mb/s    -
    scenario b    -             61.17 Mb/s
comments:
- two to four packet losses per second observed in scenario a
- no packet losses observed in scenario b
Experiment 5
question: will TCP rate control be more effective if the TCP segment size is small relative to the switch buffers? (see the cell-burst sketch below)
experiment: Alpha (OC-3c) in Lawrence, Kansas to a SPARC-10 (TAXI) in South Dakota (600 km), varying the TCP segment size
results:
[Plot: Throughput and TCP Segment Size — throughput (0-30 Mb/s) versus TCP segment size (0-16384 bytes), one curve each for 16384, 32768, 65536, 131072, and 262144 byte windows]
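The intuition behind this question: each TCP segment leaves the host as a back-to-back burst of AAL5 cells, and losing a single cell loses the whole segment. A rough sketch of the burst size per segment, assuming RFC 1483 LLC/SNAP encapsulation and 40-byte TCP/IP headers (the original does not state the encapsulation):

    # ATM cells generated per TCP segment under AAL5 (assumed encapsulation).
    import math

    TCPIP_HEADER = 40   # TCP + IP headers, no options
    LLC_SNAP = 8        # RFC 1483 LLC/SNAP (an assumption; not in the original)
    AAL5_TRAILER = 8    # AAL5 CPCS trailer

    def cells_per_segment(mss):
        """Back-to-back ATM cells produced by one TCP segment of size mss."""
        pdu = mss + TCPIP_HEADER + LLC_SNAP + AAL5_TRAILER
        return math.ceil(pdu / 48)  # 48 payload bytes per cell

    print(cells_per_segment(9180))  # ~193 cells, ~10 kB of cells on the wire
    print(cells_per_segment(536))   # ~13 cells

Under these assumptions a single 9180-byte segment nearly fills the ASX-100's 12 kB buffer by itself, so small segments give TCP's loss-driven rate control much finer granularity to work with.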
Experiment 6
question: does a TCP performance trade-off exist between congestion limits and machine processing limits? (see the cost-model sketch below)
experiment: Alpha (OC-3c) in Lawrence, Kansas to an Alpha (OC-3c) at the same location, varying the TCP segment size
results:
[Plot: Throughput and TCP Segment Size — throughput (0-140 Mb/s) versus TCP segment size (0-16384 bytes), one curve each for 16384, 32768, 65536, 131072, and 262144 byte windows]
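With no congested WAN hop in the path, throughput is bounded by host processing instead, and larger segments amortize the fixed per-packet cost. A toy cost model with purely illustrative constants (the original reports measurements, not these parameters):

    # Fixed-per-packet + per-byte cost model for host-limited TCP throughput.
    def cpu_bound_mbps(mss, t_pkt_us=100.0, t_byte_ns=50.0):
        """Throughput in Mb/s if each segment costs t_pkt_us of fixed CPU
        time plus t_byte_ns per byte (illustrative values, not measured)."""
        seconds = t_pkt_us * 1e-6 + mss * t_byte_ns * 1e-9
        return mss * 8 / seconds / 1e6

    for mss in (512, 1500, 4000, 9180, 16000):
        print(f"{mss:5d} byte segments -> {cpu_bound_mbps(mss):6.1f} Mb/s")

This reproduces the qualitative shape of the measured curve: throughput climbs steeply with segment size, then flattens as the per-byte cost dominates. The trade-off named in the question is that large segments help here but hurt at congested WAN multiplexing points (Experiment 5).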
EXTRAPOLATION
Buffer Size Effects
- segment size versus switch buffers (the Fore ASX-100 has a 12 kB buffer)
- assume a given cell loss rate; find the buffering needed to carry a given load (see the sketch below)
[Plot: Allowable Load versus ATM Switch Buffer Capacity — allowable load (0-1) versus per-VC buffer size (50-400 KB), one curve each for 9180, 4352, 1500, and 536 bytes/packet]
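The cell-loss model behind the plot is not reproduced here; the following is only a back-of-the-envelope stand-in for why larger packets demand more buffering at the same load: a multiplexing point must be able to absorb at least one full-segment cell burst from each active VC.

    # Rough buffer sizing: absorb one segment-sized AAL5 burst per source.
    # A back-of-the-envelope stand-in, not the plotted cell-loss analysis.
    import math

    def burst_buffer_kb(packet_bytes, n_sources, overhead=56):
        """Buffer (kB) to hold one cell burst from each of n_sources VCs.
        overhead: assumed TCP/IP + encapsulation bytes per packet."""
        cells = math.ceil((packet_bytes + overhead) / 48)
        return cells * 53 * n_sources / 1024.0

    for pkt in (536, 1500, 4352, 9180):
        print(f"{pkt:5d} bytes/packet x 10 sources -> "
              f"{burst_buffer_kb(pkt, 10):6.1f} kB")

The ordering matches the plot: for a fixed buffer, 536-byte packets permit a much higher load than 9180-byte packets.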
TCP Processing Bounds
- throughput limits due to segment size and machine speed
- extrapolate from the measured data to higher machine capabilities (see the sketch below)
[Plot: Projected Throughput versus SPECint and TCP Segment Size — projected throughput (Mb/s, 0-1200) versus SPECint (0-700), one curve each for 600, 1500, 4000, 6000, 9180, and 12000 bytes/packet]
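The extrapolation itself is simple scaling. The sketch below assumes throughput grows linearly with SPECint, which ignores memory-bandwidth and I/O limits, and its sample numbers are hypothetical rather than taken from the measurements:

    # Linear SPECint scaling of measured TCP throughput (an assumption:
    # real hosts also hit memory-bandwidth and I/O limits).
    def projected_mbps(measured_mbps, measured_specint, target_specint):
        """Project measured throughput to a faster machine by SPECint ratio."""
        return measured_mbps * target_specint / measured_specint

    # Hypothetical example: 120 Mb/s measured on a ~65-SPECint host,
    # projected to a 500-SPECint host.
    print(projected_mbps(120.0, 65.0, 500.0))  # ~923 Mb/s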
CONCLUSION
- congestion limitations: unpaced sources overflow small switch buffers at bandwidth mismatches and multiplexing points (Experiments 2-4)
- WAN limitations: TCP windows must be large enough to fill the bandwidth-delay product (Experiment 1)
- processing limitations: host speed and segment size bound achievable throughput (Experiments 5-6)
- solutions: pacing at the ATM layer, and pacing at the TCP layer