TCP Tuning Techniques for High-Speed Wide-Area Networks
NFNN2, 20th-21st June 2005, National e-Science Centre, Edinburgh

TCP Tuning Techniques for High-Speed Wide-Area Networks
Distributed Systems Department, Lawrence Berkeley National Laboratory

Wizard Gap
[Figure: wizard gap slide from Matt Mathis, PSC]

Slide: 2
Today's Talk

This talk will cover:
- Information needed to be a wizard
- Current work being done so you don't have to be a wizard

Outline:
- TCP overview
- TCP tuning techniques (focus on Linux)
- TCP issues
- Network monitoring tools
- Current TCP research

Slide: 3

How TCP Works: A Very Short Overview

- Congestion window (CWND) = the number of packets the sender is allowed to send
- The larger the window size, the higher the throughput: Throughput = Window size / Round-trip time
- TCP slow start: exponentially increase the congestion window size until a packet is lost; this gets a rough estimate of the optimal congestion window size

[Figure: CWND vs. time - slow start (exponential increase), packet loss, congestion avoidance (linear increase), timeout, retransmit (slow start again)]

Slide: 4
TCP Overview

Congestion avoidance:
- Additive increase: starting from the rough estimate, linearly increase the congestion window size to probe for additional available bandwidth
- Multiplicative decrease: cut the congestion window size aggressively if a timeout occurs

Slide: 5

TCP Overview

- Fast retransmit: retransmit after 3 duplicate ACKs (you got 3 additional packets without getting the one you are waiting for)
  - This prevents expensive timeouts: no need to go into slow start again
- At steady state, CWND oscillates around the optimal window size
- With a retransmission timeout, slow start is triggered again

Slide: 6
Terminology

The term "network throughput" is vague and should be avoided.
- Capacity: link speed
  - Narrow link: the link with the lowest capacity along a path
  - Capacity of the end-to-end path = capacity of the narrow link
- Utilized bandwidth: current traffic load
- Available bandwidth: capacity - utilized bandwidth
  - Tight link: the link with the least available bandwidth in a path
- Achievable bandwidth: includes protocol and host issues

[Figure: example path from source to sink with 45 Mbps, 10 Mbps (narrow link), 100 Mbps, and 45 Mbps links, one of which is the tight link]

Slide: 7

More Terminology

- RTT: round-trip time
- Bandwidth*Delay Product (BDP): the number of bytes in flight needed to fill the entire path
  - Example: 100 Mbps path; ping shows a 75 ms RTT
  - BDP = 100 Mbps * 0.075 s / 2 = 3.75 Mbits (~470 KB)
- LFN: Long Fat Network - a network with a large BDP

Slide: 8
TCP Performance Tuning Issues

Getting good TCP performance over high-latency, high-bandwidth networks is not easy!
- You must keep the pipe full, and the size of the pipe is directly related to the network latency
- Example: from LBNL (Berkeley, CA) to ANL (near Chicago, IL), the narrow link is 1000 Mbits/sec and the one-way latency is 25 ms
  - Need (1000 / 8) * 0.025 s = 3.125 MBytes of data in flight to fill the pipe

Slide: 9

Setting the TCP Buffer Sizes

It is critical to use the optimal TCP send and receive socket buffer sizes for the link you are using.
- Recommended size = 2 x Bandwidth*Delay Product (BDP)
  - If too small, the TCP window will never fully open up
  - If too large, the sender can overrun the receiver, and the TCP window will shut down
- Default TCP buffer sizes are way too small for this type of network
  - Default TCP send/receive buffers are typically 64 KB
  - With default TCP buffers, you can only get a small % of the available bandwidth!

Slide: 10
Importance of TCP Tuning

[Figure: bar chart of throughput (Mbits/sec) on a LAN (RTT = 1 ms) and a WAN (RTT = 50 ms), with TCP buffers tuned for the LAN, tuned for the WAN, and tuned for both]

Slide: 11

TCP Buffer Tuning: System

Need to adjust the system max TCP buffer. Example: in Linux (2.4 and 2.6), add the entries below to the file /etc/sysctl.conf, and then run "sysctl -p":

# increase TCP max buffer size
net.core.rmem_max =
net.core.wmem_max =
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem =
net.ipv4.tcp_wmem =

Similar changes are needed for other Unix OSs. For more info, see:

Slide: 12
TCP Buffer Tuning: Application

Must adjust the buffer size in your applications:

int skt;
int sndsize = 2 * 1024 * 1024;
err = setsockopt(skt, SOL_SOCKET, SO_SNDBUF,
                 (char *)&sndsize, (int)sizeof(sndsize));
and/or
err = setsockopt(skt, SOL_SOCKET, SO_RCVBUF,
                 (char *)&sndsize, (int)sizeof(sndsize));

It's a good idea to check the result:

err = getsockopt(skt, SOL_SOCKET, SO_RCVBUF,
                 (char *)&sockbufsize, &size);
if (size != sndsize)
    fprintf(stderr, "Warning: requested TCP buffer of %d, but only got %d\n",
            sndsize, size);

Slide: 13

Determining the Buffer Size

The optimal buffer size is twice the bandwidth*delay product of the link:

buffer size = 2 * bandwidth * delay

The ping program can be used to get the delay, e.g.:

>ping -s 1500 lxplus.cern.ch
1500 bytes from lxplus012.cern.ch: icmp_seq=0. time=175. ms
1500 bytes from lxplus012.cern.ch: icmp_seq=1. time=176. ms
1500 bytes from lxplus012.cern.ch: icmp_seq=2. time=175. ms

pipechar or pathrate can be used to get the bandwidth of the slowest hop in your path (see the next slides). Since ping gives the round-trip time (RTT), this formula can be used instead of the previous one:

buffer size = bandwidth * RTT

Slide: 14
Buffer Size Example

- ping time = 50 ms
- Narrow link = 500 Mbps (62 MBytes/sec)
  - e.g.: the end-to-end network consists of all 1000 BT Ethernet and OC-12 (622 Mbps)
- TCP buffers should be: 0.05 s * 62 MBytes/sec = 3.1 MBytes

Slide: 15

Sample Buffer Sizes

UK to...
- UK (RTT = 5 ms, narrow link = 1000 Mbps): 625 KB
- Europe (RTT = 25 ms, narrow link = 500 Mbps): 1.56 MB
- US (RTT = 150 ms, narrow link = 500 Mbps): 9.4 MB
- Japan (RTT = 260 ms, narrow link = 150 Mbps): 4.9 MB

Note: the default buffer size is usually only 64 KB, and the default maximum buffer size is typically only 256 KB. The Linux autotuning default max is 128 KB, many times too small for these paths!

Home DSL, UK to US (RTT = 150 ms, narrow link = 1 Mbps): 19 KB
- Default buffers are OK.

Slide: 16
More Problems: TCP Congestion Control

Path = LBL to CERN (Geneva), OC-3 (in 2000), RTT = 150 ms, average BW = 30 Mbps

Slide: 17

Work-around: Use Parallel Streams

RTT = 70 ms

[Graph from Tom Dunigan, ORNL]

Slide: 18
Tuned Buffers vs. Parallel Streams

[Figure: bar chart of throughput (Mbits/sec) with no tuning, with tuned TCP buffers, with 10 parallel streams and no tuning, and with tuned TCP buffers plus 3 parallel streams]

Slide: 19

Parallel Streams Issues

- Potentially unfair
- Places more load on the end hosts
- But they are necessary when you don't have root access and can't convince the sysadmin to increase the max TCP buffers

[Graph from Tom Dunigan, ORNL]

Slide: 20
Network Monitoring Tools

traceroute

>traceroute pcgiga.cern.ch
traceroute to pcgiga.cern.ch ( ), 30 hops max, 40 byte packets
 1 ir100gw-r2.lbl.gov ( ) 0.49 ms 0.26 ms 0.23 ms
 2 er100gw.lbl.gov ( ) 0.68 ms 0.54 ms 0.54 ms
 3 ( ) 1.00 ms *d9* 1.29 ms
 4 lbl2-ge-lbnl.es.net ( ) 0.47 ms 0.59 ms 0.53 ms
 5 snv-lbl-oc48.es.net ( ) ms ms ms
 6 chi-s-snv.es.net ( ) ms ms ms
 7 ar1-chicago-esnet.cern.ch ( ) ms ms ms
 8 cernh9-pos100.cern.ch ( ) ms ms ms
 9 cernh4.cern.ch ( ) ms ms ms
10 pcgiga.cern.ch ( ) ms ms ms

You can often learn about the network from the router names:
- ge = Gigabit Ethernet
- oc48 = 2.4 Gbps (oc3 = 155 Mbps, oc12 = 622 Mbps)

Slide: 22
iperf

iperf: a very nice tool for measuring end-to-end TCP/UDP performance
- Can be quite intrusive to the network

Example:
- Server: iperf -s -w 2M
- Client: iperf -c hostname -i 2 -t 20 -l 128K -w 2M

Client connecting to hostname
[ ID] Interval Transfer Bandwidth
[ 3] sec 66.0 MBytes 275 Mbits/sec
[ 3] sec 107 MBytes 451 Mbits/sec
[ 3] sec 106 MBytes 446 Mbits/sec
[ 3] sec 107 MBytes 443 Mbits/sec
[ 3] sec 106 MBytes 447 Mbits/sec
[ 3] sec 106 MBytes 446 Mbits/sec
[ 3] sec 107 MBytes 450 Mbits/sec
[ 3] sec 106 MBytes 445 Mbits/sec
[ 3] sec 58.8 MBytes 59.1 Mbits/sec
[ 3] sec 871 MBytes 297 Mbits/sec

Slide: 23

pathrate / pathload

Nice tools from Georgia Tech:
- pathrate: measures the capacity of the narrow link
- pathload: measures the available bandwidth

Both work pretty well, though pathrate can take a long time (up to 20 minutes). These tools attempt to be non-intrusive. Open source; available from:

Slide: 24
pipechar

Tool to measure hop-by-hop available bandwidth, capacity, and congestion
- Takes 1-2 minutes to measure an 8-hop path
- Client-side-only tool: puts very little load on the network (about 100 Kbits/sec)
- But not always accurate
  - Results are affected by host speed; it is hard to measure links faster than the host interface
  - Results after a slow hop are typically not accurate; for example, if the first hop is a wireless link and all other hops are 100 BT or faster, then the results are not accurate
- Available from: (part of the netest package)

Slide: 25

pipechar output

dpsslx04.lbl.gov(59)>pipechar firebird.ccs.ornl.gov
PipeChar statistics: 82.61% reliable
From localhost: Mbps GigE ( Mbps)
 1: ir100gw-r2.lbl.gov ( ) Mbps GigE < % BW used>
 2: er100gw.lbl.gov ( ) Mbps GigE < % BW used>
 3: lbl2-ge-lbnl.es.net ( ) Mbps congested bottleneck < % BW used>
 4: snv-lbl-oc48.es.net ( ) Mbps OC192 < % BW used>
 5: orn-s-snv.es.net ( ) Mbps congested bottleneck < % BW used>
 6: ornl-orn.es.net ( ) Mbps congested bottleneck < % BW used>
 7: orgwy-ext.ornl.gov ( ) Mbps congested bottleneck < % BW used>
 8: ornlgwy-ext.ens.ornl.gov ( ) Mbps congested bottleneck < % BW used>
 9: ccsrtr.ccs.ornl.gov ( ) Mbps GigE ( Mbps)
10: firebird.ccs.ornl.gov ( )

Slide: 26
tcpdump / tcptrace

- tcpdump: dump all TCP header information for a specified source/destination (ftp://ftp.ee.lbl.gov/)
- tcptrace: format tcpdump output for analysis using xplot
- NLANR TCP Testrig: a nice wrapper for the tcpdump and tcptrace tools

Sample use:

tcpdump -s 100 -w /tmp/tcpdump.out host hostname
tcptrace -Sl /tmp/tcpdump.out
xplot /tmp/a2b_tsg.xpl

Slide: 27

tcptrace and xplot

- The X axis is time; the Y axis is sequence number
- The slope of this curve gives the throughput over time
- The xplot tool makes it easy to zoom in

Slide: 28
Zoomed-In View

- Green line: ACK values received from the receiver
- Yellow line: tracks the receive window advertised by the receiver
- Green ticks: track the duplicate ACKs received
- Yellow ticks: track the window advertisements that were the same as the last advertisement
- White arrows: segments sent
- Red arrows (R): retransmitted segments

Slide: 29

Other Tools

- NLANR Tools Repository:
- SLAC Network Monitoring Tools List:

Slide: 30
Other TCP Issues

Things to be aware of:
- TCP slow start
  - On a path with a 50 ms RTT, it takes about 12 RTTs to ramp up to full window size, so you need to send about 10 MB of data before the TCP congestion window will fully open up
- Host issues
  - Memory copy speed
  - I/O bus speed
  - Disk speed

Slide: 31

TCP Slow Start

[Figure: CWND growth during slow start]

Slide: 32
Duplex Mismatch Issues

A common source of trouble with Ethernet networks is that the host is set to full duplex but the Ethernet switch is set to half duplex, or vice versa.
- Most newer hardware will auto-negotiate this, but with some older hardware auto-negotiation sometimes fails
- The result is a working but very slow network (typically only 1-2 Mbps)
- It is best for both to be in full duplex if possible, but some older 100BT equipment only supports half duplex
- NDT is a good tool for finding duplex issues:

Slide: 33

Jumbo Frames

- The standard Ethernet packet is 1500 bytes (aka the MTU)
- Some Gigabit Ethernet hardware supports jumbo frames (jumbo packets) of up to 9 KBytes
  - This helps performance by reducing the number of host interrupts
  - Some jumbo frame implementations do not interoperate
  - Most routers allow at most 4K MTUs
- The first Ethernet was 3 Mbps (1972); the first 10 Gbit/sec Ethernet hardware shipped in 2001
  - Ethernet speeds have increased 3000x since the 1500-byte frame was defined
  - Computers now have to work 3000x harder to keep the network full

Slide: 34
Linux Autotuning

- Sender-side TCP buffer autotuning was introduced in Linux 2.4
- The TCP send buffer starts at 64 KB
- As the data transfer takes place, the buffer size is continuously readjusted up to the max autotune size (default = 128 KB)
- Need to increase the defaults (in /etc/sysctl.conf):

# increase TCP max buffer size
net.core.rmem_max =
net.core.wmem_max =
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem =
net.ipv4.tcp_wmem =

- Receive buffers need to be bigger than the largest send buffer used
  - Use the setsockopt() call

Slide: 35

Linux 2.4 Issues: ssthresh Caching

- ssthresh (slow start threshold): the size of CWND at which to switch from exponential increase to linear increase
- The value of ssthresh for a given path is cached in the routing table
- If there is a retransmission on a connection to a given host, then all connections to that host for the next 10 minutes will use a reduced ssthresh
- Or, if the previous connection to that host was particularly good, you might stay in slow start longer; so it depends on the path
- The only way to disable this behavior is to do the following before all new connections (you must be root):
  sysctl -w net.ipv4.route.flush=1
- The web100 kernel patch adds a mechanism to permanently disable this behavior:
  sysctl -w net.ipv4.web100_no_metrics_save=1

Slide: 36
ssthresh Caching

[Figure: the value of CWND at the point of this loss will get cached]

Slide: 37

Linux 2.4 Issues (cont.)

SACK implementation problem
- For very large BDP paths, where the TCP window is > 20 MB, you are likely to hit the Linux SACK implementation problem
- If Linux has too many packets in flight when it gets a SACK event, it takes too long to locate the SACKed packet; you get a TCP timeout and CWND goes back to 1 packet
- Restricting the TCP buffer size to about 12 MB seems to avoid this problem, but limits your throughput
- Another solution is to disable SACK:
  sysctl -w net.ipv4.tcp_sack=0
- This is still a problem in 2.6, but they are working on a solution

Transmit queue overflow
- If the interface transmit queue overflows, the Linux TCP stack treats this as a retransmission
- Increasing txqueuelen can help:
  ifconfig eth0 txqueuelen 1000

Slide: 38
Recent/Current TCP Work

TCP Response Function

It is a well-known fact that TCP does not scale to high-speed networks:

average TCP congestion window = 1.2 / sqrt(p) segments

where p = packet loss rate.

What this means: for a TCP connection with 1500-byte packets and a 100 ms round-trip time, filling a 10 Gbps pipe would require a congestion window of 83,333 packets, and a packet drop rate of at most one drop every 5,000,000,000 packets.
- That is at most one packet loss every 6000 s, or 1h:40m, to keep the pipe full

Slide: 40
Proposed TCP Modifications

- High Speed TCP (Sally Floyd)
- BIC/CUBIC
- LTCP (Layered TCP)
- HTCP (Hamilton TCP)
- Scalable TCP

Slide: 41

Proposed TCP Modifications (cont.)

- XCP: rapidly converges on the optimal congestion window using a completely new router paradigm; this makes it very difficult to deploy and test
- FAST TCP

Each of these alternatives gives roughly similar throughput; they vary mainly in stability and friendliness with other protocols. Each requires sender-side-only modifications to standard TCP.

Slide: 42
TCP: Reno vs. BIC

[Figure: throughput over time for TCP-Reno (Linux 2.4) vs. BIC-TCP (Linux 2.6)]

Slide: 43

TCP: Reno vs. BIC

BIC-TCP recovers from loss more aggressively than TCP-Reno

Slide: 44
Sample Results

From Doug Leith, Hamilton Institute:

[Figure: fairness between flows and link utilization for the proposed TCP variants]

Slide: 45

New Linux 2.6 Changes

- Added receive buffer autotuning: adjusts the receive window based on RTT
  - sysctl net.ipv4.tcp_moderate_rcvbuf
  - Still need to increase the max value: net.ipv4.tcp_rmem
- Starting in Linux (and back-ported to ), BIC-TCP is part of the kernel and enabled by default
  - A bug was found that caused performance problems under some circumstances; fixed in
- Added the ability to disable ssthresh caching (like web100):
  net.ipv4.tcp_no_metrics_save = 1

Slide: 46
Linux 2.6 Issues

TCP segmentation offload issue
- Linux 2.6 ( < ) has a bug with certain Gigabit and 10 Gig Ethernet drivers and NICs that support TCP segmentation offload (TSO)
- These include the Intel e1000 and ixgb drivers, the Broadcom tg3 driver, and the s2io 10 GigE drivers
- To fix this problem, use ethtool to disable segmentation offload:
  ethtool -K eth0 tso off
- Bug fixed in Linux

Slide: 47

Linux Results

BIC-TCP is ON by default in Linux 2.6; un-tuned results are up to 100x faster!

Path                        Linux 2.4   Linux 2.4    Linux 2.6   Linux 2.6,
                            un-tuned    hand-tuned   with BIC    no BIC
LBL to ORNL (RTT = 67 ms)   10 Mbps     300 Mbps     700 Mbps    500 Mbps
LBL to PSC (RTT = 83 ms)    8 Mbps      300 Mbps     830 Mbps    625 Mbps
LBL to IE (RTT = 153 ms)    4 Mbps      70 Mbps      560 Mbps    140 Mbps

- Results = peak speed during a 3-minute test
- Must increase the default max TCP send/receive buffers
- Sending host = 2.8 GHz Intel Xeon with Intel e1000 NIC

Slide: 48
Linux rc3 Results

Note that BIC reaches max throughput MUCH faster. RTT = 67 ms.

[Figure: throughput over time, RTT = 67 ms]

Slide: 49

Remaining Linux BIC Issues

But: on some paths BIC still seems to have problems. RTT = 83 ms.

[Figure: throughput over time showing BIC problems, RTT = 83 ms]

Slide: 50
Application Performance Issues

Techniques to Achieve High Throughput over a WAN

- Consider using multiple TCP sockets for the data stream
  - Use a separate thread for each socket
- Keep the data pipeline full
  - Use asynchronous I/O: overlap I/O and computation
  - Read and write large amounts of data (> 1 MB) at a time whenever possible
  - Pre-fetch data whenever possible
- Avoid unnecessary data copies
  - Manipulate pointers to data blocks instead

Slide: 52
Use Asynchronous I/O

[Figure: timeline comparing "I/O followed by processing" (the next I/O starts when processing ends) with "overlapped I/O and processing" (process the previous block while the next remote I/O is in flight) - almost a 2:1 speedup]

Slide: 53

Throughput vs. Latency

- Most of the techniques we have discussed are designed to improve throughput
- Some of them might increase latency
  - With large TCP buffers, the OS will buffer more data before sending it
- The goal of a distributed application programmer: hide latency
- Some techniques to help decrease latency:
  - Use separate control and data sockets
  - Use the TCP_NODELAY option on the control socket
  - Combine control messages into one larger message whenever possible on TCP_NODELAY sockets

Slide: 54
scp Issues

- Don't use scp to copy large files!
- scp has its own internal buffering/windowing that prevents it from ever being able to fill LFNs!
- Explanation of the problem and an openssh patch solution from PSC:

Slide: 55

Conclusions

- The wizard gap is starting to close (slowly), if max TCP buffers are increased
- Tuning TCP is not easy!
  - No single solution fits all situations
  - Need to be careful that TCP buffers are not too big or too small
  - Sometimes parallel streams help throughput, sometimes they hurt
  - Linux 2.6 helps a lot
- Design your network application to be as flexible as possible
  - Make it easy for clients/users to set the TCP buffer sizes
  - Make it possible to turn parallel socket transfers on/off (probably off by default)
- Design your application for the future
  - Even if your current WAN connection is only 45 Mbps (or less), some day it will be much higher, and these issues will become even more important

Slide: 56
For More Information

- Links to all network tools mentioned here
- Sample TCP buffer tuning code, etc.

Slide: 57
Key Components of WAN Optimization Controller Functionality
Key Components of WAN Optimization Controller Functionality Introduction and Goals One of the key challenges facing IT organizations relative to application and service delivery is ensuring that the applications
FEW would argue that one of TCP s strengths lies in its
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 13, NO. 8, OCTOBER 1995 1465 TCP Vegas: End to End Congestion Avoidance on a Global Internet Lawrence S. Brakmo, Student Member, IEEE, and Larry L.
A Survey on Congestion Control Mechanisms for Performance Improvement of TCP
A Survey on Congestion Control Mechanisms for Performance Improvement of TCP Shital N. Karande Department of Computer Science Engineering, VIT, Pune, Maharashtra, India Sanjesh S. Pawale Department of
Mobile Communications Chapter 9: Mobile Transport Layer
Mobile Communications Chapter 9: Mobile Transport Layer Motivation TCP-mechanisms Classical approaches Indirect TCP Snooping TCP Mobile TCP PEPs in general Additional optimizations Fast retransmit/recovery
International Journal of Scientific & Engineering Research, Volume 6, Issue 7, July-2015 1169 ISSN 2229-5518
International Journal of Scientific & Engineering Research, Volume 6, Issue 7, July-2015 1169 Comparison of TCP I-Vegas with TCP Vegas in Wired-cum-Wireless Network Nitin Jain & Dr. Neelam Srivastava Abstract
Globus Striped GridFTP Framework and Server. Raj Kettimuthu, ANL and U. Chicago
Globus Striped GridFTP Framework and Server Raj Kettimuthu, ANL and U. Chicago Outline Introduction Features Motivation Architecture Globus XIO Experimental Results 3 August 2005 The Ohio State University
Performance Measurement of Wireless LAN Using Open Source
Performance Measurement of Wireless LAN Using Open Source Vipin M Wireless Communication Research Group AU KBC Research Centre http://comm.au-kbc.org/ 1 Overview General Network Why Network Performance
Performance Evaluation of VMXNET3 Virtual Network Device VMware vsphere 4 build 164009
Performance Study Performance Evaluation of VMXNET3 Virtual Network Device VMware vsphere 4 build 164009 Introduction With more and more mission critical networking intensive workloads being virtualized
The Case Against Jumbo Frames. Richard A Steenbergen <[email protected]> GTT Communications, Inc.
The Case Against Jumbo Frames Richard A Steenbergen GTT Communications, Inc. 1 What s This All About? What the heck is a Jumbo Frame? Technically, the IEEE 802.3 Ethernet standard defines
Low-rate TCP-targeted Denial of Service Attack Defense
Low-rate TCP-targeted Denial of Service Attack Defense Johnny Tsao Petros Efstathopoulos University of California, Los Angeles, Computer Science Department Los Angeles, CA E-mail: {johnny5t, pefstath}@cs.ucla.edu
First Midterm for ECE374 03/24/11 Solution!!
1 First Midterm for ECE374 03/24/11 Solution!! Note: In all written assignments, please show as much of your work as you can. Even if you get a wrong answer, you can get partial credit if you show your
First Midterm for ECE374 02/25/15 Solution!!
1 First Midterm for ECE374 02/25/15 Solution!! Instructions: Put your name and student number on each sheet of paper! The exam is closed book. You have 90 minutes to complete the exam. Be a smart exam
Final for ECE374 05/06/13 Solution!!
1 Final for ECE374 05/06/13 Solution!! Instructions: Put your name and student number on each sheet of paper! The exam is closed book. You have 90 minutes to complete the exam. Be a smart exam taker -
TCP Pacing in Data Center Networks
TCP Pacing in Data Center Networks Monia Ghobadi, Yashar Ganjali Department of Computer Science, University of Toronto {monia, yganjali}@cs.toronto.edu 1 TCP, Oh TCP! 2 TCP, Oh TCP! TCP congestion control
Optimization of Communication Systems Lecture 6: Internet TCP Congestion Control
Optimization of Communication Systems Lecture 6: Internet TCP Congestion Control Professor M. Chiang Electrical Engineering Department, Princeton University ELE539A February 21, 2007 Lecture Outline TCP
1000Mbps Ethernet Performance Test Report 2014.4
1000Mbps Ethernet Performance Test Report 2014.4 Test Setup: Test Equipment Used: Lenovo ThinkPad T420 Laptop Intel Core i5-2540m CPU - 2.60 GHz 4GB DDR3 Memory Intel 82579LM Gigabit Ethernet Adapter CentOS
ICTCP: Incast Congestion Control for TCP in Data Center Networks
ICTCP: Incast Congestion Control for TCP in Data Center Networks Haitao Wu, Zhenqian Feng, Chuanxiong Guo, Yongguang Zhang {hwu, v-zhfe, chguo, ygz}@microsoft.com, Microsoft Research Asia, China School
Linux 2.4 Implementation of Westwood+ TCP with rate-halving: A Performance Evaluation over the Internet
Linux. Implementation of TCP with rate-halving: A Performance Evaluation over the Internet A. Dell Aera, L. A. Grieco, S. Mascolo Dipartimento di Elettrotecnica ed Elettronica Politecnico di Bari Via Orabona,
Configuring Your Computer and Network Adapters for Best Performance
Configuring Your Computer and Network Adapters for Best Performance ebus Universal Pro and User Mode Data Receiver ebus SDK Application Note This application note covers the basic configuration of a network
Measure wireless network performance using testing tool iperf
Measure wireless network performance using testing tool iperf By Lisa Phifer, SearchNetworking.com Many companies are upgrading their wireless networks to 802.11n for better throughput, reach, and reliability,
A Survey: High Speed TCP Variants in Wireless Networks
ISSN: 2321-7782 (Online) Volume 1, Issue 7, December 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com A Survey:
Linux NIC and iscsi Performance over 40GbE
Linux NIC and iscsi Performance over 4GbE Chelsio T8-CR vs. Intel Fortville XL71 Executive Summary This paper presents NIC and iscsi performance results comparing Chelsio s T8-CR and Intel s latest XL71
TCP/IP Inside the Data Center and Beyond. Dr. Joseph L White, Juniper Networks
Dr. Joseph L White, Juniper Networks SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individual members may use this material in presentations
TCP and Wireless Networks Classical Approaches Optimizations TCP for 2.5G/3G Systems. Lehrstuhl für Informatik 4 Kommunikation und verteilte Systeme
Chapter 2 Technical Basics: Layer 1 Methods for Medium Access: Layer 2 Chapter 3 Wireless Networks: Bluetooth, WLAN, WirelessMAN, WirelessWAN Mobile Networks: GSM, GPRS, UMTS Chapter 4 Mobility on the
2 TCP-like Design. Answer
Homework 3 1 DNS Suppose you have a Host C, a local name server L, and authoritative name servers A root, A com, and A google.com, where the naming convention A x means that the name server knows about
Visualizations and Correlations in Troubleshooting
Visualizations and Correlations in Troubleshooting Kevin Burns Comcast [email protected] 1 Comcast Technology Groups Cable CMTS, Modem, Edge Services Backbone Transport, Routing Converged Regional
Improving our Evaluation of Transport Protocols. Sally Floyd Hamilton Institute July 29, 2005
Improving our Evaluation of Transport Protocols Sally Floyd Hamilton Institute July 29, 2005 Computer System Performance Modeling and Durable Nonsense A disconcertingly large portion of the literature
Campus Network Design Science DMZ
Campus Network Design Science DMZ Dale Smith Network Startup Resource Center [email protected] The information in this document comes largely from work done by ESnet, the USA Energy Sciences Network see
Simulation-Based Comparisons of Solutions for TCP Packet Reordering in Wireless Network
Simulation-Based Comparisons of Solutions for TCP Packet Reordering in Wireless Network 作 者 :Daiqin Yang, Ka-Cheong Leung, and Victor O. K. Li 出 處 :Wireless Communications and Networking Conference, 2007.WCNC
The Effect of Packet Reordering in a Backbone Link on Application Throughput Michael Laor and Lior Gendel, Cisco Systems, Inc.
The Effect of Packet Reordering in a Backbone Link on Application Throughput Michael Laor and Lior Gendel, Cisco Systems, Inc. Abstract Packet reordering in the Internet is a well-known phenomenon. As
Challenges of Sending Large Files Over Public Internet
Challenges of Sending Large Files Over Public Internet CLICK TO EDIT MASTER TITLE STYLE JONATHAN SOLOMON SENIOR SALES & SYSTEM ENGINEER, ASPERA, INC. CLICK TO EDIT MASTER SUBTITLE STYLE OUTLINE Ø Setting
First Midterm for ECE374 03/09/12 Solution!!
1 First Midterm for ECE374 03/09/12 Solution!! Instructions: Put your name and student number on each sheet of paper! The exam is closed book. You have 90 minutes to complete the exam. Be a smart exam
AN IMPROVED SNOOP FOR TCP RENO AND TCP SACK IN WIRED-CUM- WIRELESS NETWORKS
AN IMPROVED SNOOP FOR TCP RENO AND TCP SACK IN WIRED-CUM- WIRELESS NETWORKS Srikanth Tiyyagura Department of Computer Science and Engineering JNTUA College of Engg., pulivendula, Andhra Pradesh, India.
Network Diagnostic Tools. Jijesh Kalliyat Sr.Technical Account Manager, Red Hat 15th Nov 2014
Network Diagnostic Tools Jijesh Kalliyat Sr.Technical Account Manager, Red Hat 15th Nov 2014 Agenda Network Diagnostic Tools Linux Tcpdump Wireshark Tcpdump Analysis Sources of Network Issues If a system
High Speed Internet Access Using Satellite-Based DVB Networks
High Speed Internet Access Using Satellite-Based DVB Networks Nihal K. G. Samaraweera and Godred Fairhurst Electronics Research Group, Department of Engineering University of Aberdeen, Aberdeen, AB24 3UE,
A Congestion Control Algorithm for Data Center Area Communications
A Congestion Control Algorithm for Data Center Area Communications Hideyuki Shimonishi, Junichi Higuchi, Takashi Yoshikawa, and Atsushi Iwata System Platforms Research Laboratories, NEC Corporation 1753
TCP for Wireless Networks
TCP for Wireless Networks Outline Motivation TCP mechanisms Indirect TCP Snooping TCP Mobile TCP Fast retransmit/recovery Transmission freezing Selective retransmission Transaction oriented TCP Adapted
Outline. TCP connection setup/data transfer. 15-441 Computer Networking. TCP Reliability. Congestion sources and collapse. Congestion control basics
Outline 15-441 Computer Networking Lecture 8 TCP & Congestion Control TCP connection setup/data transfer TCP Reliability Congestion sources and collapse Congestion control basics Lecture 8: 09-23-2002
Comparison of 40G RDMA and Traditional Ethernet Technologies
Comparison of 40G RDMA and Traditional Ethernet Technologies Nichole Boscia, Harjot S. Sidhu 1 NASA Advanced Supercomputing Division NASA Ames Research Center Moffett Field, CA 94035-1000 [email protected]
Monitoring Android Apps using the logcat and iperf tools. 22 May 2015
Monitoring Android Apps using the logcat and iperf tools Michalis Katsarakis [email protected] Tutorial: HY-439 22 May 2015 http://www.csd.uoc.gr/~hy439/ Outline Introduction Monitoring the Android
Congestions and Control Mechanisms n Wired and Wireless Networks
International OPEN ACCESS Journal ISSN: 2249-6645 Of Modern Engineering Research (IJMER) Congestions and Control Mechanisms n Wired and Wireless Networks MD Gulzar 1, B Mahender 2, Mr.B.Buchibabu 3 1 (Asst
