


TCP: Overview (RFCs 793, 1122, 1323, 2018, 2581)
- point-to-point: one sender, one receiver
- reliable, in-order byte stream: no message boundaries
- pipelined: TCP congestion and flow control set the window size
- send & receive buffers: the application writes data into the TCP send buffer and reads data from the TCP receive buffer at the socket
- full-duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
- connection-oriented: handshaking (exchange of control messages) initializes sender and receiver state before data exchange
- flow controlled: the sender will not overwhelm the receiver

TCP segment structure
- [Figure: 32-bit-wide header with source port #, dest port #, sequence number, acknowledgement number, header length, unused bits, flags (URG, ACK, PSH, RST, SYN, FIN), receive window, checksum, urgent data pointer, options (variable length), followed by application data (variable length)]
- sequence and acknowledgement numbers count bytes of data (not segments!)
- receive window: # bytes the receiver is willing to accept
- URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection establishment (setup, teardown commands)
- Internet checksum (as in UDP)
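To make the field layout concrete, here is a minimal Python sketch that packs and unpacks the fixed 20-byte TCP header described above using the struct module. The function names and example values are my own, and options and checksum computation are omitted; this is an illustration of the layout, not a full implementation.

    import struct

    # Fixed 20-byte TCP header: ports, seq/ack numbers, offset+flags, window,
    # checksum, urgent pointer (network byte order, per the segment diagram above).
    TCP_HEADER_FMT = "!HHIIHHHH"

    def pack_tcp_header(src_port, dst_port, seq, ack, flags, rcv_window,
                        checksum=0, urg_ptr=0):
        data_offset = 5                      # 5 x 32-bit words = 20 bytes, no options
        offset_and_flags = (data_offset << 12) | flags
        return struct.pack(TCP_HEADER_FMT, src_port, dst_port, seq, ack,
                           offset_and_flags, rcv_window, checksum, urg_ptr)

    def unpack_tcp_header(raw):
        (src_port, dst_port, seq, ack, offset_and_flags,
         rcv_window, checksum, urg_ptr) = struct.unpack(TCP_HEADER_FMT, raw[:20])
        flags = offset_and_flags & 0x3F      # URG, ACK, PSH, RST, SYN, FIN bits
        return src_port, dst_port, seq, ack, flags, rcv_window

    # Example: a SYN segment (flag bit 0x02) advertising a 65535-byte receive window.
    hdr = pack_tcp_header(12345, 80, seq=0, ack=0, flags=0x02, rcv_window=65535)
    print(unpack_tcp_header(hdr))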

TCP seq. #s and ACKs
- Seq. #s: byte-stream number of the first byte in the segment's data
- ACKs: seq # of the next byte expected from the other side (cumulative ACK)
- Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say; it is up to the implementor
- [Figure: simple telnet scenario. The user types 'C'; Host A sends Seq=42, ACK=79, data = 'C'. Host B ACKs receipt of 'C' and echoes it back: Seq=79, ACK=43, data = 'C'. Host A then ACKs receipt of the echoed 'C': Seq=43, ACK=80.]

TCP Round Trip Time and Timeout
- Q: how to set the TCP timeout value? Longer than the RTT, but the RTT varies. Too short: premature timeouts and unnecessary retransmissions. Too long: slow reaction to segment loss.
- Q: how to estimate the RTT? SampleRTT: measured time from segment transmission until ACK receipt (ignoring retransmissions). SampleRTT will vary, so we want the estimated RTT to be smoother: average several recent measurements, not just the current SampleRTT.

TCP Round Trip Time and Timeout
- EstimatedRTT = (1 - α)*EstimatedRTT + α*SampleRTT
- Exponential weighted moving average: the influence of a past sample decreases exponentially fast; typical value: α = 0.125
- [Figure: example RTT estimation, gaia.cs.umass.edu to fantasia.eurecom.fr; RTT (milliseconds) vs. time (seconds), plotting SampleRTT and EstimatedRTT]

TCP Round Trip Time and Timeout: setting the timeout
- EstimatedRTT plus a safety margin; a large variation in EstimatedRTT means a larger safety margin
- First, estimate how much SampleRTT deviates from EstimatedRTT: DevRTT = (1 - β)*DevRTT + β*|SampleRTT - EstimatedRTT| (typically, β = 0.25)
- Then set the timeout interval: TimeoutInterval = EstimatedRTT + 4*DevRTT

TCP reliable data transfer
- TCP creates a reliable data transfer (rdt) service on top of IP's unreliable service
- Pipelined segments, cumulative ACKs, and a single retransmission timer
- Retransmissions are triggered by timeout events and duplicate ACKs
- Initially, consider a simplified TCP sender: ignore duplicate ACKs, flow control, and congestion control
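A small Python sketch of the estimator above, combining EstimatedRTT, DevRTT, and TimeoutInterval. The initial values and the sample trace are made up for illustration; real implementations differ in how they seed the estimates.

    ALPHA = 0.125   # weight for EstimatedRTT (typical value from the slide)
    BETA = 0.25     # weight for DevRTT

    class RttEstimator:
        def __init__(self, first_sample):
            # A common choice is to seed the estimates with the first measurement.
            self.estimated_rtt = first_sample
            self.dev_rtt = first_sample / 2

        def update(self, sample_rtt):
            # EstimatedRTT = (1 - a)*EstimatedRTT + a*SampleRTT
            self.estimated_rtt = (1 - ALPHA) * self.estimated_rtt + ALPHA * sample_rtt
            # DevRTT = (1 - b)*DevRTT + b*|SampleRTT - EstimatedRTT|
            self.dev_rtt = (1 - BETA) * self.dev_rtt + BETA * abs(sample_rtt - self.estimated_rtt)

        def timeout_interval(self):
            return self.estimated_rtt + 4 * self.dev_rtt

    # Illustrative trace of SampleRTT measurements, in milliseconds.
    est = RttEstimator(first_sample=120.0)
    for sample in [110.0, 180.0, 95.0, 130.0]:
        est.update(sample)
        print(round(est.estimated_rtt, 1), round(est.timeout_interval(), 1))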

TCP sender events
- Data received from app: create a segment with a seq # (the seq # is the byte-stream number of the first data byte in the segment); start the timer if it is not already running (think of the timer as being for the oldest unacked segment); expiration interval: TimeoutInterval
- Timeout: retransmit the segment that caused the timeout; restart the timer
- ACK received: if it acknowledges previously unacked segments, update what is known to be ACKed; start the timer if there are still outstanding segments

TCP sender (simplified pseudocode):

    NextSeqNum = InitialSeqNum
    SendBase = InitialSeqNum
    loop (forever) {
        switch (event)

        event: data received from application above
            create TCP segment with sequence number NextSeqNum
            if (timer currently not running)
                start timer
            pass segment to IP
            NextSeqNum = NextSeqNum + length(data)

        event: timer timeout
            retransmit not-yet-acknowledged segment with smallest sequence number
            start timer

        event: ACK received, with ACK field value of y
            if (y > SendBase) {
                SendBase = y
                if (there are currently not-yet-acknowledged segments)
                    start timer
            }
    } /* end of loop forever */
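A minimal Python sketch of the simplified sender loop above. Segment transmission and the timer are stand-ins (send_to_ip just prints, timer_running is a flag), and duplicate ACKs, flow control, and congestion control are ignored, as the slide assumes.

    class SimplifiedSender:
        def __init__(self, initial_seq_num):
            self.next_seq_num = initial_seq_num
            self.send_base = initial_seq_num
            self.unacked = {}                # seq # -> segment data
            self.timer_running = False

        def on_app_data(self, data):
            self.unacked[self.next_seq_num] = data
            if not self.timer_running:
                self.timer_running = True    # start timer
            self.send_to_ip(self.next_seq_num, data)
            self.next_seq_num += len(data)

        def on_timeout(self):
            if not self.unacked:
                return
            oldest = min(self.unacked)       # smallest not-yet-acknowledged seq #
            self.send_to_ip(oldest, self.unacked[oldest])
            self.timer_running = True        # restart timer

        def on_ack(self, y):
            if y > self.send_base:
                self.send_base = y
                # keep only segments that are not fully covered by the cumulative ACK
                self.unacked = {s: d for s, d in self.unacked.items() if s + len(d) > y}
                self.timer_running = bool(self.unacked)

        def send_to_ip(self, seq, data):
            print("sending segment with seq", seq)

    # Example matching the scenarios below: Seq=92, 8 bytes of data, then ACK=100.
    s = SimplifiedSender(initial_seq_num=92)
    s.on_app_data(b"12345678")
    s.on_ack(100)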

TCP: retransmission scenarios
- [Figure: lost ACK scenario. Host A sends Seq=92, 8 bytes of data; Host B's ACK=100 is lost; A times out and retransmits Seq=92, 8 bytes; B ACKs again with ACK=100; SendBase = 100.]
- [Figure: premature timeout. Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); the timer for Seq=92 expires before ACK=100 arrives, so A retransmits Seq=92; the cumulative ACK=120 acknowledges both segments, and SendBase advances from 100 to 120.]

TCP retransmission scenarios (more)
- [Figure: cumulative ACK scenario. Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); ACK=100 is lost, but ACK=120 arrives before the timeout and acknowledges both segments (SendBase = 120), so no retransmission is needed.]

TCP ACK generation [RFC 1122, RFC 2581]

Event at receiver -> TCP receiver action:
- Arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send the ACK
- Arrival of in-order segment with expected seq #; one other segment has an ACK pending -> immediately send a single cumulative ACK, ACKing both in-order segments
- Arrival of out-of-order segment with higher-than-expected seq #; gap detected -> immediately send a duplicate ACK, indicating the seq # of the next expected byte
- Arrival of a segment that partially or completely fills the gap -> immediately send an ACK, provided the segment starts at the lower end of the gap

Fast Retransmit
- The timeout period is often relatively long: a long delay before resending a lost packet
- Detect lost segments via duplicate ACKs: the sender often sends many segments back-to-back, so if a segment is lost there will likely be many duplicate ACKs
- If the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost; fast retransmit: resend the segment before the timer expires
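The receiver policy in the table above can be sketched as a small decision function in Python. This is an illustrative simplification: real receivers track the buffered out-of-order data, and the 500 ms delayed-ACK timer is modelled here as a returned description rather than an actual timer.

    def receiver_action(segment_seq, expected_seq, ack_pending, have_out_of_order_data):
        """Return the ACK-generation action for one arriving segment."""
        if segment_seq > expected_seq:
            # out-of-order arrival: gap detected
            return "immediately send duplicate ACK for byte " + str(expected_seq)
        if segment_seq == expected_seq and have_out_of_order_data:
            # segment fills (part of) the gap from its lower end
            return "immediately send ACK for new end of in-order data"
        if segment_seq == expected_seq and ack_pending:
            return "immediately send one cumulative ACK covering both in-order segments"
        if segment_seq == expected_seq:
            return "delayed ACK: wait up to 500 ms for another in-order segment"
        return "old or duplicate segment: immediately re-ACK byte " + str(expected_seq)

    # Example: an out-of-order arrival while byte 100 is still expected.
    print(receiver_action(segment_seq=120, expected_seq=100,
                          ack_pending=False, have_out_of_order_data=False))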

Fast retransmit algorithm:

    event: ACK received, with ACK field value of y
        if (y > SendBase) {
            SendBase = y
            if (there are currently not-yet-acknowledged segments)
                start timer
        }
        else {  /* a duplicate ACK for already-ACKed data */
            increment count of dup ACKs received for y
            if (count of dup ACKs received for y == 3) {
                resend segment with sequence number y   /* fast retransmit */
            }
        }

TCP Flow Control
- The receive side of a TCP connection has a receive buffer; the app process may be slow at reading from the buffer
- Flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast
- Speed-matching service: match the send rate to the receiving app's drain rate
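A Python sketch of the duplicate-ACK bookkeeping above. Only the ACK-handling path is shown; start_timer() and resend() are illustrative stubs, and a single dup-ACK counter is used since duplicates always refer to SendBase in this simplified view.

    def on_ack(state, y):
        if y > state["send_base"]:
            state["send_base"] = y
            state["dup_acks"] = 0
            if state["unacked_segments"]:
                start_timer(state)
        else:
            # duplicate ACK for data that is already ACKed
            state["dup_acks"] += 1
            if state["dup_acks"] == 3:
                resend(state, seq=y)     # fast retransmit: don't wait for the timeout

    def start_timer(state):
        print("timer (re)started")

    def resend(state, seq):
        print("fast retransmit of segment with seq", seq)

    state = {"send_base": 100, "dup_acks": 0, "unacked_segments": [100]}
    for ack in [100, 100, 100]:          # three duplicate ACKs for byte 100
        on_ack(state, ack)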

TCP Flow Control: how it works
- (Suppose the TCP receiver discards out-of-order segments)
- Spare room in the buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
- The receiver advertises the spare room by including the value of RcvWindow in segments
- The sender limits unacked data to RcvWindow; this guarantees the receive buffer doesn't overflow

TCP Connection Management
- Recall: the TCP sender and receiver establish a connection before exchanging data segments; they initialize TCP variables: seq. #s, buffers, flow control info (e.g., RcvWindow)
- client (connection initiator): Socket clientSocket = new Socket("hostname", "port number");
- server (contacted by client): Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:
- Step 1: the client host sends a TCP SYN segment to the server; it specifies the client's initial seq #; no data
- Step 2: the server host receives the SYN and replies with a SYNACK segment; the server allocates buffers and specifies the server's initial seq. #
- Step 3: the client receives the SYNACK and replies with an ACK segment, which may contain data
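A tiny sketch of the receive-window computation above; the buffer size and byte counters are illustrative values, not from the slide.

    # Receiver-side window computation from the slide:
    # RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)
    def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
        return rcv_buffer - (last_byte_rcvd - last_byte_read)

    # Example: 64 KB buffer, 20000 bytes received, app has read 12000 of them,
    # so 8000 bytes are still buffered and 57536 bytes of spare room remain.
    print(rcv_window(rcv_buffer=65536, last_byte_rcvd=20000, last_byte_read=12000))

The sender then keeps LastByteSent - LastByteAcked at or below the advertised RcvWindow.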

[Figure: three-way handshake message exchange]

TCP Connection Management (cont.)
Closing a connection: the client closes its socket: clientSocket.close();
- Step 1: the client end system sends a TCP FIN control segment to the server
- Step 2: the server receives the FIN and replies with an ACK; it then closes the connection and sends its own FIN
- [Figure: client/server close sequence: FIN from the client, ACK and FIN from the server, final ACK from the client, which enters a timed wait before the connection is closed]

TCP Connection Management (cont.)
- Step 3: the client receives the FIN and replies with an ACK; it enters a timed wait, during which it will respond with an ACK to any received FINs
- Step 4: the server receives the ACK; the connection is closed
- Note: with a small modification, this can handle simultaneous FINs
- [Figures: closing sequence between client and server; TCP server lifecycle; TCP client lifecycle (state diagrams)]

Principles of Congestion Control
- Congestion, informally: too many sources sending too much data too fast for the network to handle
- Different from flow control!
- Manifestations: lost packets (buffer overflow at routers) and long delays (queueing in router buffers)
- A top-10 problem!

Causes/costs of congestion: scenario 1
- Two senders, two receivers; one router with infinite buffers; no retransmission
- [Figure: Host A and Host B send original data at rate λin into unlimited shared output link buffers, producing output rate λout]
- Large delays when congested; there is a maximum achievable throughput

Causes/costs of congestion: scenario 2
- One router, finite buffers; the sender retransmits lost packets
- Unneeded retransmissions: the link carries multiple copies of a packet
- [Figure: Host A and Host B send original data at rate λin, plus retransmitted data, for an offered load λ'in into finite shared output link buffers, producing output rate λout]

Causes/costs of congestion (cont.)
- Another cost of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

Approaches towards congestion control
Two broad approaches:
- End-end congestion control: no explicit feedback from the network; congestion is inferred from end-system observed loss and delay; this is the approach taken by TCP
- Network-assisted congestion control: routers provide feedback to end systems, either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or an explicit rate the sender should send at

TCP Congestion Control
- End-end control (no network assistance)
- The sender limits transmission: LastByteSent - LastByteAcked <= CongWin
- Roughly, rate = CongWin / RTT bytes/sec
- CongWin is dynamic, a function of perceived network congestion
- How does the sender perceive congestion? A loss event = a timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event
- Three mechanisms: AIMD, slow start, and being conservative after timeout events

TCP AIMD
- Multiplicative decrease: cut CongWin in half after a loss event
- Additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)
- [Figure: congestion window (8, 16, 24 Kbytes) vs. time for a long-lived TCP connection, showing the AIMD sawtooth]

TCP Slow Start
- When a connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes and RTT = 200 msec gives an initial rate of only 20 kbps
- The available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate
- When a connection begins, increase the rate exponentially fast until the first loss event
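A toy Python sketch of the AIMD sawtooth above: additive increase of 1 MSS per RTT and halving of CongWin on each loss event. The loss pattern and the starting window are made up for illustration.

    MSS = 1    # count the window in MSS units for simplicity

    def aimd_step(cong_win, loss_event):
        if loss_event:
            return max(MSS, cong_win / 2)     # multiplicative decrease
        return cong_win + MSS                 # additive increase: +1 MSS per RTT

    # Illustrative run: a loss every 8th RTT produces the familiar sawtooth.
    cong_win = 8.0
    for rtt in range(1, 25):
        cong_win = aimd_step(cong_win, loss_event=(rtt % 8 == 0))
        print(rtt, cong_win)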

TCP Slow Start (more)
- When a connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT, done by incrementing CongWin by 1 MSS for every ACK received
- Summary: the initial rate is slow but ramps up exponentially fast
- [Figure: Host A sends one segment, then two segments, then four segments, one round per RTT]

Refinement
- After 3 dup ACKs: CongWin is cut in half; the window then grows linearly
- But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly
- Philosophy: 3 dup ACKs indicate the network is still capable of delivering some segments; a timeout before 3 dup ACKs is more alarming

Refinement (more)
- Q: When should the exponential increase switch to linear?
- A: When CongWin gets to 1/2 of its value before the timeout.
- Implementation: a variable Threshold. At a loss event, Threshold is set to 1/2 of the CongWin just before the loss event.

Summary: TCP Congestion Control
- When CongWin is below Threshold, the sender is in the slow-start phase and the window grows exponentially.
- When CongWin is above Threshold, the sender is in the congestion-avoidance phase and the window grows linearly.
- When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
- When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
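The summary above can be captured in a small Python state update. This is an idealized sketch (window in MSS units, per-RTT granularity, an illustrative initial Threshold of 64 MSS), not a faithful implementation of any particular TCP variant.

    MSS = 1

    class CongestionControl:
        def __init__(self):
            self.cong_win = 1 * MSS          # slow start begins at 1 MSS
            self.threshold = 64 * MSS        # illustrative initial threshold

        def on_rtt_no_loss(self):
            if self.cong_win < self.threshold:
                self.cong_win *= 2           # slow start: exponential growth
            else:
                self.cong_win += MSS         # congestion avoidance: linear growth

        def on_triple_dup_ack(self):
            self.threshold = self.cong_win / 2
            self.cong_win = self.threshold   # halve the window, keep growing linearly

        def on_timeout(self):
            self.threshold = self.cong_win / 2
            self.cong_win = 1 * MSS          # back to slow start

    cc = CongestionControl()
    for _ in range(7):
        cc.on_rtt_no_loss()
    cc.on_triple_dup_ack()
    print(cc.cong_win, cc.threshold)         # window halved, threshold = old window / 2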

Congestion Avoidance
- TCP's strategy: control congestion once it happens; repeatedly increase the load in an effort to find the point at which congestion occurs, and then back off
- Alternative strategy: predict when congestion is about to happen and reduce the rate before packets start being discarded; call this congestion avoidance, instead of congestion control

Random Early Detection (RED)
- Notification is implicit: just drop the packet (TCP will time out); it could be made explicit by marking the packet
- Early random drop: rather than wait for the queue to become full, drop each arriving packet with some drop probability whenever the queue length exceeds some drop level

RED Details
- Compute the average queue length: AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen, with 0 < Weight < 1 (usually 0.002); SampleLen is the queue length measured each time a packet arrives
- [Figure: queue with MinThreshold and MaxThreshold marked against AvgLen]

RED Details (cont.)
Two queue-length thresholds:
- if AvgLen <= MinThreshold, then enqueue the packet
- if MinThreshold < AvgLen < MaxThreshold, then calculate a probability P and drop the arriving packet with probability P
- if MaxThreshold <= AvgLen, then drop the arriving packet
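A Python sketch of the RED decision above. The linear interpolation used for P between the two thresholds follows the drop-probability curve described just below (rising from 0 at MinThresh to MaxP at MaxThresh); the threshold and MaxP values are illustrative, and the full RED algorithm's count-based adjustment of P is omitted.

    import random

    WEIGHT = 0.002        # usual averaging weight from the slide
    MIN_THRESHOLD = 5     # illustrative values, in packets
    MAX_THRESHOLD = 15
    MAX_P = 0.02          # drop probability reached at MaxThreshold (illustrative)

    def update_avg_len(avg_len, sample_len):
        # AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
        return (1 - WEIGHT) * avg_len + WEIGHT * sample_len

    def red_drop(avg_len):
        """Return True if the arriving packet should be dropped."""
        if avg_len <= MIN_THRESHOLD:
            return False                              # always enqueue
        if avg_len >= MAX_THRESHOLD:
            return True                               # always drop
        # Between the thresholds: drop with probability P, rising linearly
        # from 0 at MinThreshold to MaxP at MaxThreshold.
        p = MAX_P * (avg_len - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
        return random.random() < p

    avg = 0.0
    for sample in [4, 8, 12, 20, 18]:                 # made-up instantaneous queue lengths
        avg = update_avg_len(avg, sample)
        print(round(avg, 3), red_drop(avg))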

RED Details (cont.)
- [Figure: drop probability curve. P(drop) stays at 0 until AvgLen reaches MinThresh, rises linearly to MaxP at MaxThresh, and jumps to 1.0 beyond MaxThresh.]

TCP Vegas
- Idea: the source watches for some sign that a router's queue is building up and congestion will soon happen; e.g., the RTT grows

TCP Vegas
- Let BaseRTT be the minimum of all measured RTTs (commonly the RTT of the first packet)
- If the connection is not overflowing the path, then ExpectedRate = CongestionWindow / BaseRTT
- The source calculates its sending rate (ActualRate) once per RTT and compares ActualRate with ExpectedRate:
    Diff = ExpectedRate - ActualRate
    if Diff < α, increase CongestionWindow linearly
    else if Diff > β, decrease CongestionWindow linearly
    else leave CongestionWindow unchanged

TCP Fairness
- Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
- In practice this does not happen in TCP, as connections with lower RTT are able to grab the available link bandwidth more quickly
- [Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
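A Python sketch of the Vegas window adjustment above. The α and β values and the per-RTT measurements are illustrative (real Vegas expresses the thresholds in terms of extra packets buffered in the network).

    ALPHA = 1.0    # illustrative thresholds on Diff = ExpectedRate - ActualRate,
    BETA = 3.0     # in the same units as the rates (e.g., packets per RTT)

    def vegas_update(cong_win, base_rtt, actual_rate):
        expected_rate = cong_win / base_rtt
        diff = expected_rate - actual_rate
        if diff < ALPHA:
            return cong_win + 1          # increase linearly: too little data in flight
        if diff > BETA:
            return cong_win - 1          # decrease linearly: queues are building up
        return cong_win                  # in the sweet spot: leave the window alone

    # Example: window of 20 packets, BaseRTT of 1 time unit, measured rate 16 per RTT,
    # so Diff = 4 > beta and the window is reduced by one packet.
    print(vegas_update(cong_win=20, base_rtt=1.0, actual_rate=16.0))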

Fairness (more)

Fairness and UDP
- Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- Instead they use UDP: pump audio/video at a constant rate and tolerate packet loss
- Research area: TCP-friendly rate control

Fairness and parallel TCP connections
- Nothing prevents an app from opening parallel connections between 2 hosts; web browsers do this
- Example: a link of rate R supporting 9 connections; a new app that asks for 1 TCP connection gets rate R/10, but a new app that asks for 11 TCP connections gets R/2!

TCP Options: Protection Against Wrap Around (32-bit SequenceNum)

    Bandwidth            Time Until Wrap Around
    T1 (1.5 Mbps)        6.4 hours
    Ethernet (10 Mbps)   57 minutes
    T3 (45 Mbps)         13 minutes
    FDDI (100 Mbps)      6 minutes
    STS-3 (155 Mbps)     4 minutes
    STS-12 (622 Mbps)    55 seconds
    STS-24 (1.2 Gbps)    28 seconds
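The wrap-around times above follow from dividing the 32-bit byte sequence space (2^32 bytes) by the link bandwidth. A quick Python check, with the bandwidth list mirroring the table:

    SEQ_SPACE_BITS = 2 ** 32 * 8          # 32-bit byte sequence space, in bits

    links = {
        "T1": 1.5e6, "Ethernet": 10e6, "T3": 45e6, "FDDI": 100e6,
        "STS-3": 155e6, "STS-12": 622e6, "STS-24": 1.2e9,
    }

    for name, bps in links.items():
        wrap_seconds = SEQ_SPACE_BITS / bps
        print(f"{name:8s} {wrap_seconds / 60:8.1f} minutes until sequence-number wrap")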

TCP Options: Keeping the Pipe Full (16-bit AdvertisedWindow)

    Bandwidth            Delay x Bandwidth Product
    T1 (1.5 Mbps)        18 KB
    Ethernet (10 Mbps)   122 KB
    T3 (45 Mbps)         549 KB
    FDDI (100 Mbps)      1.2 MB
    STS-3 (155 Mbps)     1.8 MB
    STS-12 (622 Mbps)    7.4 MB
    STS-24 (1.2 Gbps)    14.8 MB

    (assuming a 100 ms RTT)

TCP Extensions
- Implemented as header options
- Store a timestamp in outgoing segments
- Extend the sequence space with a 32-bit timestamp (PAWS)
- Shift (scale) the advertised window
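A quick Python check of the delay-bandwidth products above and of the window-scale shift needed to cover each pipe with a 16-bit AdvertisedWindow. The 100 ms RTT is the slide's assumption; the shift computation illustrates the window-scaling idea rather than any specific implementation.

    RTT = 0.100                       # seconds, the slide's assumed RTT
    MAX_UNSCALED_WINDOW = 2 ** 16     # 16-bit AdvertisedWindow, in bytes

    links = {"T1": 1.5e6, "Ethernet": 10e6, "T3": 45e6, "FDDI": 100e6,
             "STS-3": 155e6, "STS-12": 622e6, "STS-24": 1.2e9}

    for name, bps in links.items():
        pipe_bytes = bps * RTT / 8    # delay x bandwidth product
        shift = 0                     # window-scale shift needed to cover the pipe
        while (MAX_UNSCALED_WINDOW << shift) < pipe_bytes:
            shift += 1
        print(f"{name:8s} {pipe_bytes / 1024:7.0f} KB in flight, window scale shift {shift}")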