Last Course Review

Principles of Congestion Control
Congestion, informally: too many sources sending too much data too fast for the network to handle.
- different from flow control!
- manifestations: lost packets (buffer overflow at routers), long delays (queueing in router buffers)
- a top-10 problem!

Approaches towards congestion control
Two broad approaches towards congestion control:
- End-end congestion control: no explicit feedback from the network; congestion inferred from end-system observed loss and delay. Approach taken by TCP.
- Network-assisted congestion control: routers provide feedback to end systems, either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or an explicit rate the sender should send at.

TCP Congestion Control
- end-end control (no network assistance)
- sender limits transmission: LastByteSent - LastByteAcked <= CongWin
- roughly, rate = CongWin / RTT bytes/sec
- CongWin is dynamic, a function of perceived network congestion
- How does the sender perceive congestion? loss event = timeout or 3 duplicate ACKs
- TCP sender reduces rate (CongWin) after a loss event
- three mechanisms: AIMD, slow start, conservative behavior after timeout events

TCP AIMD
- multiplicative decrease: cut CongWin in half after a loss event
- additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
(Figure: congestion window of a long-lived TCP connection, a sawtooth over time; axis labels 8, 16, 24 Kbytes.)

TCP Slow Start (more)
- when a connection begins, increase rate exponentially until the first loss event: double CongWin every RTT, done by incrementing CongWin for every ACK received
- summary: initial rate is slow but ramps up exponentially fast
(Figure: Host A / Host B segment exchange over successive RTTs.)

TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
(Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.)
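The window dynamics above (slow start's exponential growth, then AIMD's additive probing and multiplicative decrease) can be sketched in a few lines. This is an illustrative simulation, not real TCP: the initial slow-start threshold of 16 MSS and the loss schedule are assumptions, and the timeout vs. triple-duplicate-ACK distinction is ignored.

```python
def evolve(loss_rtts, total_rtts=20):
    """Return the congestion window (in MSS units) at the start of each RTT.

    loss_rtts: set of RTT indices at which a loss event (3 dup ACKs) occurs.
    """
    congwin, ssthresh = 1, 16   # ssthresh start value is an assumption
    history = []
    for rtt in range(total_rtts):
        history.append(congwin)
        if rtt in loss_rtts:
            ssthresh = max(congwin // 2, 2)  # remember half the window
            congwin = ssthresh               # multiplicative decrease
        elif congwin < ssthresh:
            congwin *= 2                     # slow start: double per RTT
        else:
            congwin += 1                     # AIMD: +1 MSS per RTT

    return history

h = evolve({8})
print(h)
```

The printed history shows the characteristic shape: 1, 2, 4, 8, 16 (slow start), linear growth to 20, then a halving after the loss event and linear probing again.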
Fairness (more)
Fairness and UDP:
- multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
- research area: TCP-friendly congestion control
Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts; web browsers do this
- example: a link of rate R supporting 9 connections; a new app asking for 1 TCP connection gets rate R/10; a new app asking for 11 TCP connections gets R/2!

Chapter goals:
- understand principles behind network layer services: routing (path selection), dealing with scale, how a router works; advanced topics: IPv6, mobility
- instantiation and implementation in the Internet

(Chapter outline divider: virtual circuit and datagram networks; 4.4 IP; 4.6 Routing in the Internet.)

Network layer
- transports segments from sending to receiving host
- on the sending side, encapsulates segments into datagrams
- on the receiving side, delivers segments to the transport layer
- network layer protocols in every host and router
- a router examines header fields in all IP datagrams passing through it

Key Network-Layer Functions
- forwarding: move packets from a router's input to the appropriate router output
- routing: determine the route taken by packets from source to destination (routing algorithms)
Analogy: routing is the process of planning a trip from source to destination; forwarding is the process of getting through a single interchange.

Interplay between routing and forwarding
The routing algorithm fills in the local forwarding table, which maps header values to output links; the router looks up the value in the arriving packet's header to pick the output link.
(Figure: routing algorithm feeding a local forwarding table of header value / output link pairs.)
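The parallel-connection example above is just arithmetic: if the bottleneck bandwidth is split evenly per connection, an app's share is proportional to how many connections it opens. A hedged sketch (the perfectly even per-connection split is an idealization of TCP fairness, not a guarantee):

```python
R = 1.0  # bottleneck link rate, normalized

def share(my_connections, other_connections):
    """App's aggregate rate if the link is split evenly per connection."""
    total = my_connections + other_connections
    return R * my_connections / total

# 9 existing connections; a new app opens 1 connection:
print(share(1, 9))    # R/10
# the same app opens 11 parallel connections instead:
print(share(11, 9))   # 11R/20, roughly R/2
```

This is why per-connection fairness differs from per-host or per-application fairness: the app that opens more connections simply gets more of R.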
Connection setup
- a 3rd important function in some network architectures: ATM, frame relay, X.25
- before datagrams flow, the two hosts and intervening routers establish a virtual connection; routers get involved
- network vs. transport layer connection service:
  - network layer: between two hosts
  - transport layer: between two processes

Network service model
Q: What service model for the "channel" transporting datagrams from sender to receiver?
Example services for individual datagrams:
- guaranteed delivery
- guaranteed delivery with less than 40 msec delay
Example services for a flow of datagrams:
- in-order datagram delivery
- guaranteed minimum bandwidth to the flow
- restrictions on changes in inter-packet spacing

Network layer service models:

Network Architecture | Service Model | Bandwidth Guarantee | Loss | Order | Timing | Congestion feedback
Internet             | best effort   | none                | no   | no    | no    | no (inferred via loss)
ATM                  | CBR           | constant rate       | yes  | yes   | yes   | no congestion
ATM                  | VBR           | guaranteed rate     | yes  | yes   | yes   | no congestion
ATM                  | ABR           | guaranteed minimum  | no   | yes   | no    | yes
ATM                  | UBR           | none                | no   | yes   | no    | no

Network layer connection and connection-less service
- a datagram network provides network-layer connectionless service
- a VC network provides network-layer connection service
- analogous to the transport-layer services, but:
  - service: host-to-host
  - no choice: the network provides one or the other
  - implementation: in the network core

Virtual circuits
The source-to-destination path behaves much like a telephone circuit: performance-wise, and in actions along the path.
- call setup and teardown for each call before data can flow
- each packet carries a VC identifier (not a destination host address)
- every router on the source-destination path maintains state for each passing connection
- link and router resources (bandwidth, buffers) may be allocated to the VC
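The per-connection state that each router maintains for a VC can be sketched as a lookup table keyed by (incoming interface, incoming VC #): the router forwards the packet out the mapped interface and rewrites its VC number. A toy model; the table entries are illustrative values, not any real router's state.

```python
# (incoming interface, incoming VC#) -> (outgoing interface, outgoing VC#)
vc_table = {
    (1, 12): (3, 22),
    (2, 63): (1, 18),
    (3, 7):  (2, 17),
    (1, 97): (3, 87),
}

def forward(in_iface, in_vc):
    """Return (out_iface, out_vc): the packet leaves on out_iface
    carrying the rewritten VC number out_vc."""
    return vc_table[(in_iface, in_vc)]

print(forward(1, 12))   # a packet arriving on interface 1 with VC# 12
```

Note what this implies for the data plane: the VC number in the packet has only per-link meaning, and every router on the path must hold an entry per active connection.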
VC implementation
A VC consists of:
1. a path from source to destination
2. VC numbers, one number for each link along the path
3. entries in forwarding tables in routers along the path
A packet belonging to a VC carries a VC number. The VC number must be changed on each link; the new VC number comes from the forwarding table.
(Figure: the packet's VC number is rewritten hop by hop, e.g., 12 -> 22 -> 32.)

Forwarding table
Forwarding table in the northwest router (interface numbers 1, 2, 3):

Incoming interface | Incoming VC # | Outgoing interface | Outgoing VC #
1                  | 12            | 3                  | 22
2                  | 63            | 1                  | 18
3                  | 7             | 2                  | 17
1                  | 97            | 3                  | 87
...

Routers maintain connection state information!

Virtual circuits: signaling protocols
- used to set up, maintain, and tear down a VC
- used in ATM, frame relay, X.25
- not used in today's Internet
(Figure: 1. initiate call; 2. incoming call; 3. accept call; 4. call connected; 5. data flow begins; 6. receive data.)

Datagram networks
- no call setup at the network layer
- routers: no state about end-to-end connections, no network-level concept of "connection"
- packets forwarded using the destination host address; packets between the same source-destination pair may take different paths
(Figure: 1. send data; 2. receive data.)

Forwarding table
4 billion possible entries.

Destination Address Range                                                          Link Interface
11001000 00010111 00010000 00000000 through 11001000 00010111 00010111 11111111   0
11001000 00010111 00011000 00000000 through 11001000 00010111 00011000 11111111   1
11001000 00010111 00011001 00000000 through 11001000 00010111 00011111 11111111   2
otherwise                                                                          3

Longest prefix matching

Prefix Match                     Link Interface
11001000 00010111 00010          0
11001000 00010111 00011000       1
11001000 00010111 00011          2
otherwise                        3

Examples:
DA: 11001000 00010111 00010110 10100001   Which interface?
DA: 11001000 00010111 00011000 10101010   Which interface?
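The longest-prefix-matching rule can be sketched directly over bit strings. A linear scan is used here for clarity (real routers use tries or TCAM hardware, but the rule is the same: the longest matching prefix wins). The prefixes are the ones from the table above.

```python
# Prefix-match table: (bit-string prefix, link interface).
TABLE = [
    ("11001000 00010111 00010", 0),
    ("11001000 00010111 00011000", 1),
    ("11001000 00010111 00011", 2),
]
DEFAULT_LINK = 3  # the "otherwise" entry

def lookup(dest_bits):
    """Return the link interface for a 32-bit destination address."""
    addr = dest_bits.replace(" ", "")
    best_len, best_link = -1, DEFAULT_LINK
    for prefix, link in TABLE:
        p = prefix.replace(" ", "")
        if addr.startswith(p) and len(p) > best_len:
            best_len, best_link = len(p), link   # longer match wins
    return best_link

print(lookup("11001000 00010111 00010110 10100001"))  # -> 0
print(lookup("11001000 00010111 00011000 10101010"))  # -> 1
```

These two lookups answer the slide's "Which interface?" examples: the first address matches only the 21-bit prefix for link 0; the second matches both the 21-bit prefix for link 2 and the longer 24-bit prefix for link 1, so link 1 wins.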
Datagram or VC network: why?
Internet (datagram):
- data exchange among computers: elastic service, no strict timing requirement
- "smart" end systems (computers) can adapt, perform control and error recovery: simple inside the network, complexity at the edge
- many link types with different characteristics: uniform service difficult
ATM (VC):
- evolved from telephony
- human conversation: strict timing and reliability requirements, hence a need for guaranteed service
- "dumb" end systems (telephones): complexity inside the network

Router Architecture Overview
Two key router functions:
- run routing algorithms/protocols (RIP, OSPF, BGP)
- forward datagrams from incoming to outgoing link

Input Port Functions
- physical layer: bit-level reception
- data link layer: e.g., Ethernet (see chapter 5)
- decentralized switching:
  - given the datagram destination, look up the output port using the forwarding table in input port memory
  - goal: complete input port processing at line speed
  - queuing: if datagrams arrive faster than the forwarding rate into the switch fabric

Three types of switching fabrics
(memory, bus, interconnection network)

Switching Via Memory
First generation routers: traditional computers with switching under direct control of the CPU; the packet is copied to the system's memory; speed is limited by memory bandwidth (2 system bus crossings per datagram).
(Figure: Input Port -> Memory -> Output Port across the System Bus.)
Switching Via a Bus
- datagram moves from input port memory to output port memory via a shared bus
- bus contention: switching speed limited by bus bandwidth
- 1 Gbps bus (Cisco 1900): sufficient speed for access and enterprise routers (not regional or backbone)

Switching Via An Interconnection Network
- overcomes bus bandwidth limitations
- Banyan networks and other interconnection nets were initially developed to connect processors in multiprocessors
- advanced design: fragment the datagram into fixed-length cells, switch the cells through the fabric
- Cisco 12000: switches gigabits per second through the interconnection network

Output Ports
- buffering required when datagrams arrive from the fabric faster than the transmission rate
- a scheduling discipline chooses among queued datagrams for transmission

Output port queueing
- buffering when the arrival rate via the switch exceeds the output line speed
- queueing (delay) and loss due to output port buffer overflow!

Input Port Queuing
- fabric slower than input ports combined -> queueing may occur at input queues
- Head-of-the-Line (HOL) blocking: a queued datagram at the front of the queue prevents others in the queue from moving forward
- queueing delay and loss due to input buffer overflow!
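The output-port behavior above (queueing delay while the buffer absorbs bursts, then loss once it overflows) can be illustrated with a toy discrete-time simulation. All rates and the buffer size are made-up illustrative numbers, not properties of any real router.

```python
def simulate(arrivals_per_tick, departures_per_tick, buffer_size, ticks):
    """Simulate one output port; return (final queue, sent, dropped)."""
    queued = dropped = sent = 0
    for _ in range(ticks):
        # datagrams arrive from the switch fabric
        for _ in range(arrivals_per_tick):
            if queued < buffer_size:
                queued += 1
            else:
                dropped += 1          # output buffer overflow -> loss
        # the line transmits at its fixed speed
        served = min(queued, departures_per_tick)
        queued -= served
        sent += served
    return queued, sent, dropped

# arrival rate (3/tick) exceeds line speed (2/tick): queue fills, then drops
print(simulate(arrivals_per_tick=3, departures_per_tick=2,
               buffer_size=4, ticks=10))
```

With arrivals persistently above the line speed, the queue reaches the buffer limit after a couple of ticks and one datagram per tick is dropped thereafter, which is exactly the delay-then-loss pattern the slide describes.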
The Network layer
Host and router network layer functions, sitting between the transport layer (TCP, UDP) and the link layer:
- Routing protocols: path selection (RIP, OSPF, BGP); they produce the forwarding table
- IP protocol: addressing conventions, datagram format, packet handling conventions
- ICMP protocol: error reporting, router "signaling"