Introduction Abusayeed Saifullah CS 5600 Computer Networks These slides are adapted from Kurose and Ross
Roadmap 1.1 what is the Internet? 1.2 network edge: end systems, access networks, links 1.3 network core: packet switching, circuit switching, network structure 1.4 delay, loss, throughput in networks 1.5 protocol layers, service models 1.6 networks under attack: security 1.7 history
The network core v mesh of interconnected routers v Two fundamental approaches to moving data through a network: Packet switching: forward packets from one router to the next, across links on path from source to destination Circuit switching: dedicated source-destination path
Packet switching: store-and-forward (figure: source sends packets of L bits each across two links of rate R bps to destination) v takes L/R seconds to transmit (push out) L-bit packet into link at R bps v store and forward: entire packet must arrive at router before it can be transmitted on next link v end-end delay = 2L/R (assuming zero propagation delay) one-hop numerical example: L = 7.5 Mbits, R = 1.5 Mbps, one-hop transmission delay = 5 sec more on delay shortly
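To make the numbers concrete, here is a minimal Python sketch of the slide's one-hop example; the function and variable names are our own illustration, not part of the slides.

```python
def transmission_delay(packet_bits, rate_bps):
    """Time to push an L-bit packet onto a link of rate R bps: L/R seconds."""
    return packet_bits / rate_bps

L, R = 7.5e6, 1.5e6                    # the slide's example: 7.5 Mbit packet, 1.5 Mbps link
print(transmission_delay(L, R))        # 5.0 s per hop
print(2 * transmission_delay(L, R))    # 10.0 s end-end over the two hops in the figure (zero propagation delay)
```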
Two key network-core functions routing: determines source-destination route taken by packets (routing algorithms) forwarding: move packets from router's input to appropriate router output (figure: routing algorithm fills the local forwarding table; header value → output link: 0100 → 3, 0101 → 2, 0111 → 2, 1001 → 1; dest address in arriving packet's header is looked up in the table)
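As a rough illustration of the forwarding step, here is a toy Python lookup using the table values from the figure; it assumes, for simplicity, an exact match on the header value, whereas real routers match on destination-address prefixes (longest-prefix match, covered later in the course).

```python
# Forwarding table from the figure: header value -> output link (toy exact-match version)
forwarding_table = {"0100": 3, "0101": 2, "0111": 2, "1001": 1}

def forward(dest_header: str) -> int:
    """Return the output link for the destination value carried in the packet header."""
    return forwarding_table[dest_header]

print(forward("0101"))  # 2: packet is switched to output link 2
```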
Alternative core: circuit switching end-end resources allocated to, reserved for call between source & dest: v In diagram, each link has four circuits; call gets 2nd circuit in top link and 1st circuit in right link. v dedicated resources: no sharing, circuit-like (guaranteed) performance v circuit segment idle if not used by call (no sharing) v Commonly used in traditional telephone networks
Circuit switching: FDM versus TDM Example: 4 users (figure: FDM divides the link's frequency band among the users; TDM divides time into slots)
Packet switching versus circuit switching
Packet switching versus circuit switching packet switching allows more users to use the network! example: 1 Mb/s link, each user: 100 kb/s when active, active 10% of time (figure: N users share the 1 Mbps link) v circuit switching: 10 users v packet switching: with 35 users, probability > 10 active at same time is less than 0.0004 * Q: how did we get the value 0.0004? (see the sketch below) Q: what happens if > 35 users?
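A minimal sketch of where the 0.0004 figure comes from (the function and its defaults are our own illustration): with 35 independent users, each active 10% of the time, the number of simultaneously active users is Binomial(35, 0.1), and the 1 Mb/s link is oversubscribed only when more than 10 users (10 × 100 kb/s) are active at once.

```python
from math import comb

def prob_more_than(k, n=35, p=0.1):
    """P(more than k of n independent users are active, each active with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

print(prob_more_than(10))  # roughly 0.0004: the link is overloaded only a tiny fraction of the time
```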
Internet structure: network of networks v End systems connect to Internet via ISPs (Internet Service Providers) residential, company and university ISPs v Access ISPs in turn must be interconnected v so that any two hosts can send packets to each other v Resulting network of networks is very complex v Evolution was driven by economics and national policies v Let's take a stepwise approach to describe current Internet structure
Internet structure: network of networks Question: given millions of ISPs, how to connect them together?
Internet structure: network of networks Option: connect each ISP to every other ISP? connecting each ISP to each other directly doesn't scale: O(N^2) connections (for N ISPs, roughly N(N-1)/2 links).
Internet structure: network of networks Option: connect each ISP to a global transit ISP? Customer and provider ISPs have economic agreement. (figure: access ISPs all connect to one global ISP)
Internet structure: network of networks Global ISPs must be interconnected. (figure: ISP A, ISP B, ISP C interconnected via Internet exchange points (IXPs) and peering links)
Internet structure: network of networks and regional networks may arise to connect access networks to ISPs (figure: regional network attached via IXPs to ISP A, ISP B, ISP C)
Internet structure: network of networks and content provider networks (e.g., Google, Microsoft, Akamai) may run their own network, to bring services and content close to end users (figure: content provider network peering with ISP A and ISP B at IXPs and with regional networks)
Roadmap 1.1 what is the Internet? 1.2 network edge: end systems, access networks, links 1.3 network core: packet switching, circuit switching, network structure 1.4 delay, loss, throughput in networks 1.5 protocol layers, service models 1.6 networks under attack: security 1.7 history
How do loss and delay occur? packets queue in router buffers v packet arrival rate to link (temporarily) exceeds output link capacity v packets queue, wait for turn (figure: at the router between A and B, one packet is being transmitted (delay), others are queueing (delay); if no free (available) buffers remain, arriving packets are dropped (loss))
Four sources of packet delay (figure: packet crossing a router from A to B incurs nodal processing, queueing, transmission, and propagation delay) d_nodal = d_proc + d_queue + d_trans + d_prop d_proc: nodal processing check bit errors, determine output link, typically < msec d_queue: queueing delay time waiting at output link for transmission, depends on congestion level of router
Four sources of packet delay d_nodal = d_proc + d_queue + d_trans + d_prop d_trans: transmission delay: L: packet length (bits), R: link bandwidth (bps), d_trans = L/R d_prop: propagation delay: d: length of physical link, s: propagation speed in medium (~2x10^8 m/sec), d_prop = d/s d_trans and d_prop are very different
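Putting the four components together, here is a small Python sketch of the nodal-delay formula; the specific numbers (1 ms processing, 2 ms queueing, a 1500-byte packet on a 10 Mb/s, 1000 km link) are assumed for illustration only.

```python
def nodal_delay(d_proc, d_queue, packet_bits, link_bps, link_m, prop_speed=2e8):
    """d_nodal = d_proc + d_queue + d_trans + d_prop, all in seconds."""
    d_trans = packet_bits / link_bps   # transmission delay L/R
    d_prop = link_m / prop_speed       # propagation delay d/s (s ~ 2e8 m/s)
    return d_proc + d_queue + d_trans + d_prop

# assumed example: 1 ms processing, 2 ms queueing, 1500-byte packet, 10 Mb/s link, 1000 km
print(nodal_delay(1e-3, 2e-3, 1500 * 8, 10e6, 1_000_000))  # ~0.0092 s (9.2 ms)
```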
Caravan analogy (figure: ten-car caravan travels 100 km to a toll booth, then 100 km to a second toll booth) v cars propagate at 100 km/hr v toll booth takes 12 sec to service a car (bit transmission time) v car ~ bit; caravan ~ packet v Q: How long until caravan is lined up before 2nd toll booth? time to push entire caravan through toll booth onto highway = 12*10 = 120 sec; time for last car to propagate from 1st to 2nd toll booth: 100 km / (100 km/hr) = 1 hr A: 2 min + 60 min = 62 minutes
Real Internet delays and routes v what do real Internet delay & loss look like? v traceroute program: provides delay measurement from source to each router along end-end Internet path towards destination. For all i: sends three packets that will reach router i on path towards destination; router i will return packets to sender; sender times interval between transmission and reply. (figure: 3 probes sent toward each router on the path)
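For a feel of how this works, here is a rough Python sketch of the traceroute idea, not the actual traceroute implementation: it sends one UDP probe per TTL value (real traceroute sends three per hop) and relies on routers returning ICMP "time exceeded" messages; it assumes a Unix-like system and requires root privileges for the raw ICMP socket.

```python
import socket, time

def toy_traceroute(dest_name, max_hops=30, port=33434, timeout=2.0):
    """Very simplified traceroute: one UDP probe per TTL, RTT measured at the sender."""
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)  # probe expires at router ttl
        recv_sock.settimeout(timeout)
        start = time.time()
        send_sock.sendto(b"", (dest_addr, port))
        router_addr = None
        try:
            _, (router_addr, _) = recv_sock.recvfrom(512)    # ICMP "time exceeded" from router ttl
            rtt_ms = (time.time() - start) * 1000
            print(f"{ttl:2d}  {router_addr}  {rtt_ms:.1f} ms")
        except socket.timeout:
            print(f"{ttl:2d}  *")                            # no reply: probe lost or router not replying
        finally:
            send_sock.close()
            recv_sock.close()
        if router_addr == dest_addr:
            break

# toy_traceroute("www.louvre.fr")   # the destination used in the slides' example
```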
Real Internet delays, routes traceroute: Rolla to www.louvre.fr
Real Internet delays, routes traceroute: Rolla to www.louvre.fr (figure: traceroute output; 3 delay measurements from St Louis per hop; * means no response (probe lost, router not replying))
Packet loss v queue (aka buffer) preceding link has finite capacity v packet arriving to full queue dropped (aka lost) v lost packet may be retransmitted by previous node, by source end system, or not at all (figure: buffer (waiting area) at A's outgoing link; one packet being transmitted toward B; a packet arriving to a full buffer is lost)
Throughput v throughput: rate (bits/time unit) at which bits are transferred between sender/receiver instantaneous: rate at given point in time average: rate over longer period of time (figure: server, with file of F bits to send to client, sends bits (fluid) into pipe; first link/pipe has capacity R_s bits/sec (can carry fluid at rate R_s bits/sec); second link/pipe has capacity R_c bits/sec (can carry fluid at rate R_c bits/sec))
Throughput (more) v R_s < R_c: What is average end-end throughput? (R_s bits/sec) v R_s > R_c: What is average end-end throughput? (R_c bits/sec) bottleneck link: link on end-end path that constrains end-end throughput
Throughput: Internet scenario v per-connection end-end throughput: min(R_c, R_s, R/10) v in practice: R_c or R_s is often the bottleneck (figure: 10 connections, each with access links R_s and R_c, (fairly) share backbone bottleneck link of R bits/sec)
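A small Python sketch of the per-connection throughput rule; the function name and the sample rates are our own illustration, not from the slides.

```python
def per_connection_throughput(rs, rc, r_backbone, n_connections):
    """Each of n_connections gets an equal share of the backbone: min(Rc, Rs, R/n)."""
    return min(rs, rc, r_backbone / n_connections)

# assumed rates: Rs = 2 Mb/s, Rc = 1 Mb/s, backbone R = 50 Mb/s shared by 10 connections
print(per_connection_throughput(2e6, 1e6, 50e6, 10))  # 1e6 b/s: the client access link Rc is the bottleneck
```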