Basic Multiplexing Models (a supermarket analogy?) - Computer Networks - Vassilis Tsaoussidis
Schedule
- Where does statistical multiplexing differ from TDM and FDM?
- Why are buffers necessary? What is their tradeoff, and how far should we use them?
- How do we model networks, and how do we represent the models?
- Understand basic concepts in multiplexing
- Store-and-forward leads to delays but better utilization of resources
Where queuing theory helps
- Resources: bandwidth, processors, storage
- Demand is unscheduled => delay or loss is likely
- Goal: quantify delay or loss
- The system has four components:
  - An entry point (arrival rate is important)
  - A waiting area (waiting time is important)
  - A service entity (service time differs; consider size)
  - An exit point (nothing is associated with it, hence we ignore it)
Queuing models
- In a typical application, customers demand resources at random times and use the resources for variable durations.
- This assumption holds for networks: packets have variable length and therefore demand varying service time.
- When all resources are in use, arriving customers form a queue.
- We assume that interarrival times are independent random variables.
Classification
- Notation: customer arrival pattern / service time distribution / number of servers / max number in the system
- M/M/1/K: interarrival times are exponentially distributed, service times are exponentially distributed, there is a single server, and at most K customers are allowed in the system.
- M/M/1: the same, but with no restriction on the number of customers.
- M/G/1: exponential arrivals, service times follow a general distribution, single server.
- M/D/1: exponential arrivals, constant (deterministic) service times, single server.
Arrival process / service time / servers / max occupancy
- M = exponential (arrivals or service times)
- D = deterministic
- G = general
- Servers: 1 server or c servers
- Occupancy: K customers or infinite (unspecified if unlimited)
- Arrival rate λ = 1/E[τ], where τ is the interarrival time
- Service rate μ = 1/E[X], where X is the service time
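The two rate definitions above can be checked with a short sketch; the sample interarrival and service times below are hypothetical, chosen only to illustrate the formulas λ = 1/E[τ] and μ = 1/E[X].

```python
# Estimate arrival and service rates from observed samples.
# Arrival rate lambda = 1/E[tau], service rate mu = 1/E[X].
interarrival_times = [0.5, 1.0, 1.5, 1.0]   # seconds (hypothetical samples)
service_times = [0.2, 0.3, 0.25, 0.25]      # seconds (hypothetical samples)

arrival_rate = 1 / (sum(interarrival_times) / len(interarrival_times))
service_rate = 1 / (sum(service_times) / len(service_times))

print(arrival_rate)  # 1.0 customers/sec
print(service_rate)  # 4.0 customers/sec
```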
Conclusions
- Buffers are good, but infinite (too large) buffers are of no use.
- Better without traffic lights? Better without separation lines?
- QoS has a price: there is always a tradeoff.
Traffic analogy
- Consider m lanes on a highway.
- In FDM, crossover is not allowed, so each lane can be used only by the cars in that lane.
- In statistical multiplexing, crossover is allowed (or, equivalently, a single sixfold-wide lane is used), so the road is used better.
Multiplexing traffic
- Transmission capacity C: number of bits per second that can be transmitted.
- TDM: with m traffic streams, the link capacity is divided into m time portions, one per traffic stream.
- FDM: the channel bandwidth W is divided into m sub-channels (i.e., bandwidth portions), each with bandwidth W/m and transmission capacity C/m, where C is the capacity of the total bandwidth W.
- An L-bit packet needs L/(C/m) = Lm/C transmission time.
- In statistical multiplexing we have a single queue and a FIFO transmission scheme; the full capacity is allocated to each packet. An L-bit packet takes L/C seconds to transmit.
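The per-packet transmission times above are easy to compare numerically; the packet length, capacity, and stream count below are hypothetical values chosen for illustration.

```python
# Per-packet transmission time under TDM/FDM vs statistical multiplexing.
# With m streams sharing capacity C, TDM/FDM gives each stream C/m,
# so an L-bit packet takes L*m/C seconds; SM uses the full C, so L/C.
L = 1000          # packet length in bits (hypothetical)
C = 1_000_000     # link capacity in bits/sec (hypothetical)
m = 10            # number of streams (hypothetical)

t_tdm = L / (C / m)   # seconds per packet under TDM/FDM
t_sm = L / C          # seconds per packet under statistical multiplexing

print(t_tdm)  # 0.01
print(t_sm)   # 0.001
```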
TDM vs SM: an example
- For n L-bit packets in a queue belonging to m streams, SM needs n packet transmission times; TDM needs at least as much, since slots of idle streams are wasted.
- Example: m = 3 streams; packet transmission time at full rate is 1 ms; the TDM slot is 1/3 ms.
- Arrivals per round: m1 gets packets in rounds 1, 2, 3; m2 in rounds 2, 3; m3 in rounds 1, 3 (n = 7 packets in total).
- Scheduling in TDM: each stream gets only 1/3 of the capacity, so each packet needs 3 ms worth of its stream's slots, and slots of streams with nothing to send are wasted; the last packet completes at 9 ms.
- Scheduling in SM: the single FIFO queue transmits packets back to back at full rate, 1 ms each, so all 7 packets complete at 7 ms.
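The 9 ms vs 7 ms outcome of this example can be reproduced with a small sketch, under the simplifying assumption that all packets are already queued and we only compare when the link finishes.

```python
# Completion time for the example: m = 3 streams, 7 packets total,
# full-rate packet transmission time 1 ms, TDM slot 1/3 ms per stream.
# Per the example, m1 has 3 packets, m2 has 2, m3 has 2.
packets_per_stream = [3, 2, 2]
m = len(packets_per_stream)

# TDM: each stream gets 1/m of the capacity, so each of its packets takes
# m ms; streams proceed in parallel slot-by-slot, so the link stays busy
# until the most loaded stream finishes (idle slots are wasted).
tdm_finish = max(n * m for n in packets_per_stream)   # ms

# Statistical multiplexing: one FIFO queue at full rate, 1 ms per packet.
sm_finish = sum(packets_per_stream) * 1               # ms

print(tdm_finish, sm_finish)  # 9 7
```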
Why/when buffering helps
With buffers:
- The link is better scheduled
- Queuing delay is increased
- Resource utilization is better
Basic queuing model for a multiplexer
- Customers (connection requests or packets) arrive to the system according to some arrival pattern.
- System: multiplexer, line, or network.
- A customer spends some time T in the system; after T, the customer departs.
- The system may enter a blocking state (e.g., due to lack of resources).
- Measures:
  - Time spent in the system: T
  - Number of customers in the system: N(t)
  - Fraction of arriving customers that are lost/blocked: Pb
  - Average number of packets/customers per second that pass through the system: throughput
Arrival rates and traffic load
- A(t): number of arrivals from time 0 to t
- B(t): number of blocked customers in (0, t)
- D(t): number of departed customers in (0, t)
- N(t): number of customers in the system = A(t) - B(t) - D(t)
- Long-term arrival rate: λ = lim (t→∞) A(t)/t customers/sec
- Throughput = lim (t→∞) D(t)/t customers/sec
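Over a finite observation window these limits become simple counter ratios; the counter values below are hypothetical.

```python
# Counters over an observation window of t seconds (hypothetical values).
t = 100.0       # seconds observed
A = 500         # arrivals in (0, t)
B = 20          # blocked customers
D = 470         # departures

N = A - B - D                 # customers currently in the system
arrival_rate = A / t          # approximates lambda = lim A(t)/t
throughput = D / t            # approximates lim D(t)/t

print(N, arrival_rate, throughput)  # 10 5.0 4.7
```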
Little's formula
- Relates the average time spent in the system E[T], the average number of customers in the system E[N], and the arrival rate λ.
- Then: E[N] = λ E[T].
- That is, crowded systems (large N) are associated with longer customer delays (large T): a fast-food (take-away) shop needs only a small waiting room, while on a rainy day traffic moves slower (large T) and the streets are more crowded (large N).
Little's Law says that, under steady-state conditions, the average number of items in a queuing system equals the average rate at which items arrive multiplied by the average time that an item spends in the system. Letting L = average number of items in the queuing system, W = average waiting time in the system for an item, and A = average number of items arriving per unit time, the law is L = AW (in our notation, E[N] = λ E[T]). This relationship is remarkably simple and general. We require stationarity assumptions about the underlying stochastic processes, but it is quite surprising what we do not require. We have not mentioned how many servers there are, whether each server has its own queue or a single queue feeds all servers, what the service time distributions are, what the distribution of interarrival times is, or what the order of service of items is.
Graphical proof sketch: the integral of N(s) over (0, t) equals the sum of the times the customers spent in the system (the integral equals the sum). Divide by t, then multiply and divide by A(t): (1/t) ∫ N(s) ds = (A(t)/t) x (1/A(t)) Σ Ti → λ E[T].
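The "integral equals the sum" step can be checked on a tiny deterministic example; the arrival times are hypothetical, and each customer is assumed to stay exactly 2 seconds (e.g. ample servers), so all quantities are exact.

```python
# Check Little's law on a deterministic example: the time integral of
# N(t) equals the sum of per-customer residence times, so E[N] = lambda * E[T].
arrivals = [0, 1, 2, 3]            # hypothetical arrival times (seconds)
time_in_system = 2.0               # each customer stays exactly 2 s
departures = [a + time_in_system for a in arrivals]
horizon = max(departures)          # observe until the system empties

# Integral of N(t) over the horizon = sum of residence times.
area = sum(time_in_system for _ in arrivals)
mean_N = area / horizon            # time-average number in system

lam = len(arrivals) / horizon      # arrival rate over the window
mean_T = time_in_system            # average time in system

print(mean_N, lam * mean_T)        # both sides agree: 1.6
```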
Servers
- If λ is the arrival rate, a customer arrives on average every 1/λ sec.
- If μ is the departure (service) rate and the system has 1 server, stability requires λ < μ; with m servers, λ < mμ.
- For 1 server: λ > μ => congestion; λ << μ => underutilization.
- ρ = λ/μ is the average load.
Use of buffers: why and when
- Assume no buffers: what happens when arrivals bunch together?
- Buffers help in scheduling the link.
- The arrival pattern (i.e., density of arrivals) may exceed the max service rate (that is, the capacity); buffering allows for later service.
- Buffers shape traffic to match the properties of the server.
Example: k arrivals at time t0
- No buffers: one packet is served; a fraction (k-1)/k of the arrivals is lost.
- With buffers: all k are served. With μ = 1 pkt/ms: at t0 there are k pkts, at tk there are 0 pkts => in k ms, k pkts are served; throughput = k pkts / k ms = 1 pkt/ms = μ.
- When the RTT is smaller than the queuing delay, buffers at the senders are better than buffers in the network.
- Network buffers affect all arrivals and should be kept as small as possible.
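The burst example can be sketched directly; the burst size below is a hypothetical choice, and the no-buffer case assumes exactly one packet can be accepted when a burst hits an empty server.

```python
# k packets arrive together at t0; service rate mu = 1 packet/ms.
k = 8                 # burst size (hypothetical)
mu = 1                # service rate, packets per ms

# With a buffer: the queue drains in k ms, nothing is lost,
# and throughput equals mu.
with_buffer_served = k
drain_time_ms = k / mu
throughput = with_buffer_served / drain_time_ms   # = mu

# Without buffers: only one packet can be in service; the rest are dropped.
no_buffer_served = 1
loss_fraction = (k - no_buffer_served) / k        # = (k-1)/k

print(throughput, loss_fraction)  # 1.0 0.875
```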
Cost of buffers
- The cost of shaping traffic to match the service rate is an increase in queuing delay.
- Queuing delay affects all packets.
- "Less Impact, Better Service": a new approach to network services and fairness.
Layering: the 7-layer OSI model.
OSI: 7 Layers
Why 7 layers?
- Layers exist only to help simplify things by breaking the functionality into pieces.
- A network does not consist of 7 layers; some people found that 7 layers describe the network functionality appropriately and completely. The Internet community, however, ignored them.
Multiplexing and Demultiplexing (demux key) Encapsulation (header/body)
Internet architecture
- Internet Engineering Task Force (IETF)
- Application vs application protocol (FTP, HTTP)
- Features: does not imply strict layering; hourglass shape
Signal Bit Bit-rate, Baud-rate Byte Frame/Packet/Message Contention/Congestion Flow
Performance Metrics (B, D, J, R, F) Bandwidth and derived metrics Delay and derived metrics Jitter Reliability Fairness
Performance: bandwidth
- Bandwidth: amount of data that can be transmitted per time unit. Example: 10 Mbps.
- Notation: b = bit, B = byte; KB = 2^10 bytes; Mbps = 10^6 bits per second.
- Bandwidth is related to bit width.
Bandwidth-derived metrics
- Bandwidth (bps): raw bits per second on the link.
- Throughput (Bps): data actually moved per unit time. Throughput = data sent in bytes / time; application throughput = data received in bytes / time.
- Goodput (Bps): what remains of the throughput after headers and retransmission overhead. Goodput = Throughput - (Overhead / time).
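The three metrics can be computed side by side; the byte counts and duration below are hypothetical transfer statistics chosen only to exercise the formulas.

```python
# Throughput vs application throughput vs goodput (hypothetical transfer).
data_sent = 1_500_000       # bytes put on the wire, incl. retransmissions
data_received = 1_450_000   # bytes delivered to the application
overhead = 100_000          # header + retransmitted bytes
duration = 10.0             # seconds

throughput = data_sent / duration                 # bytes/sec
app_throughput = data_received / duration         # bytes/sec
goodput = throughput - overhead / duration        # bytes/sec

print(throughput, app_throughput, goodput)  # 150000.0 145000.0 140000.0
```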
Latency (delay)
- Time it takes to send a message from point A to point B. Example: 24 milliseconds (ms).
- Sometimes we are interested in the round-trip time (RTT), e.g. to know how long to wait for an ACK.
- Components of delay: Delay = Processing + Propagation + Transmit + Queue
- Propagation = Distance / SpeedOfLight
- Transmit = Size / Bandwidth
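The decomposition above can be evaluated for a concrete link; the link parameters and the processing/queuing figures below are hypothetical.

```python
# One-way latency = processing + propagation + transmit + queuing.
size = 1000 * 8            # packet size in bits (1000 bytes)
bandwidth = 10_000_000     # 10 Mbps (hypothetical link)
distance = 2_000_000       # 2000 km in meters (hypothetical)
speed = 2.0e8              # signal speed in cable, m/s
processing = 0.0005        # 0.5 ms (hypothetical)
queuing = 0.001            # 1 ms (hypothetical)

propagation = distance / speed        # 0.01 s
transmit = size / bandwidth           # 0.0008 s
delay = processing + propagation + transmit + queuing

print(round(delay * 1000, 4))  # total delay in ms (~12.3)
```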
Speed of light
- 3.0 x 10^8 meters/second in a vacuum
- 2.3 x 10^8 meters/second in a cable
- 2.0 x 10^8 meters/second in a fiber
Notes:
- no queuing delays on a direct link
- bandwidth is not relevant if Size = 1 bit
- process-to-process latency includes software overhead
- software overhead can dominate when the message is small
Delay
- Propagation delay (ms): time for the signal to travel the distance.
- Transmission delay (ms): time to put the bits on the link. Transmission Delay = Transmitted Data / Bandwidth.
- Queuing delay (ms): time spent waiting in router buffers.
- Processing delay (ms): time the nodes need to process the packet.
- Delay = Propagation Delay + Transmission Delay + Queuing Delay + Processing Delay.
Relative importance of bandwidth and latency
- Small message (e.g., 1 bit): 1 ms vs 100 ms dominates 1 Mbps vs 100 Mbps.
- Large message (e.g., 12.5 MB): 1 Mbps vs 100 Mbps dominates 1 ms vs 100 ms.
- Delay x bandwidth product: e.g., 100 ms RTT and 45 Mbps bandwidth = 560 KB of data.
- Application needs: bandwidth requirements (average vs peak rate); jitter: variance in latency (inter-packet gap).
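The delay x bandwidth figure quoted above can be verified directly; with decimal KB it comes to 562.5 KB, which the text rounds to 560 KB.

```python
# Delay x bandwidth product: 100 ms RTT on a 45 Mbps link.
rtt = 0.100                # seconds
bandwidth = 45_000_000     # bits per second

pipe_bytes = rtt * bandwidth / 8   # bytes "in flight" to fill the pipe
pipe_kb = pipe_bytes / 1000        # decimal KB; the text rounds to 560

print(round(pipe_kb, 1))  # 562.5
```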
Overhead
- Overhead (B): bytes that carry no application data.
- Overhead = Headers + Retransmission Overhead.
Reliability: what goes wrong in the network?
- Bit-level errors (electrical interference)
- Packet-level errors (congestion)
- Link and node failures
- Messages are delayed
- Messages are delivered out of order
- Third parties eavesdrop
The key problem is to fill the gap between what applications expect and what the underlying technology provides.
Tradeoffs
- Throughput / Delay / Jitter / Fairness
Fairness
- If n flows share a link and xi is the throughput of flow i, the fairness index is
  f(x1, x2, ..., xn) = (Σ xi)^2 / (n Σ xi^2)
- 0 < f <= 1. If all flows get the same throughput x, the numerator is (nx)^2 and the denominator is n(nx^2) = (nx)^2, so numerator/denominator = 1: maximum fairness.
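The index is a one-liner to evaluate; the two throughput vectors below are hypothetical, showing the perfectly fair case and the case where one flow takes everything.

```python
# Fairness index f = (sum xi)^2 / (n * sum xi^2) for flow throughputs x1..xn.
def fairness(x):
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

print(fairness([5, 5, 5, 5]))   # 1.0  (all flows equal: maximum fairness)
print(fairness([10, 0, 0, 0]))  # 0.25 (one flow monopolizes the link)
```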
Exercise (point-to-point link): bandwidth 10 Mbps, propagation delay 50 ms, headers of 40 bytes, payload of 1450 bytes. Compute the RTT.
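One possible reading of this exercise (the interpretation is an assumption, since the original statement is incomplete): a 1490-byte frame (1450 bytes payload + 40 bytes headers) is sent in each direction, so the RTT is two propagation delays plus two frame transmission times.

```python
# Hypothetical reading of the exercise: RTT = 2 * (propagation + transmit).
bandwidth = 10_000_000          # bits/sec
prop = 0.050                    # seconds, one way
frame_bits = (1450 + 40) * 8    # 1490-byte frame

transmit = frame_bits / bandwidth     # ~1.19 ms per direction
rtt = 2 * (prop + transmit)

print(round(rtt * 1000, 2))  # 102.38 (ms)
```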
Exercise: bandwidth 10 Mbps; packet size 1 KB (also try 2, 4, 8 KB); 40 bytes of headers per KB; 20 packets; propagation delay 10 ms. Compute the goodput and the throughput.
Exercise: how do bandwidth and propagation delay compare in their effect on total delay?