Lecture 12: Competitive Analysis of QoS Networks

Outline
- What is QoS? The art of performance analysis
- What is competitive analysis?
- Example: scheduling with deadlines
- Example: smoothing real-time streams
- Example: overflow management

QoS: Quality of Service
- Current Internet: a "Best Effort" promise
  - No hard guarantees
  - Insufficient for real applications: mission-critical, real-time
- Future Internet (and current telecom): guaranteed performance, expressed through several Quality of Service parameters

Typical QoS Parameters
- ATM (and IPv6) standards support:
  - packet loss rate
  - bandwidth
  - end-to-end delay
  - delay jitter

QoS Setup: A Contract
- An incoming connection specifies its requirement.
- If the network can meet the requirement:
  - the user commits to a maximal usage
  - the network commits to a minimal allocation
- Otherwise, the connection is rejected.
- During the connection, network and user may monitor each other (and sue if the contract is violated...)
The Local Problem
- If demand can always be fully satisfied, then:
  1. No problem!
  2. In fact: over-provisioning, bad economics.
- Our focus: overload conditions.
  - Some requests will be denied.
  - We want to act optimally in these cases!
  - But optimality is typically NP-hard.

Classical Approaches to Analysis (1)
- Deterministic: behavior is absolutely guaranteed, always (worst case).
- How: use low-level deterministic guarantees.
- Example: reserve the declared peak bandwidth for each connection; reject requests for more bandwidth.
- Result: many rejects, under-utilization.

Classical Approaches to Analysis (2)
- Probabilistic: bound the average case.
- Idea: gather statistics to serve most cases.
- Same example: bandwidth reservation.
  - Recommendation: overbook, since rarely do all peaks occur together.
  - But define "rarely"! This assumes a simple customer model, e.g.: Poisson arrivals, exponential service times, statistical independence.
- Proven success for telephony (Erlang).
- Recent discoveries: unrealistic for data!
  - E.g.: heavy-tailed distributions for web page sizes (self-similar behavior).
New Idea: Relativistic Evaluation
- Assume nothing about the input; get relative guarantees on the output.
- Philosophy: some scenarios are tough; we want good behavior with respect to each given instance!
- Main strength: comparison to the best possible solutions, including those that know the future (off-line algorithms).

Example: Three Stock Brokers
- The deterministic broker: "In the worst case, you'll lose it all."
- The probabilistic broker: "I have a model that used to work, usually..."
- The relativistic broker: guarantees a fraction of the best possible profits, as if you had tomorrow's newspapers (!)

Specifically: Competitive Analysis
- Given an algorithm for a problem, for each instance I:
  - OPT(I): value of the best possible solution
  - ALG(I): value obtained by the algorithm
- The algorithm's competitive factor is

    c = max over all instances I of OPT(I) / ALG(I)

Example 1: Deadline Scheduling with Values
- Model: unit-length jobs (packets) arrive with deadlines and values.
- Each job takes one time unit to process (send).
- Goal: maximize the total value of on-time jobs.
Deadline Scheduling (cont.)
- The fundamental on-line conflict, on the example below: should we throw away jobs #1, #2? Maybe there will be no more jobs!

  job #     1  2  3  4  5  6
  arrival   0  0  0  0  2  2
  deadline  2  2  4  4  4  4
  value     1  1 10 10 10 10

- Naive on-line scheduling benefit: 22
- Best scheduling benefit: 40

The Greedy Algorithm
- Greedy Algorithm: look at all jobs whose deadline has not expired; schedule the highest-value job.
- Example:

  job #     1    2
  arrival   0    0
  deadline  1    2
  value     1    1+ε

- Optimal schedule: job 1 at time 1, job 2 at time 2. Total value: 2+ε.
- Greedy schedule: job 2 at time 1 (job 1 then expires). Total value: 1+ε.

Greedy Algorithm Is Good!
- Theorem: if all jobs have equal lengths, the benefit of the Greedy Algorithm is at least ½ of the best possible.
- Note: the best possible, even off-line!

Greedy Algorithm: Analysis
- Fix an optimal schedule OPT.
  - O: set of jobs scheduled by OPT
  - G: set of jobs scheduled by Greedy
- Lemma: val(O−G) ≤ val(G).
- Proof: we construct a 1-1 mapping from O−G into G: map j ∈ O−G to the job k ∈ G that Greedy scheduled at the time OPT scheduled j. At that time j was available to Greedy but not chosen, so val(k) ≥ val(j). QED
- [Figure: timeline showing OPT scheduling j between its arrival and deadline, while Greedy schedules k at that slot.]
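The greedy rule above can be sketched in a few lines of Python. This is a toy simulation, not code from the lecture; the convention assumed here is that a job arriving at time a may be sent in any unit slot t with a < t ≤ deadline.

```python
def greedy_schedule(jobs):
    """Greedy deadline scheduling for unit-length jobs.

    jobs: list of (arrival, deadline, value) tuples.  A job arriving at
    time a may be sent in any unit slot t with a < t <= deadline.
    Returns the total value of jobs sent on time.
    """
    horizon = max(d for _, d, _ in jobs)
    pending, total = [], 0.0
    for t in range(1, horizon + 1):
        pending += [j for j in jobs if j[0] == t - 1]   # newly arrived jobs
        pending = [j for j in pending if j[1] >= t]     # drop expired jobs
        if pending:
            best = max(pending, key=lambda j: j[2])     # highest value wins
            pending.remove(best)
            total += best[2]
    return total
```

On the six-job example this returns 40 (the greedy choice is optimal there), and on the two-job ε example it returns only 1+ε while the optimum is 2+ε.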
Greedy Algorithm: Analysis (2)
- Theorem: if all jobs have equal lengths, the benefit of Greedy is at least ½ of the best possible.
- Proof: val(OPT) = val(O−G) + val(O∩G) ≤ val(G) + val(G) = 2·val(G). QED

Value = Length = (Deadline − Arrival)
- Model: the value of a job is exactly its length; a job is either lost, or scheduled immediately (losing the currently running job).
- In this case: the length is known at arrival time.
- Example:

  job #     1  2  3  4  5
  arrival   0  1  3  6 10
  deadline  2  4  7 11 16
  value     2  3  4  5  6

- Greedy-like strategy (always preempt for a longer job): benefit 6.
- Optimal benefit: 12 (jobs #1, #3, #5).
- In the worst case: val(G) ≤ ½·val(OPT), and it can get much worse. What can be done better on-line?

On-line Strategy for val = length
- Doubling algorithm: job j preempts a job k only if val(j) ≥ 2·val(k).
- Theorem: the Doubling Algorithm gets at least ¼ of the best possible benefit.
- Proof idea: a job of length T completed by Doubling ends a preemption chain of lengths ..., T/8, T/4, T/2, T, whose total is less than 2T; overall, less than T + T + 2T = 4T time units of any other schedule can be charged against it.

Scheduling with Deadlines: The Basic Premise
- Networks like predictable, regular traffic (CBR):
  - resources (bandwidth, buffers, ...) can be allocated with a QoS commitment
  - allows for better pricing
- Users like bursty information:
  - Hit that link! Compress that movie!
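The preemption rules above can be compared with a small simulation (my own sketch, assuming each job must start at its arrival time and a preempted job is lost entirely):

```python
def preemptive_run(jobs, factor):
    """Preemptive scheduling sketch for the value = length model.

    jobs: list of (arrival, length); each job must start at its arrival
    time or be lost, and its value equals its length.  A new job preempts
    the running one only if it is at least `factor` times as long (the
    preempted job is lost).  factor=2 is the Doubling algorithm; factor
    barely above 1 is the naive 'always take the longer job' strategy.
    """
    total, current = 0, None          # current = (finish_time, length)
    for arrival, length in sorted(jobs):
        if current and current[0] <= arrival:
            total += current[1]       # running job completed in time
            current = None
        if current is None or length >= factor * current[1]:
            current = (arrival + length, length)   # start / preempt
    if current:
        total += current[1]           # last job runs to completion
    return total
```

On the five-job example, `preemptive_run(jobs, 2)` returns 12 (here, the optimum), while the naive `preemptive_run(jobs, 1.01)` returns only 6, as on the slide.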
Case in Point: Movies
- A huge bandwidth consumer.
- Without compression: unthinkable (up to 1 Gbps for an HDTV stream).
- With compression: highly bursty temporal and spatial information; frames vary in size, scenes vary in bandwidth rate.

What to Do? (1)
- Conservative rich: reserve the peak required bandwidth.
  - Wasteful, costly: the ratio of peak to average rate may be well over 5:1.
- Conservative poor: compress down to the specified bandwidth.

MPEG with CBR
- [Diagram: input stream → compression algorithm → output stream; the compression-ratio parameters are adjusted to keep the output bandwidth constant.]
- Why is it always blurry exactly when it starts getting interesting?

What to Do? (2)
- Practical solution: trade bandwidth for memory.
- With large storage space: trickle the stream, play back from memory.
  - Needs huge space.
  - How about live broadcast?
What to Do? (3)
- Smoothing: reserve more than the average bandwidth; start playback a short time after transmission starts.

The Setting
- [Diagram: source → server's buffer (smoother) → communication link → client's buffer → playback device.]
- Enforcing CBR: smoothing (at the server). Reconstruction: de-smoothing (at the client).

Smoothing: Step by Step
1. Source generates the stream.
2. Server fills some of its buffer.
   - May lose premature data: overflow.
3. Server starts transmitting.
4. Client fills some of its buffer.
5. Client starts playing out.
   - May also lose late data: underflow.

Smoothing: Questions
- How much does the server hold before transmitting?
- How much does the client hold before playing out?
- What's the best buffer space at the server? At the client?
- What's the best bandwidth to use?
Smoothing: Some Answers
- Set server buffer = client buffer, server delay = 0.
- In general: B = C·D, where
  - D is the delay,
  - C is the link capacity (bandwidth),
  - B is the buffer size (server and client).
- Algorithm: the client waits for B/C time units after getting the first packet, then starts playing out.

Why Is This True?
- The server has the minimum possible overflow: trivial.
- The client never overflows: in D time units it can receive at most D·C = B bits.
- The client never underflows: each transmitted bit is delayed at most B/C = D time units.

What Does This Mean?
- Whenever two out of {buffer size, link capacity, delay} are given, we can choose the third optimally.
- These values ensure minimum data loss. Is this our true goal?

Is This Our Target Function?
- A refined approach: assign a value to each packet.
- Goal: maximize the total value of replayed packets.
- Tool: competitive analysis.
- It is sufficient to focus on the server's buffer alone.
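The relation B = C·D can be wrapped in a tiny helper that derives the missing parameter from the other two (illustrative only; the function name and units are my own):

```python
def smoothing_params(capacity=None, delay=None, buffer=None):
    """Given any two of link capacity C (bits/s), added delay D (s), and
    buffer size B (bits), derive the third from B = C * D.
    Returns the full (capacity, delay, buffer) triple.
    """
    if sum(v is not None for v in (capacity, delay, buffer)) != 2:
        raise ValueError("specify exactly two of the three parameters")
    if buffer is None:
        return capacity, delay, capacity * delay      # B = C * D
    if delay is None:
        return capacity, buffer / capacity, buffer    # D = B / C
    return buffer / delay, delay, buffer              # C = B / D
```

For instance, a 2 Mbit/s link with 0.5 s of tolerated delay calls for a 1 Mbit buffer at each end.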
The Single Buffer Model
- Discrete time. Packets: size = 1, arbitrary value (wlog, at least 1).
- Line rate = 1 packet / step.
- In each step:
  - a set of packets arrives: arbitrary!
  - one packet is transmitted: chosen by the algorithm;
  - a set of packets is dropped: chosen by the algorithm.
- Goal: maximize the total value transmitted.

General Model: Buffer with QoS
- Consider a switch (router): input ports and output ports, with a buffer on each link.
- Zoom in on a link buffer: incoming packets → buffer management → outgoing stream, with some packets dropped.
- What should we require?
  - TCP motivation: FIFO order.
  - Real-time traffic: each packet has a deadline.

Simple Results
- Theorem: any deterministic algorithm has a competitive ratio of at least 1.28.
- Greedy Algorithm: upon overflow, discard the cheapest packets.
- Theorem: the competitiveness of Greedy is at most 4.
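The greedy "discard the cheapest on overflow" policy can be sketched as follows. This is a toy FIFO simulation under my own assumptions: the buffer holds B packets, one packet is sent per step, and the arriving packet itself may be the one discarded.

```python
from collections import deque

def greedy_buffer(arrivals, B):
    """Greedy FIFO buffer sketch with capacity B packets.

    arrivals[t] is the list of packet values arriving at step t.  Upon
    overflow the cheapest packet (possibly the new arrival) is discarded;
    one packet is sent per step in FIFO order.  Returns the total value
    transmitted, draining the buffer after arrivals end.
    """
    buf, sent = deque(), 0.0
    for step in arrivals:
        for v in step:
            if len(buf) < B:
                buf.append(v)
            elif v > min(buf):
                buf.remove(min(buf))   # evict the oldest cheapest packet
                buf.append(v)
            # otherwise the arriving packet itself is dropped
        if buf:
            sent += buf.popleft()      # transmit one packet, FIFO
    while buf:
        sent += buf.popleft()          # drain after arrivals end
    return sent
```

For example, with B = 2 and arrivals [[1, 1, 4], [4]], one 1-packet is evicted on overflow and the total value sent is 9.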
The Lower Bound: Setup
- The algorithm is fixed; it must handle any input sequence.
- The adversary picks the input sequence.
- To prove the bound, we consider two sequences: if the algorithm does well on one, it does badly on the other.
- We consider only packets with values 1 or α (think α ≈ 4).

The Lower Bound: Scenario
- B 1-packets arrive at time 0.
- In each time step 1..t, a single α-packet arrives.
- t is the first time at which the algorithm transmits an α-packet (giving up on its remaining 1-packets).
- If t is small: no more packets arrive (the algorithm abandoned its 1-packets too early).
- If t is large: a batch of B α-packets arrives at time t+1 (the algorithm's buffer is still clogged with 1-packets).

The Lower Bound: Algebra
- Equating the algorithm's performance on the two scenarios and solving for the switchover time t (normalized as x = t/B) yields the lower bound CR ≥ 1.28197, attained (and very slightly exceeded) at α ≈ 4.01545.
Simple Upper Bound: Argument
- Consider a time interval I = [t, t+B], and let A(I) be the set of packets arriving in that interval.
- OPT can accept at most the 2B top packets of A(I): B transmitted during I, plus B stored.
- GREEDY sends in [t, t+2B] at least the B top packets of A(I):
  - no packet can push them out during [t, t+B];
  - if value v is stored at time t+B, at least value v is sent in [t+B, t+2B].

Simple Upper Bound: Math
- OPT accepts at most the 2B top packets of A(I), while GREEDY sends at least the B top packets of A(I) in [t, t+2B]; summing over consecutive intervals bounds the competitive ratio by 4.

Is It Really 4?
- Answer: Greedy's competitive ratio can't be better than 2.
- Bad scenario (two packet classes: cheap and valuable): GREEDY loses B out of 2B valuable packets.

Why 2?
- OPT discards all cheap packets immediately and sends only valuable ones; OPT loses just the B cheap packets.

Not All Greedys Are Created Equal
- When an overflow occurs, several packets of the same value may be eligible to be discarded. Does it matter which we throw away?
  - Tail-Drop: drop the newest.
  - Head-Drop: drop the oldest.
- Theorem: Head-Drop is the best greedy algorithm (and can be up to a factor 3/2 better than Tail-Drop).
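The two tie-breaking rules can be compared on a small hand-made trace. This is my own toy scenario with buffer B = 3, not the worst-case construction behind the 3/2 bound:

```python
from collections import deque

def run_greedy(arrivals, B, drop_oldest):
    """Greedy buffer where ties among cheapest packets are broken by age.

    drop_oldest=True is Head-Drop (evict the oldest minimum-value packet);
    drop_oldest=False is Tail-Drop (evict the newest one).  One packet is
    sent per step in FIFO order; the buffer is drained at the end.
    """
    buf, sent = deque(), 0.0
    for step in arrivals:
        for v in step:
            buf.append(v)
            if len(buf) > B:                 # overflow: evict a cheapest packet
                m = min(buf)
                if drop_oldest:
                    buf.remove(m)            # first (oldest) occurrence
                else:
                    for i in range(len(buf) - 1, -1, -1):
                        if buf[i] == m:      # last (newest) occurrence
                            del buf[i]
                            break
        if buf:
            sent += buf.popleft()            # transmit one packet, FIFO
    while buf:
        sent += buf.popleft()
    return sent
```

On the trace [[1, 5, 1, 5], [5, 5]] with B = 3, Head-Drop delivers total value 20 while Tail-Drop wastes a slot on a cheap packet and delivers only 16.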
Head-Drop Is the Best Greedy Algorithm
- The proof is based on two observations:
  1. All greedy algorithms have the same buffer occupancy at all times.
  2. Out of all greedy algorithms, Head-Drop always has the least weight in its buffer.

Head-Drop Beats Tail-Drop 3:2
- [Figure: a scenario in which Head-Drop's total benefit approaches 3/2 times Tail-Drop's.]
- The competitive viewpoint: a low-value packet sent is a waste of bandwidth.

Summary: The Scientific Life Cycle
- reality → abstraction → model → analysis → results → interpretation → validation → implementation → (back to) reality
- Competitive analysis motivates new, useful algorithms.

That's it, folks!