
CS433/533 Computer Networks, Lecture 13: CDN & P2P for Scalability (10/10/2012)

Outline
- Admin and recap
- Case studies: Content Distribution
  - Forward proxy (web cache)
  - Akamai
  - YouTube
- P2P networks
  - Overview
  - The scalability problem

Admin
- Assignment: three questions

Recap: Load Direction
- state: net path properties between servers/clients; the specific request of a client
- selection algorithm
- notify the client about the selection (direction mechanism)
- routing

Recap: Direction Mechanisms
- DNS (name1 -> IP1, name2 -> IP2, ..., IPn)
- app-level: rewrite, direct reply (fault tolerance), proxy, load balancer

Scalability of Traditional C-S Web Servers
- [diagram: DNS-based direction to Cluster1 in US East, Cluster2 in US West, Cluster3 in Europe, each fronted by a load balancer]

Outline
- Admin and recap
- Case studies: Content Distribution

Content Distribution History
- "...With 25 years of Internet experience, we've learned exactly one way to deal with the exponential growth: Caching." (1997, Van Jacobson)

Initial Approach: Forward Cache/Proxy
- Web caches/proxies placed at the entrance of an ISP
- Client sends all HTTP requests to the web cache
  - if the object is at the web cache, the cache immediately returns the object in an HTTP response
  - else the cache requests the object from the origin server, then returns the HTTP response to the client

Benefits of Forward Web Caching
(Assume the cache is close to the client, e.g., in the same network)
- smaller response time: the cache is closer to the client
- decreased traffic from distant servers: the access link at the institutional/local ISP network is often the bottleneck
- cache hit ratio increases logarithmically with the number of users
- web protocols evolved extensively to accommodate caching, e.g., HTTP 1.1
- [example setting: institutional network with a 1 Gbps LAN and institutional proxy cache, a 10 Mbps access link to the public Internet, and origin servers]

What Went Wrong with Forward Web Caches?
- Client-site (forward) web caching was developed with a strong ISP perspective, leaving content providers out of the picture
  - it is the ISP who places a cache and controls it
  - the ISP's main interest in web caches is to reduce bandwidth
    - in the USA: bandwidth relatively cheap
    - in Europe, there were many more web caches
  - ISPs can arbitrarily tune web caches, e.g., to deliver stale content

Content Provider Perspective
- Content providers care about:
  - user-experienced latency
  - content freshness
  - accurate access statistics
  - avoiding flash crowds
  - minimizing bandwidth usage on their access link
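The hit/miss logic of the forward cache above can be sketched in a few lines of Python (a toy in-memory cache with hypothetical names; a real proxy must also honor HTTP cache-control headers and expirations):

```python
# Minimal sketch of a forward web cache (illustrative only).
class Cache:
    def __init__(self):
        self.store = {}          # url -> cached response body
        self.hits = 0
        self.misses = 0

    def get(self, url, fetch_from_origin):
        if url in self.store:    # cache hit: answer locally, no origin traffic
            self.hits += 1
            return self.store[url]
        self.misses += 1         # cache miss: fetch from the origin server
        body = fetch_from_origin(url)
        self.store[url] = body   # keep a copy for later clients
        return body

cache = Cache()
origin = lambda url: f"<html>content of {url}</html>"
cache.get("http://example.com/a.gif", origin)   # miss: goes to the origin
cache.get("http://example.com/a.gif", origin)   # hit: served from the cache
print(cache.hits, cache.misses)  # -> 1 1
```

Subsequent clients in the same institutional network get the second, cached path, which is what shrinks response time and access-link traffic.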

Content Distribution Networks CDN design perspective: service to content publishers o performance scalability (high throughput, going beyond single throughput) o geographic scalability (low propagation latency, going to close-by s) o content publisher control/access 13 Akamai q Akamai original and largest commercial CDN operates around 137,000 s in over 1,150 networks in 87 countries q Akamai (AH kuh my) is Hawaiian for intelligent, clever and informally cool. Founded Apr 99, Boston MA by MIT students q Akamai evolution: o Files/streaming (our focus at this moment) o Secure pages and whole pages o Dynamic page assembly at the edge (ESI) o Distributed applications 14 Akamai Scalability Bottleneck Basic of Akamai Architecture q Content publisher (e.g., CNN, NYTimes) o provides base HTML documents o runs origin (s) q Akamai runs o edge s for hosting content Deep deployment into 1150 networks See Akamai 2009 investor analysts meeting 15 o customized DNS redirection s to select edge s based on closeness to client browser load 16 Linking to Akamai q Originally, URL Akamaization of embedded content: e.g., <IMG SRC= http://www.provider.com/image.gif > changed to <IMGSRC = http://a661. g.akamai.net/hash/image.gif> Note that this DNS redirection unit is per customer, not individual files. q URL Akamaization is becoming obsolete and supported mostly for legacy reasons o Currently most content publishers prefer to use DNS CNAME to link to Akamai s 17 Akamai Load Direction Flow Customer DNS s Multiple redirections to find nearby edge s (4) Client is given 2 nearby web Client gets CNAME replica entry s (fault (2) tolerance) with domain name in Akamai Client requests site (3) LDNS Hierarchy of CDN DNS s (5) (1) (6) Internet Web replica s Web client More details see Global hosting system : FT Leighton, DM Lewin US Patent 6,108,703, 2000. 18 3

Exercise (Zoo machine): Akamai Load Direction
- Check any web page of the New York Times and find a page with an image
- Find the URL
- Use %dig +trace +recurse to see Akamai load direction
- If the directed edge server does not have the requested content, it goes back to the origin server (source)

Two-Level Direction
- proximity: the high-level DNS determines the client's location and directs it to a low-level DNS server, which manages a close-by cluster

Local DNS Alg: Potential Input
- p(m, e): path properties from a client site m to an edge server e
  - Akamai might use one-hop detour routing (see akamai-detour.pdf)
- a_k^m: request load from client site m to publisher k
- x_e: load on edge server e
- caching state of edge server e

Local DNS Alg
- Details of Akamai's algorithms are proprietary
- A bin-packing algorithm (column 12 of the Akamai patent), run every T seconds:
  - compute the load to each publisher k (called a serial number)
  - sort the publishers in increasing order of load
  - for each publisher, associate a list of random servers generated by a hash function
  - assign the publisher to the first server that does not overload

Hash Bin-Packing
- LB: maps each request to an individual machine inside the cluster
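The bin-packing step above can be sketched as follows. This is a toy version for illustration only: the real Akamai algorithm is proprietary, and the loads, capacities, and server names here are made up.

```python
import hashlib

# Toy sketch of the per-cluster bin-packing assignment (illustrative only).
def assign_publishers(loads, servers, capacity):
    """loads: publisher -> request load; servers: edge-server names."""
    remaining = {s: capacity for s in servers}
    assignment = {}
    # sort publishers in increasing order of load
    for pub, load in sorted(loads.items(), key=lambda kv: kv[1]):
        # per-publisher candidate list derived from a hash function
        order = sorted(servers,
                       key=lambda s: hashlib.sha1((pub + s).encode()).hexdigest())
        for s in order:
            if remaining[s] >= load:      # first server that does not overload
                assignment[pub] = s
                remaining[s] -= load
                break
    return assignment

a = assign_publishers({"cnn": 30, "nyt": 80, "fox": 50},
                      ["edge1", "edge2"], capacity=120)
print(sorted(a))  # -> ['cnn', 'fox', 'nyt']
```

Because the candidate order is a deterministic function of the publisher id, repeated runs of the direction algorithm tend to send a given publisher's requests to the same servers, which keeps their caches warm.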

Experimental Study of Akamai Load Balancing
- Methodology:
  - 2-month-long measurement
  - 140 PlanetLab nodes (clients): 50 US and Canada, 35 Europe, 18 Asia, 8 South America, the rest randomly scattered
  - every 20 sec, each client queries an appropriate CNAME for Yahoo, CNN, Fox News, NY Times, etc.
- [diagram: web client queries the Akamai low-level DNS server and is directed among Akamai web replicas 1-3]
- See http://www.aqualab.cs.northwestern.edu/publications/ajsu06dba.pdf

Server Pool: to Yahoo
- Target: a943.x.a.yimg.com (Yahoo)
- [plots: web replica IDs observed over day and night (06/1/05) by Client 1 (Berkeley) and Client 2 (Purdue)]

Server Diversity for Yahoo
- The majority of PlanetLab nodes see between 10 and 50 Akamai edge servers
- Nodes far away from Akamai hot-spots see fewer [plot: number of Akamai web replicas per client]

Server Pool: Multiple Akamai-Hosted Sites

Load Balancing Dynamics
- [plots for Berkeley, Brazil, Korea]

Redirection Effectiveness: Measurement Methodology
- A PlanetLab node pings the 9 best Akamai replica servers returned by the Akamai low-level DNS server

Do Redirections Reveal Network Conditions?
- Rank = r1 + r2 - 1; 16 means perfect correlation, 0 means poor correlation
- Brazil is poor; MIT and Amsterdam are excellent

Akamai Streaming Architecture
- A content publisher (e.g., a radio or a TV station) encodes streams and transfers them to entry points
- A set of streams (e.g., some popular, some not) is grouped into a bucket called a portset; a set of reflectors distributes a given portset
- When a user watches a stream from an edge server, the edge server subscribes to a reflector
- Compare with the web architecture

Testing Akamai Streaming Load Balancing
- http://video.google.com/videoplay?docid=-6304964351441328559#
- (a) add 7 probing machines to the same edge server
- (b) observe slowdown
- (c) notice that Akamai removed the edge server from DNS; probing machines stop

YouTube
- 02/2005: founded by Chad Hurley, Steve Chen and Jawed Karim, who were all early employees of PayPal
- 10/2005: first round of funding ($11.5M)
- 03/2006: 30M video views/day
- 07/2006: 100M video views/day
- 11/2006: acquired by Google
- 10/2009: Chad Hurley announced in a blog that YouTube was serving well over 1B video views/day (avg = 11,574 video views/sec)

Pre-Google Team Size
- 2 sysadmins
- 2 scalability software architects
- 2 feature developers
- 2 network engineers
- 1 DBA
- 0 chefs

YouTube Design Alg.
    while (true) {
        identify_and_fix_bottlenecks();
        drink();
        sleep();
        notice_new_bottleneck();
    }

YouTube Major Components
- Web servers
- Video servers
- Thumbnail servers
- Database servers
- NetScaler load balancer; Apache; Python app servers; databases

YouTube: Web Servers
- Components: NetScaler -> Apache -> Python app server -> databases
- Python
  - web code (CPU) is not the bottleneck
  - JIT to C to speed up; C extensions; pre-generate HTML responses
  - development speed is more important

YouTube: Video Popularity
- How to design a system to handle a highly skewed distribution?
- See "Statistics and Social Network of YouTube Videos", 2008

YouTube: Video Server Architecture
- Tiered architecture
  - CDN servers (for popular videos): low delay; mostly in-memory operation
  - YouTube servers (for not-popular videos, 1-20 views per day)
- Redirection architecture: the most popular requests go to the CDN; others go to YouTube colos (YouTube Colo 1 ... Colo N)
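The tiered decision above can be sketched as follows. The threshold value and the colo-selection rule are hypothetical, for illustration only, not YouTube's actual logic:

```python
# Toy sketch of popularity-tiered request routing (illustrative only).
def route_video_request(video_id, daily_views, cdn_threshold=1000):
    """Send popular videos to the CDN; the long tail stays on YouTube colos."""
    views = daily_views.get(video_id, 0)
    if views >= cdn_threshold:
        return "cdn"                      # hot content: in-memory at the CDN edge
    # unpopular videos (e.g., 1-20 views/day) stay on YouTube's own servers
    return f"colo-{hash(video_id) % 4}"   # pick one of N colos (here N = 4)

daily_views = {"hit": 500000, "tail": 7}
print(route_video_request("hit", daily_views))   # -> cdn
print(route_video_request("tail", daily_views))  # colo-<k>, k in 0..3
```

The point of the tier split is economic: the CDN's in-memory serving is only worth paying for on videos requested often enough to stay resident.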

YouTube Video Servers
- Each video is hosted by a mini-cluster consisting of multiple machines
- Video servers use the lighttpd web server for video transmission:
  - Apache had too much overhead (used in the first few months and then dropped)
  - async I/O: uses epoll to wait on multiple fds
  - switched from a single-process to a multiple-process configuration to handle more connections

Thumbnail Servers
- Thumbnails are served by a few machines
- Problems running thumbnail servers:
  - a high number of requests/sec, as web pages can display 60 thumbnails per page
  - serving a lot of small objects implies lots of disk seeks and problems with file-system inode and page caches; may run into per-directory file limits
- Solution: storage switched to Google BigTable

Thumbnail Server Software Architecture
- Design 1: Squid in front of Apache
  - Squid worked for a while, but as load increased performance eventually decreased: went from 300 requests/second to 20 under high load
  - Apache performed badly; changed to lighttpd
- Design 2: lighttpd default (a single thread)
  - problem: often stalled due to I/O
- Design 3: switched to multiple lighttpd processes contending on a shared accept
  - problems: high contention overhead / individual caches

Scalability of Content Distribution
- [architecture: DNS, origin server, edge servers, clients across the Internet]

Objectives of P2P
- The scalability problem: share the resources (storage and bandwidth) of individual clients to improve scalability/robustness
- The lookup problem
- More generally, moving from a host-centric Internet to a data-centric Internet supporting data persistency, availability, and authenticity

P2P
- But P2P is not new
  - the original Internet was a P2P system: the original ARPANET connected UCLA, Stanford Research Institute, UCSB, and Univ. of Utah
  - no DNS or routing infrastructure, just connections by phone lines; computers also served as routers
- P2P systems
  - file sharing: BitTorrent
  - streaming: Octoshape, Adobe 10.1 and later, PPLive
  - games: Xbox
- P2P is simply an iteration of scalable distributed systems

Outline
- Admin and recap
- Case studies: Content Distribution
  - Forward proxy (web cache)
  - Akamai
  - YouTube
- P2P networks
  - Overview
  - The scalability problem

An Upper Bound on Scalability
- Assume we need to achieve the same rate R to all n clients, and only uplinks can be bottlenecks (server upload capacity c0; client upload capacities c1, c2, ..., cn)
- What is an upper bound on scalability?

The Scalability Problem
- Maximum throughput: R = min{ c0, (c0 + sum_i ci) / n }

Theoretical Capacity: Upload Is the Bottleneck
- Assume c0 > (c0 + sum_i ci) / n
- Tree i (i = 1, ..., n): server -> client i at rate ci/(n-1); client i -> each of the other n-1 clients at rate ci/(n-1)
- Tree 0: the server has remaining capacity cm = c0 - (c1 + c2 + ... + cn)/(n-1) and sends to each client i at rate cm/n
- The bound is theoretically approachable
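The tree construction above can be checked numerically: summing what each client receives across the n+1 trees should give exactly R = min{c0, (c0 + sum(ci))/n}. The capacities below are arbitrary example values:

```python
# Numeric check of the upper bound R = min{c0, (c0 + sum(ci))/n}
# using the n+1 spanning trees described above (example capacities).
def rates_per_client(c0, c):
    n = len(c)
    # tree 0 needs nonnegative leftover server capacity
    assert c0 > sum(c) / (n - 1), "need c0 > sum(ci)/(n-1) for this construction"
    cm = c0 - sum(c) / (n - 1)          # server capacity left for tree 0
    # client j receives ci/(n-1) in each tree i = 1..n, plus cm/n in tree 0
    return [sum(ci / (n - 1) for ci in c) + cm / n for _ in c]

c0, c = 10.0, [2.0, 4.0, 6.0]
R = min(c0, (c0 + sum(c)) / len(c))
got = rates_per_client(c0, c)
print(round(R, 6), [round(x, 6) for x in got])
# -> 7.333333 [7.333333, 7.333333, 7.333333]
```

Every client receives the same total rate, and it matches the bound, which is why the bound is described as theoretically approachable.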

Why Not Build the Trees?
- Clients come and go (churn): maintaining the trees is too expensive
- Each node needs n connections

Key Design Issues
- Robustness: resistant to churn and failures
- Efficiency: a client has content that others need; otherwise, its upload capacity may not be utilized
- Incentive: clients are willing to upload
  - 70% of Gnutella users share no files
  - nearly 50% of all responses are returned by the top 1% of sharing hosts
- Lookup problem

Discussion: how to handle these issues?

Outline
- Admin and recap
- Case studies: Content Distribution
  - Forward proxy (web cache)
  - Akamai
  - YouTube
- P2P networks
  - Overview
  - The scalability problem
  - BitTorrent

BitTorrent
- A P2P file-sharing protocol
- Created by Bram Cohen in 2001
- Spec at bep_0003: http://www.bittorrent.org/beps/bep_0003.html

Outline
- Admin and recap
- Case studies: Content Distribution
- P2P networks
  - Overview
  - The scalability problem
  - BitTorrent
  - Lookup

BitTorrent: Lookup (Mostly Tracker-Based)
- The user does an HTTP GET for MYFILE.torrent from a web server
- MYFILE.torrent names the tracker, e.g., http://mytracker.com:6969/, and carries hash data such as S3F5YHG6FEB FG5467HGF367 F456JI9N5FF4E
- A tracker-less mode also exists, based on the Kademlia DHT

Metadata (.torrent) File Structure
- The meta-info contains the information necessary to contact the tracker and describes the files in the torrent:
  - URL of the tracker
  - file name
  - file length
  - piece length (typically 256KB)
  - SHA-1 hashes of the pieces, for verification
  - also creation date, comment, creator, ...

Tracker Protocol
- Communicates with clients via HTTP/HTTPS
- Client GET request:
  - info_hash: uniquely identifies the file
  - peer_id: chosen by and uniquely identifies the client
  - client IP and port
  - numwant: how many peers to return (defaults to 50)
  - stats: e.g., bytes uploaded, downloaded
- Tracker GET response:
  - interval: how often to contact the tracker
  - list of peers, containing peer id, IP and port
  - stats
- [diagram: the user registers with the tracker and receives a list of peers, e.g.,
  ID1 169.237.234.1:6881, ID2 190.50.34.6:5692, ID3 34.275.89.143:4545, ..., ID50 231.456.31.95:6882,
  then connects to Peer 1, Peer 2, ..., Peer 50]

Outline
- Admin and recap
- Case studies: Content Distribution
- P2P networks
  - Overview
  - The scalability problem
  - BitTorrent
  - Lookup
  - Robustness and efficiency
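The piece hashing and the client's announce request above can be sketched as follows. The parameter names follow bep_0003, but the file contents, tracker URL, and peer_id below are made up for illustration (and a real client bencodes the meta-info rather than building it by hand):

```python
import hashlib
from urllib.parse import urlencode

PIECE_LEN = 256 * 1024   # typical piece length, as on the slide

def piece_hashes(data, piece_len=PIECE_LEN):
    """SHA-1 hash of each fixed-size piece, as stored in the .torrent."""
    return [hashlib.sha1(data[i:i + piece_len]).digest()
            for i in range(0, len(data), piece_len)]

def announce_url(tracker, info_hash, peer_id, port, uploaded, downloaded, left):
    """Build the tracker GET request described above (numwant defaults to 50)."""
    params = dict(info_hash=info_hash, peer_id=peer_id, port=port,
                  uploaded=uploaded, downloaded=downloaded,
                  left=left, numwant=50)
    return tracker + "?" + urlencode(params)

data = b"x" * (PIECE_LEN + 1000)   # fake file: one full piece plus a remainder
hashes = piece_hashes(data)
print(len(hashes))                 # -> 2 (the last piece is shorter)
url = announce_url("http://mytracker.com:6969/announce",
                   hashlib.sha1(b"metainfo").hexdigest(),
                   "-PY0001-123456789012", 6881, 0, 0, len(data))
print("numwant=50" in url)         # -> True
```

Downloaded pieces are verified against these per-piece SHA-1 hashes, which is what lets a client accept data from untrusted peers.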

Piece-based Swarming
- Divide a large file into small blocks and request block-sized content from different peers
  - block: the unit of download (16KB)
- If a client does not finish downloading a block from one peer within a timeout (say, due to churn), it switches to requesting the block from another peer
- To reduce bitmap size, multiple blocks are aggregated into a piece (256KB)

Detail: Peer Protocol (over TCP)
- Peers exchange bitmaps representing content availability
  - bitfield msg during the initial connection
  - have msg to notify updates to the bitmap
- [diagram: local and remote peer exchange BitField/have messages; an incomplete piece is shown]

Peer Request (http://www.bittorrent.org/beps/bep_0003.html)
- If peer A has a piece that peer B needs, peer B sends "interested" to A
- unchoke: A indicates that it allows B to request
- request: B requests a specific block from A
- piece: the actual data

Key Design Points
- request: which data blocks to request?
- unchoke: which peers to serve?

Request: Block Availability
- (local) rarest first
  - achieves the fastest replication of rare pieces
  - obtain something of value
- Revisions:
  - when downloading starts (first 4 pieces): choose pieces at random and request them from the peers, to get pieces as quickly as possible and obtain something to offer to others
  - endgame mode: a defense against the "last-block problem" (cannot finish because a few last pieces are missing); send requests for the missing pieces to all peers in the peer list, and send cancel messages upon receipt of a piece
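The local-rarest-first rule above can be sketched as follows (a toy model where each peer's bitmap is just a set of piece indices it advertised via bitfield/have):

```python
import random
from collections import Counter

# Sketch of (local) rarest-first piece selection (illustrative only).
def next_piece(have, peer_bitmaps, downloaded_count, random_first=4):
    """Pick the next piece to request, given what connected peers advertise."""
    candidates = set().union(*peer_bitmaps) - have
    if not candidates:
        return None
    if downloaded_count < random_first:      # startup: random pieces first,
        return random.choice(sorted(candidates))  # to have something to offer
    # steady state: rarest first among pieces we still need
    counts = Counter(p for bm in peer_bitmaps for p in bm if p in candidates)
    return min(counts, key=lambda p: (counts[p], p))

peers = [{0, 1, 2}, {1, 2}, {2}]
print(next_piece({2}, peers, downloaded_count=10))  # -> 0 (only one peer has it)
```

Requesting piece 0 first replicates the locally rarest piece, so its single holder is less likely to become a bottleneck (or a loss) later in the download.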

BitTorrent: Unchoke
- Periodically (typically every 10 seconds) calculate the data-receiving rates from all peers
- Upload to (unchoke) the fastest
  - constant number (4) of unchoking slots
  - partition the upload bandwidth equally among the unchoked

Optimistic Unchoking
- Periodically select a peer at random and upload to it
  - typically every 3 unchoking rounds (30 seconds)
- A multi-purpose mechanism
  - allows bootstrapping of new clients
  - continuously looks for the fastest peers (exploitation vs. exploration)
- Commonly referred to as the tit-for-tat strategy
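One round of the unchoke choice above can be sketched as follows (illustrative only; rates are made-up numbers, and a real client also tracks interested peers and rotates the optimistic slot every 3 rounds):

```python
import random

# Sketch of one unchoking round: keep the 4 peers we download fastest
# from (tit-for-tat), plus one random optimistic unchoke (exploration).
def choose_unchoked(recv_rate, slots=4, optimistic=True, rng=random):
    """recv_rate: peer -> bytes/sec currently received from that peer."""
    fastest = sorted(recv_rate, key=recv_rate.get, reverse=True)[:slots]
    unchoked = set(fastest)
    if optimistic:
        rest = [p for p in recv_rate if p not in unchoked]
        if rest:                     # optimistic slot: give a random peer a chance
            unchoked.add(rng.choice(rest))
    return unchoked

rates = {"a": 50, "b": 40, "c": 30, "d": 20, "e": 5, "f": 1}
u = choose_unchoked(rates)
print(len(u), {"a", "b", "c", "d"} <= u)  # -> 5 True
```

The optimistic slot is what makes the scheme self-correcting: a new peer with no rate history can still get unchoked, prove itself fast, and earn a regular slot in a later round.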