Internet Content Distribution

Chapter 2: Server-Side Techniques (TUD Student Use Only)

Chapter Outline
Server-side techniques for content distribution:
- Goals
- Mirrors
- Server farms
- Surrogates
- DNS load balancing
- Parallel downloading
Kangasharju: Internet Content Distribution 2

Why Server-Side Techniques?
Server-side techniques are aimed at helping the content provider lower her costs. Costs can be:
- Costs of running a server
- Costs of a network connection
Typically it is easy to upgrade the network connection ("easy" = it only takes money). Upgrading servers is feasible only up to a point:
- Processors do not have infinite speeds
- It is not possible to add enough memory to handle thousands of simultaneous clients

Problems on the Server-Side
What happens when we do not have enough capacity? With a small number of users, all works well.

Problems on the Server-Side
Problems start when we have lots of users.

Problem in Short
The problem is that we cannot handle the traffic. Two main aspects:
1. Not enough server capacity
2. Not enough network capacity
Both can be alleviated (or solved) with money. Buying enough network bandwidth is possible, but extremely expensive; tens of Gbps can be bought (in theory at least). However, a single server has a maximum capacity:
- One CPU is only so fast
- Can only add X GB of memory (limited by hardware/OS)
- Network cards go only up to a certain speed
The first bottleneck is the server.

Solution
If one server cannot handle all the traffic, we'll install several servers. The total capacity is the sum of the individual capacities. Such an arrangement is called a server farm.
Questions:
1. How many servers do I install?
2. Where do I install them?
3. How do I get the users to use those servers?
We focus mainly on questions 2 and 3; the answer to question 1 is more a business decision.

Server Farms
Typically server farms are hosted in a single data center, which means all the servers share the same network connection to the Internet; the provider must still spend lots of money on that connection.
Advantages:
- Easier to manage, since all servers are in the same place
- Increased service capacity
Disadvantages:
- Still need a big pipe to the Internet
- If the network path from the user to the data center is the problem, the user will not see many benefits
How about distributing the server farm?

Mirror Servers
We can take the servers of a server farm and install them in different geographical locations. Traditionally this has been called mirroring. Mirror servers are an old technology:
- Already used for FTP servers in the 1980s
- Still in popular use, especially for open software downloads (for example, SourceForge)
The idea behind a mirror server is to copy the content from the origin server and offer it on a different server. Users access the content from that other server, for example because it's closer to them (or cheaper). In the old days, the main goal of mirroring was to reduce international bandwidth costs (e.g., ftp.funet.fi).

Mirror Servers
Advantages:
- Easy to collect lots of data; one mirror can mirror several origin servers
- Can be installed close to users
- Teaches users about networking (hopefully :-)
Disadvantages:
- Users must use the mirrors for us to get any benefits
- Typically no automatic mirror selection
- Content on a mirror might be out-of-date
Biggest problem with mirrors: how to get users to use them? Existing solutions:
- Manual selection from a list
- Automatic selection
- Parallel download from several mirrors (also used in P2P networks)

Manual Selection of a Mirror
Manual selection means that the user has to select the mirror manually: type a different URL, pick a mirror from a list, or click on an extra link. The list of mirrors must somehow be available, these days typically on a website. The user picks a mirror and uses it; typically you have to choose one every time you download.
This procedure is sufficient if:
1. Users understand what they are doing
2. Selection does not happen too often
Otherwise it is too confusing or annoying.

Automatic Selection of a Mirror
Two main techniques are currently in use. Note: they are currently used for co-located server farms, not so much for real mirrors, but both would work for geographically distributed mirror servers.
1. Surrogate servers
2. DNS load balancing
The main goal and current use of both is to balance load on a server farm. The only real difference is that DNS load balancing is visible to clients, while surrogates are not (necessarily).

Surrogate Servers
Surrogates are sometimes also called server-side proxies; the dictionary definition of "surrogate" explains where the name comes from.
Traditionally, web sites work as follows:
1. User wants URL www.foo.com
2. DNS: www.foo.com is 192.168.0.1
3. The server with IP 192.168.0.1 has all the content for www.foo.com

Surrogate Servers
The surrogate is put in front of the server farm and receives all client requests:
1. Surrogate decides to which content server to forward the request
2. Content server processes the request and sends the reply to the surrogate
3. Client receives the reply from the surrogate
As before, the user wants www.foo.com and DNS resolves it to 192.168.0.1, but the server at 192.168.0.1 is the surrogate and holds no content. The content servers behind it hold all the content and can have any IP addresses.
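The forwarding step above can be sketched as follows. This is a minimal illustration under stated assumptions, not any real product's behavior: the class name, backend addresses, and the round-robin policy are all invented for the example.

```python
from itertools import cycle

class Surrogate:
    """Toy sketch of a surrogate's dispatch logic (hypothetical API).

    The surrogate owns the public address (192.168.0.1 in the slide);
    the content servers behind it can have any IP addresses.
    """

    def __init__(self, content_servers):
        self._ring = cycle(content_servers)

    def pick_server(self, http_request):
        # A real surrogate could inspect the HTTP request for
        # fine-grained balancing; this sketch just round-robins.
        return next(self._ring)

surrogate = Surrogate(["10.0.0.2", "10.0.0.3"])
picks = [surrogate.pick_server("GET /index.html") for _ in range(4)]
```

Because the client only ever talks to the surrogate's address, swapping the dispatch policy (round-robin, least-loaded, content-aware) is invisible to clients.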

Surrogates: Pros and Cons
Advantages of surrogates:
- Totally invisible to the client, no need to modify clients
- Allows fine-grained load balancing, because the surrogate sees the actual HTTP requests (note: not used in practice, but theoretically possible; see also the L4 switch details below)
- Can build a cache into the surrogate --> less load on the content servers
Disadvantages of surrogates:
- The surrogate can become a performance bottleneck, since all requests must go through it
- Even if an L4 switch is used, processing is more complicated than in a normal router
- Extra hardware to buy and maintain

Surrogates: Practical Details
A surrogate can be implemented with a web proxy or with an L4 switch.
Web proxy:
- A real web proxy; has to parse the HTTP request
- Can easily become a bottleneck, since HTTP processing is not cheap (compared to layer 3 or 4 processing)
L4 switch:
- L4 stands for Layer 4 of the OSI model, i.e., transport
- Simply a redirector based on the port number in the TCP header
- Much more common on the client side
Summary: surrogates are not widely used in practice.
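To make the L4 idea concrete: the switch sees only packet headers (addresses and ports), never the HTTP payload, so a common policy is to hash the flow identifier onto a backend. The sketch below is illustrative only; the hash choice and addresses are assumptions, not how any particular device works.

```python
import hashlib

def l4_pick(src_ip, src_port, backends):
    """Illustrative L4-style decision: hash the flow identifier
    (source ip:port) onto a backend. The same flow always lands on
    the same server (session persistence), and no HTTP parsing is
    needed, which keeps per-packet processing cheap."""
    key = f"{src_ip}:{src_port}".encode()
    digest = int.from_bytes(hashlib.sha1(key).digest()[:4], "big")
    return backends[digest % len(backends)]

backends = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
a = l4_pick("203.0.113.7", 51234, backends)
b = l4_pick("203.0.113.7", 51234, backends)  # same flow, same backend
```

Contrast this with the web-proxy variant, which must reassemble the TCP stream and parse HTTP before it can even decide where to forward.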

DNS Load Balancing
DNS load balancing uses DNS to send clients to different content servers: the reply to a DNS query for the server name contains several IP addresses, and the client picks one of them and sends its request to that server. For example, www.foo.com resolves to 192.168.0.1, 192.168.0.5, and 192.168.0.10; all three servers have all the content, and this particular user is sent to 192.168.0.10.

DNS Load Balancing Details
Basic idea: redirect each client to a different content server by giving different DNS answers. This is the same idea as DNS redirection (Chapter 4), but with different goals. The DNS server of the content provider decides which server handles the client's request, typically with some kind of round-robin algorithm, but any kind of complicated load balancing is possible. Clients typically receive a list of several IP addresses for the given hostname; the client can choose any of the received addresses, but most current DNS client implementations pick the first. Replies are given only short caching times, so clients must refresh their DNS lookups --> the load balancing can adapt.
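Since most clients simply use the first address in the answer, round-robin DNS works by rotating the record list on each query. A minimal sketch of that rotation, with illustrative addresses matching the slide:

```python
from collections import deque

def rotating_answers(addresses):
    """Sketch of round-robin DNS: on each query the authoritative
    server returns the full A-record list, rotated one step. Because
    most clients pick the first address, rotation alone spreads
    successive clients across the servers."""
    ring = deque(addresses)
    while True:
        yield list(ring)
        ring.rotate(-1)  # next query sees a different first address

answers = rotating_answers(["192.168.0.1", "192.168.0.5", "192.168.0.10"])
first_picks = [next(answers)[0] for _ in range(4)]
```

Note that the granularity is coarse: one answer may be cached by a resolver and reused by every browser behind it until the (deliberately short) TTL expires.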

DNS Load Balancing: Pros and Cons
Advantages:
- Easy to implement; DNS lookups are mandatory anyway
- No additional hardware needed
- Can in principle use any load balancing algorithm
Disadvantages:
- A client can keep on using the "wrong" server (unlikely to happen, though, since this is controlled by the OS, not the user)
- No fine-grained control over load balancing; the granularity is "this client goes to that server for X amount of time" (note: client = any browser behind the same DNS server!)
- Not so much a problem for server-side load balancing, but a bigger issue for client DNS redirection (Chapter 4)
DNS load balancing demo with cnn.com.

Comparison
Surrogates:
- Allow fine-grained load balancing, even per request!
- Typically must process requests up to the application level: a large effort
- Not widely used
DNS load balancing:
- Extremely widely used by all major websites; the current trend is to use a CDN, and CDNs use a kind of DNS load balancing
- Not much additional processing needed on top of DNS request processing
- Relatively coarse-grained, but not much of a problem in practice (statistics!)

Parallel Downloads
Let's get back to mirrors. DNS load balancing could be used to select mirrors; the other alternative was manual selection. Question: why select at all? Or rather, why not select them all? The motivation behind parallel downloads is to eliminate the need for mirror selection; the main benefit is increased download speed. The following results are from Rodriguez & Biersack, "Dynamic Parallel-Access to Replicated Content in the Internet", IEEE/ACM Transactions on Networking, Aug. 2002.

What Are Parallel Downloads?
The client downloads different parts of the file from different sources at the same time. Parallel downloads are not used for web content, but are widely used in P2P file sharing networks; all modern file sharing networks use them.
Two assumptions for efficiency:
1. The file to be downloaded is relatively large (several hundred KB and larger)
2. The paths from the client to the sources are bottleneck-disjoint (see below)
The first assumption makes parallel downloads unsuitable for web content.
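The "different parts" above are just contiguous byte ranges of the file. A small illustrative helper for the split (with HTTP this would map onto Range requests); the function name and the even-split policy are assumptions for the example:

```python
def block_ranges(file_size, n_blocks):
    """Split a file of file_size bytes into n_blocks contiguous,
    roughly equal byte ranges (inclusive start/end), as a
    parallel-download client would before fetching different parts
    from different sources."""
    base, extra = divmod(file_size, n_blocks)
    ranges, start = [], 0
    for i in range(n_blocks):
        size = base + (1 if i < extra else 0)  # spread the remainder
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# The paper's setup: a 763 KB file split into 30 blocks.
ranges = block_ranges(763 * 1024, 30)
```

The client can then reassemble the file simply by concatenating the blocks in range order, regardless of which source served which block.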

How Does Parallel Download Work?
Downloading from a single server, the user is limited by that server's upload bandwidth. In the example, the server uploads at 500 Kbps while the user's link capacity is 2 Mbps, so the user cannot use her full bandwidth. This may make users unhappy ("I pay for nothing!").

How Does Parallel Download Work?
Downloading from several servers in parallel (e.g., four servers at 500 Kbps each), the user can fill her 2 Mbps download link to capacity.
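The two figures reduce to simple arithmetic. A back-of-the-envelope model, assuming bottleneck-disjoint paths so per-server rates add up until the client's own link is the cap:

```python
def effective_rate_kbps(client_link_kbps, server_rates_kbps):
    """Toy model: with bottleneck-disjoint paths the parallel
    download rate is the sum of the per-server rates, capped by
    the client's own access link capacity."""
    return min(client_link_kbps, sum(server_rates_kbps))

# Figures from the slides: 500 Kbps per server, 2 Mbps client link.
single = effective_rate_kbps(2000, [500])                   # stuck at 500
parallel = effective_rate_kbps(2000, [500, 500, 500, 500])  # link filled
```

The cap term is exactly why the bottleneck-disjoint assumption on the next slide matters: if the client link is the bottleneck, adding servers changes nothing.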

Bottleneck-Disjoint Paths
If the user's access link to the network is the bottleneck (e.g., a 500 Kbps access link behind 2 Mbps servers), parallel downloads do not help at all. They might not hurt either, but parallel download has some overhead.

Practical Details
Two types of parallel download are defined: history-based and dynamic.
History-based parallel access:
- All sources are known and past bandwidths to them are known
- When the client downloads a file, it checks the past bandwidths and picks the best sources for the download
Dynamic parallel access:
- Dynamically select the best sources according to current download speeds
- This approach is popular in P2P networks

Experiment Setup
Client in France, sources all over the world. File size 763 KB (the Squid proxy caching software).

History-Based Parallel Access
History-based parallel access to two servers simultaneously.

History-Based Parallel Access
The optimum is calculated after the fact; similar results were obtained for larger sets of servers.
Observations:
- During the night, history-based access achieves good performance
- During the day, downloading from a single server is often faster than downloading in parallel!
Solutions:
- Different bandwidth estimates for different times of day (complicated)
- Fully dynamic mirror selection

Dynamic Parallel Access
One client, a set of known servers, one file. The file is divided into equal-size blocks, and the client requests the file as follows:
1. Client requests 1 block from each server
2. When a server has finished uploading, the client requests a new block from that server
3. When all blocks are there, the client reassembles the file
Problems:
- Servers are idle for a while when waiting for a new request
- Not all servers terminate at the same time

Solutions to Problems
1. The number of blocks should be much larger than the number of servers
2. Blocks should be small in size: this provides fine-grained balancing of server capabilities; the aim is to finish all downloads at the same time
3. Blocks should be large enough to avoid idle times: between two blocks there is 1 RTT of idle time, and if blocks are large, idle times are a small fraction of the total time. It is also possible to pipeline requests to some degree.
However, for solutions 2 and 3 the file should be large.
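The three-step scheme above can be sketched as a small event-driven simulation. The cost model (one RTT plus size/rate per block) and all parameter values are simplifying assumptions, chosen only to show the fine-grained balancing: the faster server naturally ends up serving more blocks.

```python
import heapq

def simulate_dynamic(n_blocks, rates_bps, block_bytes, rtt_s):
    """Event-driven sketch of dynamic parallel access: one block per
    server to start; whenever a server finishes, the next block is
    requested from it (costing one RTT before data flows again).
    Returns total download time and blocks served per server."""
    heap, served, remaining = [], [0] * len(rates_bps), n_blocks
    for i, rate in enumerate(rates_bps):  # step 1: seed every server
        if remaining:
            heapq.heappush(heap, (rtt_s + block_bytes / rate, i))
            remaining -= 1
    finish = 0.0
    while heap:
        finish, i = heapq.heappop(heap)
        served[i] += 1
        if remaining:  # step 2: refill the server that just freed up
            heapq.heappush(
                heap, (finish + rtt_s + block_bytes / rates_bps[i], i))
            remaining -= 1
    return finish, served  # step 3: client reassembles the file

# Roughly the paper's shape: 30 blocks of ~26 KB (a 763 KB file),
# with one server twice as fast as the other (rates are invented).
total, served = simulate_dynamic(30, [100_000, 50_000], 26_000, 0.1)
```

With small blocks, the per-block RTT term dominates (lots of idle time); with few large blocks, the servers finish at very different times. The simulation makes that trade-off easy to explore.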

Performance
File size 763 KB, 30 blocks, 4 servers.

Results
Servers were chosen to minimize common network links.
- Parallel downloads are almost equal to the optimum
- Download time goes from 50 seconds to 20 seconds
- Performance is independent of the time of day
Similar results are obtained when some servers are fast and others slow, but in this case parallel downloads have only a small performance advantage over the fastest single server. On the other hand, there is no risk of picking a bad server.

Small Documents
Document 10 KB, 4 blocks, 2 servers. An advantage exists, but it is quite small.

Shared Bottleneck Link
Modem client, 763 KB, 30 blocks, 2 servers.

Results and Summary
Not much gain from parallel access; in fact, picking just the better server gives better performance.
Summary:
- Parallel downloading is efficient in heterogeneous cases
- Requires large files and bottleneck-disjoint paths
- Currently widely used in P2P file sharing networks

Chapter Summary
Server-side techniques for content distribution:
- Goals
- Mirrors
- Server farms
- Surrogates
- DNS load balancing
- Parallel downloading