Dynamic Load Balancing and Node Migration in a Continuous Media Network



Anthony J. Howe
Supervisor: Dr. Mantis Cheng
University of Victoria
Draft: April 9, 2001

Abstract

This report examines current technologies available in a continuous media network and presents two new areas of research. Current network structures include forward proxy caching, server farms, and content distribution networks. Methods for broadcast distribution include controlled-connection and one-way-connection communication. Two important areas of research not addressed by current technologies are dynamic load balancing and node migration.

1 Introduction

Streaming media is found everywhere on the Internet! Radio stations use this technology to deliver their programs to a broader audience than can be reached by a local broadcast. Live broadcasts from public music concerts are streamed across the Internet. News web sites such as CNN.com deliver news through streaming video clips. Individuals are able to deliver their own mp3 collections to a worldwide audience using Nullsoft's SHOUTcast. The increased application of streaming media brings increased demand, and technologies for continuous media on the Internet are rapidly evolving to meet it.

A streaming technology involves data compression algorithms and the reliable delivery of data from the source to the end user. The Internet makes jitter-free delivery of continuous data difficult due to unpredictable network congestion. Jitter is the variation in time between the deliveries of a series of packets of continuous media [1]. For continuous media, reliable delivery is not as critical as jitter-free delivery: a continuous stream that experiences jitter results in distorted audio or video for the end user.

This report examines current streaming delivery technologies and then proposes and explains two important areas of research not addressed by them: dynamic load balancing and node migration.
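
To make the notion of jitter concrete, the following is a minimal sketch (my illustration, not from the report) that estimates jitter as the variation of inter-arrival times of a packet stream, given receive timestamps; the example timestamps are made up.

```python
from statistics import mean

def estimate_jitter(arrival_times_ms):
    """Estimate jitter as the mean absolute deviation of inter-packet gaps.

    arrival_times_ms: receive timestamps (milliseconds) of consecutive
    packets from one continuous media stream.
    """
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    avg_gap = mean(gaps)
    return mean(abs(g - avg_gap) for g in gaps)

# Example: packets sent every 20 ms but delivered with variable delay.
print(estimate_jitter([0, 21, 39, 62, 80, 99]))  # about 1.8 ms of jitter
```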

2 Current Technologies

In 1995 RealNetworks released a continuous media delivery application that delivered streaming media from a single server [2]. A single server limits the available audience size due to the performance of the server and the network bandwidth available. To reduce the load on a central server, three common network structures have evolved: forward proxy caching, server farms, and content distribution networks [3]. In addition, the method of broadcast distribution has evolved. This section examines these network structures and streaming technologies and discusses current products that employ these ideas.

2.1 Forward Proxy Caching

Forward proxy caching attempts to bring continuous data closer to the end user. It has been in use for several years as a front end to web servers [3]. Figure 1 shows a forward proxy caching network. The nodes pass all of their requests through the proxy before going to the origin. If the proxy is able to answer a request, it returns the relevant information itself; otherwise the request is forwarded to the origin. The proxy provides faster response times to the nodes and reduces load on the origin. A disadvantage of proxy caches is that an uncommon request still has to go to the origin, and many uncommon requests from nodes around the Internet at one time could overload the origin.

Figure 1: A forward proxy cache. Nodes send their requests to the proxy cache; first-time requests are forwarded across the Internet to the origin.
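
As an illustration only (not part of the original report), the following sketch shows the core cache-or-fetch decision a forward proxy makes; fetch_from_origin and the clip URL are hypothetical.

```python
# Minimal sketch of a forward proxy cache: answer from the local cache when
# possible, otherwise fetch from the origin once and store the result.
class ForwardProxyCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # callable(url) -> bytes
        self._cache = {}                  # url -> cached content

    def request(self, url):
        if url in self._cache:            # cache hit: no load on the origin
            return self._cache[url]
        data = self._fetch(url)           # cache miss: go to the origin
        self._cache[url] = data           # store for future requests
        return data

# Hypothetical usage: the first request hits the origin, later ones do not.
proxy = ForwardProxyCache(lambda url: b"<media bytes for %s>" % url.encode())
proxy.request("rtsp://origin.example.com/clip1")
proxy.request("rtsp://origin.example.com/clip1")  # served from the proxy
```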

RealNetworks has used this proxy caching technology in its RealSystem Proxy 8 [4]. The RealSystem Proxy 8 is able to handle on-demand media and also deliver live broadcasts. For on-demand data, the first time a piece of media is requested it comes directly from the origin, but it is then stored in the cache of the RealSystem Proxy for any future requests. For live media, a single stream from the origin is split and delivered to the many requesting nodes. This saves bandwidth, since many nodes access the stream from the local proxy as opposed to each accessing a separate live stream from the origin.

2.2 Server Farms

Server farms rely on an intelligent switch to evenly distribute requests among a group of computers hosting the same information [3]. They appear to the user as a single origin. Figure 2 shows a server farm: nodes are distributed evenly among the servers. A server farm provides redundancy; if any server fails, requests are simply routed to the other servers. An advantage of this type of network is that all the servers are in one location and can be managed easily. A disadvantage of a server farm is in the delivery of continuous data: a server farm still has the problem of jitter-free delivery to nodes many hops away.

Figure 2: An example of a server farm. An intelligent switch sits between the Internet and the servers, distributing requests arriving from nodes.

Nullsoft's SHOUTcast (http://www.shoutcast.com) technology is prone to this problem. The goal of SHOUTcast is to enable anyone to deliver streaming data to the Internet [5]. The product is not concerned with splitting streams and using caches at the edges of the network, but only with the delivery of streams from the source to mirror servers.
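
As an illustrative sketch only (not taken from any of the cited products), an intelligent switch can be modelled as routing each new request to the live server with the fewest active sessions, which also provides failover; the server names are hypothetical.

```python
# Minimal sketch of an intelligent switch for a server farm: route each new
# request to the healthy server with the fewest active sessions.
class IntelligentSwitch:
    def __init__(self, servers):
        self.sessions = {s: 0 for s in servers}  # server -> active sessions
        self.alive = {s: True for s in servers}  # simple health flag

    def route(self):
        candidates = [s for s in self.sessions if self.alive[s]]
        if not candidates:
            raise RuntimeError("no servers available")
        chosen = min(candidates, key=lambda s: self.sessions[s])
        self.sessions[chosen] += 1
        return chosen

    def fail(self, server):
        # Mark a failed server; new requests are routed to the others.
        self.alive[server] = False

switch = IntelligentSwitch(["srv-a", "srv-b", "srv-c"])
print(switch.route())          # srv-a
switch.fail("srv-a")
print(switch.route())          # one of the remaining servers, never srv-a
```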

Figure 3 shows a sample SHOUTcast network. A generator node sends streams to mirror servers, and these mirror servers deliver the content to end users. There is no intelligent switch in front of these servers; it is up to the end user to choose a mirror. Since the end user may be many hops away from the mirrors, jitter-free delivery of data from the mirrors may be impossible due to unpredictable network congestion.

Figure 3: A sample SHOUTcast network. A SHOUTcast generator feeds several SHOUTcast mirrors, and nodes connect directly to a mirror of their choice.

2.3 Content Distribution Networks

A content distribution network combines the advantages of server farms and caching proxies. Figure 4 shows a content distribution network. Replicas of the origin's data are transferred to servers named surrogates, located geographically far apart. When a node requests data, it first communicates with the request routing system, which forwards the node to the best surrogate. The measure of "best" can be derived from geographic location and from current network load and congestion. By locating the surrogates in various geographic locations, nodes have a higher chance of experiencing quick and jitter-free delivery of data from a close surrogate.

An advantage of the content distribution network is that the origin can be decoupled from the delivery network: the owner of the origin can contract out the management of the delivery of data to another organization. Organizations such as Digital Island (http://www.digitalisland.com) and Akamai (http://www.akamai.com) currently provide distribution infrastructures spread throughout the world to deliver discrete and continuous data.
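
As an illustration only (the report does not specify a formula), a request routing system might score each surrogate by a weighted combination of estimated network distance and current load; the weights and surrogate data below are hypothetical.

```python
# Minimal sketch of a request routing system: pick the surrogate with the
# lowest combined score of network distance (e.g. round-trip time) and load.
def best_surrogate(surrogates, distance_weight=1.0, load_weight=50.0):
    """surrogates: list of dicts with 'name', 'rtt_ms', and 'load' (0.0-1.0)."""
    def score(s):
        return distance_weight * s["rtt_ms"] + load_weight * s["load"]
    return min(surrogates, key=score)

surrogates = [
    {"name": "vancouver", "rtt_ms": 20, "load": 0.90},  # close but heavily loaded
    {"name": "calgary",   "rtt_ms": 35, "load": 0.30},  # slightly farther, mostly idle
    {"name": "toronto",   "rtt_ms": 80, "load": 0.10},
]
print(best_surrogate(surrogates)["name"])  # calgary: balances distance and load
```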

Figure 4: A content distribution network. The origin feeds a distribution system that pushes replicas to surrogates in Vancouver, Calgary, and Toronto; a request router directs each node to its best surrogate.

RealNetworks (http://www.realnetworks.com) provides a distribution technology called iQ that can be used in such global networks to distribute continuous data. RealNetworks iQ is concerned with the delivery of data streams from a source to servers that may be located throughout the world, such as in a content distribution network [6]. RealNetworks iQ allows for continuous media distribution or live broadcast distribution. It uses redundant methods to propagate streams to surrogate servers located throughout the world. These redundant methods include sending multiple copies of a stream over multiple networks and then merging them back together, as well as forward error correction. The servers in the iQ network act as peers and are able to share capacity among each other and react in case of a network failure [6].

2.4 Two Methods for Broadcast Distribution

The white paper Live Broadcast Distribution with RealSystem Server 8 [6] by RealNetworks discusses two methods for broadcast distribution. The earlier method uses two TCP connections and a UDP connection to deliver a data stream. The newer method uses just one UDP connection to deliver a data stream from a source to its destination. The earlier method has more latency and is not as efficient as the UDP-only method [6].
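
To illustrate the redundant-path idea (a sketch only, not RealNetworks' actual implementation), a receiver that gets the same stream over two networks can merge them by keeping the first copy of each sequence number it sees; the example packets are made up.

```python
# Minimal sketch of merging two redundant copies of the same packet stream:
# whichever network delivers a sequence number first wins, and a loss on one
# path is absorbed as long as the other copy arrives.
def merge_streams(*paths):
    """paths: iterables of (sequence_number, payload) from different networks."""
    seen = {}
    for path in paths:
        for seq, payload in path:
            seen.setdefault(seq, payload)   # keep the first copy only
    return [seen[seq] for seq in sorted(seen)]

# Hypothetical example: each path loses a different packet.
path_a = [(1, "p1"), (2, "p2"), (4, "p4")]          # lost packet 3
path_b = [(1, "p1"), (3, "p3"), (4, "p4")]          # lost packet 2
print(merge_streams(path_a, path_b))                # ['p1', 'p2', 'p3', 'p4']
```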

Figure 5a shows the earlier method. The UDP connection is used for the main distribution of live data, the return channel is used for notifying the source of lost packets, and the TCP control channel is used to control the data stream. Using the extra connections for lost packets and control introduces latency into the data stream.

Figure 5: Two methods for broadcast distribution. (a) A persistent TCP control channel, a unidirectional UDP data channel, and a return channel for resend requests connect the source to the receiver. (b) A single unidirectional UDP data channel connects the source to the receiver.

To reduce the latency in the data stream, the newer broadcast distribution method uses just one UDP connection. To add redundancy to this UDP channel, the stream is encoded using forward error correction (FEC) [6]; one example of forward error correction is the use of Reed-Solomon codes. Figure 5b shows this method. In addition to RealNetworks, another company developing a form of FEC is Digital Fountain (http://www.digitalfountain.com/).

3 Dynamic Network Load Balancing and Node Migration

The above technologies do not address two important areas for the distribution of continuous data: dynamic network load balancing and node migration. The first area not addressed is the dynamic addition of more surrogate servers to the network when demand for continuous data is high and the dynamic removal of surrogates when demand is low. Reducing the number of surrogates means that nodes have to be consolidated onto the remaining surrogates; this consolidation is the second important area. Both of these areas would add more scalability and more jitter-free performance to the technologies described above.
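
As a simplified illustration of the FEC idea (plain XOR parity rather than the Reed-Solomon or Digital Fountain codes mentioned above), a sender can add one parity packet per group so the receiver can rebuild any single lost packet without a return channel.

```python
# Simplified FEC sketch: one XOR parity packet per group of data packets.
# The receiver can reconstruct any single missing packet in the group.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """Return packets plus one parity packet (all packets must be equal length)."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(received, lost_index):
    """Rebuild the single packet at lost_index from the surviving packets."""
    survivors = [p for i, p in enumerate(received) if i != lost_index and p is not None]
    rebuilt = survivors[0]
    for p in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, p)
    return rebuilt

group = add_parity([b"AAAA", b"BBBB", b"CCCC"])   # 3 data packets + parity
damaged = [group[0], None, group[2], group[3]]    # packet 1 lost in transit
print(recover(damaged, 1))                        # b'BBBB'
```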

3.1 Dynamic Network Load Balancing

Figure 6 shows a content distribution network with surrogate servers. Two continuous data sources are distributed to master surrogates on the network by the distribution hubs. The distribution hubs may broadcast the continuous data to the master surrogates using multicast communication. The master surrogates may further distribute the data to slave surrogates. A grouping of a master surrogate and its slave surrogates is known as a surrogate group. Network nodes initially connect to one of the surrogates of the best surrogate group. A definition of best may refer not only to geographic closeness but also to the status of the network: a surrogate group that is further away geographically may be chosen for a node if there is network congestion between the node and its closest surrogate group.

Figure 6: Distributing streaming content from source to surrogate to end user. Source servers (src) in the source network feed distribution hubs (dh), which feed master surrogate servers and their slave surrogate servers (Ss) in the content distribution network; nodes connect to the surrogates across the Internet.

A surrogate group must promote, or bring online, another slave surrogate when the group reaches capacity. All additional requests for continuous data are then forwarded to the newly promoted slave surrogate, which receives its data stream from the surrogate master. Figure 7 shows the addition of a surrogate as the load on a surrogate group increases; in this example each surrogate can host a maximum of two nodes. In Figure 7a the group is at capacity and must promote the available offline surrogate. In Figure 7b the extra surrogate has been promoted and waits for a node connection. The next requesting node connects to the new surrogate in Figure 7c. When the network load on a surrogate group decreases, the slave surrogates will be demoted or taken offline. When this happens, all the nodes on the demoted slave surrogates must be consolidated onto the remaining surrogates. Node migration is discussed in Section 3.2.

Figure 7: The promotion of an offline surrogate (S_O) into a surrogate group of a master surrogate and slave surrogate servers (Ss), shown in three steps (a, b, c).
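
A minimal sketch (my illustration, not from the report) of the promotion and demotion decision for a surrogate group, assuming a fixed per-surrogate capacity such as the two-node limit of Figure 7; for simplicity it only demotes a surrogate once it is idle, whereas the report allows demotion followed by node migration (Section 3.2).

```python
# Minimal sketch of dynamic load balancing in a surrogate group: promote an
# offline surrogate when every online surrogate is full, demote an idle one
# when the group has spare capacity.
CAPACITY = 2  # nodes per surrogate, as in the Figure 7 example

class SurrogateGroup:
    def __init__(self, online, offline):
        self.online = {s: 0 for s in online}   # surrogate -> connected nodes
        self.offline = list(offline)

    def connect_node(self):
        if all(n >= CAPACITY for n in self.online.values()) and self.offline:
            self.online[self.offline.pop()] = 0          # promote (Figure 7b)
        surrogate = min(self.online, key=self.online.get)
        self.online[surrogate] += 1                      # attach node (Figure 7c)
        return surrogate

    def disconnect_node(self, surrogate):
        self.online[surrogate] -= 1
        if self.online[surrogate] == 0 and len(self.online) > 1:
            del self.online[surrogate]                   # demote an idle surrogate
            self.offline.append(surrogate)

group = SurrogateGroup(online=["master", "slave1"], offline=["slave2"])
for _ in range(5):
    print(group.connect_node())   # fills master and slave1, then promotes slave2
```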

3.2 Node Migration

Node migration is essential for the consolidation of surrogate servers and for dynamic load balancing. A requirement of node migration is to ensure that the continuous data is delivered such that the end user does not notice an interruption in the media. Node migration requires changes to buffer sizes on the node and a swapping protocol. Each node requires a buffer to smooth out jitter caused by unpredictable inter-packet delay.

3.2.1 Changing Buffer Sizes

Since the surrogates each receive the original stream at a different latency, nodes must adapt their buffer sizes accordingly to continue delivering media to the end user without interruption. Latency is not observable from the user's point of view except at connection time, where it is caused by end-to-end transmission delay and setup time. When a node moves to a surrogate whose latency from the original source is greater than that of its current surrogate, its buffer will shrink. When a node moves to a surrogate whose latency from the original source is less than that of its current surrogate, its buffer will grow. The details of these findings can be found in the report Buffer Management in a Continuous Content Distribution Network by Howe [7].

3.2.2 The Swapping Protocol

When a node swaps from a surrogate of greater latency than the new surrogate, the node must catch up its delivery to equal that of the other nodes connected to the same surrogate. If all the nodes connected to the surrogate are receiving the broadcast via multicast, then the recently swapped node must use a merge method similar to the one described in The Split and Merge Protocol for Interactive Video-on-Demand by Liao et al. [8].
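
A minimal sketch (my own model of the behaviour described in Section 3.2.1, not the formula from [7]) of how a node's playout buffer occupancy changes when it migrates between surrogates that lag the source by different amounts; the numbers in the example are made up.

```python
# Minimal sketch: a node's playout buffer, measured in seconds of media,
# shrinks or grows by the latency difference when the node migrates between
# surrogates that lag the original source by different amounts.
def buffer_after_migration(buffer_s, old_latency_s, new_latency_s):
    """Return the buffer occupancy right after switching surrogates.

    buffer_s:       seconds of media currently buffered ahead of playout
    old_latency_s:  lag of the current surrogate behind the original source
    new_latency_s:  lag of the new surrogate behind the original source
    """
    new_buffer = buffer_s + (old_latency_s - new_latency_s)
    return max(new_buffer, 0.0)   # a negative value would mean an underrun

# Moving to a higher-latency surrogate shrinks the buffer ...
print(buffer_after_migration(buffer_s=4.0, old_latency_s=1.0, new_latency_s=2.5))  # 2.5
# ... moving to a lower-latency surrogate grows it.
print(buffer_after_migration(buffer_s=4.0, old_latency_s=2.5, new_latency_s=1.0))  # 5.5
```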

4 Conclusion

Two important areas of continuous media distribution are not addressed by current technologies: dynamic network load balancing and node migration. Both would aid the scalability and jitter-free delivery of continuous content distribution networks, and both require further investigation.

References

[1] G. Coulouris, J. Dollimore, and T. Kindberg. Distributed Systems: Concepts and Design. Addison-Wesley, third edition, 2000.

[2] Rob Glaser. RealSystem iQ: transforming digital media delivery. Public announcement video stream, http://www.realnetworks.com/realsystem/, December 2000. Date viewed: April 5, 2001.

[3] M. Day, B. Cain, G. Tomlinson, and P. Rzewski. A model for content internetworking. Network Working Group Internet-Draft, February 2001.

[4] RealSystem Proxy 8 overview. RealSystem iQ white paper, RealNetworks, http://www.realnetworks.com/realsystem/, December 2000.

[5] SHOUTcast online documentation. Nullsoft, http://www.shoutcast.com/support/docs/. Date viewed: April 5, 2001.

[6] Live broadcast distribution with RealSystem Server 8. RealSystem iQ white paper, RealNetworks, http://www.realnetworks.com/realsystem/, December 2000.

[7] Anthony J. Howe. Buffer management in a continuous content distribution network. Technical report, University of Victoria, March 2001. Draft.

[8] Wanjiun Liao and Victor O. K. Li. The split and merge protocol for interactive video-on-demand. IEEE MultiMedia, 4(4):51-62, October-December 1997.