Scaling Server-Based Channel-Change Acceleration to Millions of IPTV Subscribers




Proceedings of the 2012 IEEE 19th International Packet Video Workshop, May 10-11, 2012, Munich, Germany

Marc Mignon, Belgacom, Brussels, Belgium (marc.mignon@belgacom.be)
Koen Bouckhout, Joshua Gahm and Ali C. Begen, Cisco, San Jose, CA 95134 USA ({kbouckho, jgahm, abegen}@cisco.com)

Abstract: Channel-change times in multicast-based IPTV distribution networks are often slow compared to analog video networks, where the channel change is almost instant. One way to overcome these relatively slow channel changes is server-based channel-change acceleration based on the rapid acquisition method standardized by the IETF. Concerns about the potentially large backbone traffic and bandwidth requirements, however, have prevented operators from deploying such a solution on a large scale. In this paper, we explain techniques to overcome this limitation and present data from a real-life deployment. We show how this solution is deployed in a scalable way to almost a million IPTV set-top boxes while limiting the required backbone traffic and server capacity.

I. INTRODUCTION

There have been several proposals from industry and academia to accelerate the channel changes (also known as zapping) in IPTV. These fall into several broad categories: solutions based on video coding and processing, solutions at the network level, and hybrid approaches that combine video and network-level techniques. For an extensive list of references, refer to [1, 2].

Although the variety of solutions proposed so far shows that the problem of accelerating channel changes in IPTV can be approached in many ways, the scalability and the operational aspects of the solution become crucial in large-scale IPTV deployments. IPTV providers require a scalable channel-change acceleration solution that minimizes their capital and operational expenditures while providing a high quality of experience, thus reducing subscriber churn [3, 4]. With this goal in mind, a standard solution for reducing the acquisition delay in generic multicast applications has been developed in the Audio/Video Transport (AVT) Working Group of the Internet Engineering Task Force (IETF) [5]. This solution benefits from the same protocol machinery used for repairing packet losses through retransmissions in multicast applications.

To date, several independent implementations of [5] have been developed (an open-source implementation is available at ftp://ftpeng.cisco.com/ftp/vqec/). However, adoption by IPTV operators has been rather slow and cautious. Operators have been concerned by the potentially large backbone traffic involved and the corresponding size of the needed investment in additional infrastructure, by the complexity of addressing the variety of access network technologies and subscriber configurations, and by the risk of adding a new component of critical importance to the service delivery chain.

Belgacom has been one of the first European telcos to deploy this technology for channel-change acceleration and retransmission-based loss repair on a large scale, on almost a million IPTV set-top boxes. The key feature that made this deployment feasible was the development of a solution to flexibly configure, on a per set-top box level, the amount of overhead bandwidth usable for rapid channel change and retransmission, in order to keep the total traffic at a manageable level while being able to address the variety of access line bandwidths and in-home configurations.
In this paper, we explain why configurable overhead bandwidth, on a per set-top box basis, is essential to enable mass deployment in a telco environment, and what solution Belgacom developed in order to achieve this. We then show, based on data measured on its operational platform, that Belgacom was able to reduce the required server capacity by a factor of eight, thus reaping substantial savings in the cost of the solution and making it highly scalable.

II. PRINCIPLES OF RAPID ACQUISITION

Emerging IPTV deployments use the Real-time Transport Protocol (RTP) to deliver media content. RTP [6] provides sequence numbers that can be used to maintain packet ordering and detect losses, as well as timing information that can be used to measure network jitter and synchronize timing between sender(s) and receiver(s). RTP also defines a control protocol, the RTP Control Protocol (RTCP), which is used for signaling among senders and receivers. RTCP signaling provides, among other things, maintenance of session state between senders and receivers, and reporting of sender statistics to receivers and receiver statistics to senders.

The flow diagram of rapid acquisition is illustrated in Fig. 1, where we have an RTP source, an RTP receiver (a set-top box that might employ one or more IPTV clients) and a retransmission server that is also a feedback target and provides the support for rapid acquisition. The retransmission server semi-permanently joins the multicast sessions and receives the RTP streams it wishes to cache temporarily to perform the retransmission-based loss-repair functions. For rapid acquisition support, it also parses the incoming streams, looking for certain information that facilitates accelerated channel changes. The rapid acquisition process can be summarized as follows; the encodings of the protocol messages are detailed in [5], and some performance results have been previously presented in [1, 2].

1) The RTP receiver sends a Multicast Leave message to its upstream multicast router to leave the current multicast session and stop receiving the currently viewed channel.

2) The RTP receiver sends a feedback message to the feedback target asking for rapid acquisition for the session it wants to join. In this feedback message, the RTP receiver can specify its specific requirements.

3) The retransmission server receives the feedback message and decides whether or not to accept the request. If the retransmission server accepts the request, it sends a message to the RTP receiver that describes the burst the retransmission server will generate and send, including an indication of when the RTP receiver should join the new multicast session.

[Fig. 1. Flow diagram for rapid acquisition. The diagram shows the multicast (RTP) source, the retransmission server, the core router in the IP/MPLS core network, the aggregation router in the metro aggregation and distribution network, and the IPTV set-top box (RTP receiver) on the access network, together with the multicast flows, Join/Leave messages, unicast messages and the unicast burst exchanged in steps 1 through 6.]

The unicast burst stream consists of a retransmission of a specific block of packets, which allows the RTP receiver to begin displaying video without waiting for a certain random access point to arrive in the multicast flow. If the retransmission server denies the request, it informs the RTP receiver immediately.

4) If the retransmission server accepts the request, it transmits the unicast burst data and any additional message(s) needed to carry the key information about the new channel. The key information is used to prime the decoder inside the IPTV set-top box so that it can start decoding sooner. The unicast burst continues at a higher-than-natural rate until it catches up with the real-time multicast flow. The sustainable burst rate depends on the access network link characteristics.

5) At the appropriate moment (as indicated or computed from the burst parameters), the RTP receiver joins the new multicast session by sending a Multicast Join message to its upstream multicast router.

6) The RTP receiver starts receiving the multicast flow and completes the rapid acquisition.

From a deployment perspective, it is vital to note that the steps described above for rapid acquisition use the existing protocols [6-8] that are already being widely adopted for providing loss-repair capabilities to IPTV distribution networks. With this approach, rapid channel-change support can be deployed quickly, while maintaining interoperability with the existing protocols and infrastructure.

It should be noted that the RTP receiver does not, generally, immediately join the multicast session at the time it makes a request for rapid acquisition. The reason is that if the RTP receiver were to join the multicast session for the new channel immediately, in many cases there would not be enough bandwidth left over on the access link towards the RTP receiver to allow the unicast burst sent from the retransmission server to catch up with the multicast flow in a reasonable amount of time. To see this, imagine that the access link to the RTP receiver provides up to 5 Mbps of available bandwidth and the new channel has a native rate of 4 Mbps. If the RTP receiver were to join the multicast session immediately, and if the selected starting point for the unicast burst stream is one second behind the current position of the multicast flow, then it would take the RTP receiver four full seconds, using the remaining 1 Mbps of bandwidth, to receive all of the unicast burst stream and eventually catch up with the multicast flow. During this time, the decoder could not begin to play out the video until nearly four seconds had elapsed, because playout at 4 Mbps would quickly overtake the buffer fill at 1 Mbps, leading to a buffer underrun.

In order to prevent the buffer underrun scenario described above, the RTP receiver delays joining the multicast session for a period of time, leaving the full access link bandwidth available for the retransmission server to send the unicast burst.
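To make the arithmetic of this example concrete, the following sketch (ours, not from the paper; the function name is illustrative) computes the catch-up time with and without an immediate multicast join:

    def catchup_time_s(link_bw_bps, channel_rate_bps, lag_s):
        # With an immediate join, the multicast flow occupies channel_rate_bps,
        # so the burst backlog (lag_s seconds of video) drains only at the
        # leftover bandwidth on the access link.
        leftover_bps = link_bw_bps - channel_rate_bps
        backlog_bits = lag_s * channel_rate_bps
        return backlog_bits / leftover_bps

    # Immediate join: 1 s of backlog at 4 Mbps refills at only 1 Mbps.
    print(catchup_time_s(5e6, 4e6, 1.0))   # -> 4.0 seconds before playout is safe

    # Delayed join: the burst gets the full 5 Mbps, so the same backlog
    # transfers in 4 Mb / 5 Mbps = 0.8 seconds.
    print(1.0 * 4e6 / 5e6)                 # -> 0.8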
Continuing with the example from above, the retransmission server sends the unicast burst using the full 5 Mbps bandwidth of the access link, thus rapidly filling the decoder buffer to the point where playout may begin without danger of a buffer underrun. Much of the complexity of the rapid acquisition method comes from the need for the RTP receiver to eventually join the multicast session, and to do so in a way that the total consumed bandwidth on the access network never exceeds the available bandwidth, while at the same time ensuring that the RTP receiver receives at least one copy of every RTP packet starting with the selected random access point and ending with the first packet received from the multicast flow. Complicating the problem is the fact that the time from when the RTP receiver asks to join the multicast session to when the multicast flow actually begins to arrive on the access link can be quite variable and unpredictable.

There are a number of different rapid acquisition algorithms that can accomplish a clean splice of the unicast burst from the retransmission server with the multicast flow for the new channel without overrunning the available access link bandwidth. One such algorithm, the one used in the deployment that is the subject of this paper, works as follows. After the retransmission server has been sending the unicast burst using the full available access link bandwidth for a period of time, the server reduces the sending rate to use only the excess link bandwidth (the bandwidth left over after the multicast session is joined, i.e., 1 Mbps in the above example), and at the same time the RTP receiver requests to join the multicast session. The timing of these two events is coordinated via the signaling between the RTP receiver and the retransmission server at the initiation of the rapid acquisition. By the time of these coordinated events, the RTP receiver will have built up its buffer to the point where there will not be an underrun even if the multicast join takes its maximum time to complete. The amount of time, and the size of the data, required to prefill the decoder's buffer to the point where there is no danger of an underrun during the multicast join is precomputed by the retransmission server, and is the subject of the next section.

III. CAPACITY ESTIMATION BASED ON THE e-FACTOR

The total size of the data sent from the retransmission server to the IPTV client as part of the unicast burst, together with the peak per-client channel-change rate, determines how much total backbone network bandwidth is required to support accelerated channel changes in an IPTV deployment. As described in the previous section, the size and duration of the unicast burst are computed by the retransmission server, and are calculated to ensure that the IPTV client does not experience a buffer underrun during the period when it is attempting to join the multicast session for the new channel. It turns out that the required size and duration of the unicast burst, and therefore the required backbone network bandwidth, depend greatly on the fraction of excess bandwidth available on the IPTV client's access link, as compared with the bandwidth needed on the access link for the primary multicast flow of the channel. In the example given above, where an IPTV client with a 5 Mbps access link is attempting to join a 4 Mbps channel, this excess bandwidth fraction (also referred to as the e-factor) is $(5 - 4)/4 = 25\%$.

The relationship between the e-factor, denoted by $e$, and the total backbone network bandwidth required to support rapid acquisition can be derived based on Fig. 2.

[Fig. 2. The time plot for an IPTV client doing rapid acquisition, showing real time vs. stream time.]

Suppose the burst stream starts at $t = 0$ at a normalized rate of $(1 + e)$ and continues at that rate until $t = S$, while the decoder's normalized consumption rate is 1. Thus, at $t = S$, the amount of buffer on the IPTV client is $eS$. Let $A(S)$ be the maximum time gap that the unicast burst can repair without encountering an underrun, if we stop the burst at $t = S$. This assumes that starting at $t = S$, we will have a maximum of $A(S)$ consecutive packets missing after the built-up buffer, and that these are being filled in at a normalized rate $e$ while the buffer is being played out at a rate of 1. This means that until we close the gap entirely, we are heading towards an underrun at a rate of $(1 - e)$. Since the buffer size is $eS$, the underrun occurs after a duration of $\frac{eS}{1 - e}$. During this period, the normalized repair rate is $e$. Thus,

  $A(S) = \frac{e^2 S}{1 - e}$.   (1)

Assume that the random access point we are starting the unicast burst from is $L$ time units behind the multicast flow, and the retransmission server has buffered all the packets from that point to the current time. If we start sending the unicast burst from time 0 at rate $(1 + e)$, then we run out of data in the buffer at

  $t = S_{max} = \frac{L}{e}$.   (2)

At this point, we can no longer send the unicast burst at a rate higher than 1, so the maximum gap that can be filled does not increase as $S$ increases. Thus, we have

  $A(S) = \begin{cases} \frac{e^2 S}{1 - e}, & S \le S_{max} \\ \frac{eL}{1 - e}, & \text{otherwise.} \end{cases}$   (3)

Now suppose we stop the unicast burst at $t = S$ and the IPTV client joins the multicast session sometime between $S$ and $S + J$. The worst case occurs when the multicast join happens at the latest time, i.e., $S + J$. The gap in this case is

  $X(S) = \begin{cases} L - eS + J, & S \le S_{max} \\ J, & \text{otherwise.} \end{cases}$   (4)

In the following derivations, we assume $e < 1$. The corresponding derivations for $e \ge 1$ are omitted due to lack of space. Setting (3) equal to (4) and solving for $S$ to find when there is exactly enough buffering to do a full repair in the worst-case scenario, we get

  $S_{Join} = \frac{1 - e}{e} (L + J)$,   (5)

and this only works if $S_{Join} \le S_{max}$. In other words,

  $\frac{1 - e}{e} (L + J) \le \frac{L}{e}$.   (6)

Thus, for the selected random access point to be feasible as a starting point for the unicast burst, the following condition must be satisfied:

  $L \ge \frac{1 - e}{e} J$.   (7)

The total size of the data sent as part of the rapid acquisition is $(1 + e) S_{Join} + X(S_{Join}) = \frac{L + J}{e}$. If we denote the GoP duration by $t_{GoP}$, the mean value of $L$ becomes its minimum value plus $t_{GoP}/2$. That is,

  $\bar{L} = \frac{1 - e}{e} J + \frac{t_{GoP}}{2}$.   (8)

Thus, the mean size of the data sent during the rapid acquisition equals

  $\frac{J}{e^2} + \frac{t_{GoP}}{2e}$.   (9)

A key feature of (9) is that the total size of the data sent during rapid acquisition can be very large at low e-factor values. Fig. 3 illustrates this relationship for a set of realistic values of the variables, where it can be seen that for low values of the e-factor the bandwidth required for rapid acquisition can become prohibitive. From this example, it can be seen that when planning the deployment of rapid acquisition on a provider network, it is critical to ensure that the excess bandwidth available to each IPTV client is made as large as possible.
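The closed-form results above are easy to exercise numerically. The following sketch (our illustration; the helper names are ours) implements (5), (7) and (9) and evaluates them for a few e-factor values, using the SD-channel parameters from the caption of Fig. 3:

    # Burst-size model of Section III (valid for e < 1).
    # All durations are expressed in seconds of stream time.
    def s_join(e, L, J):
        """Eq. (5): burst duration at which worst-case buffering just suffices."""
        return (1.0 - e) / e * (L + J)

    def min_L(e, J):
        """Eq. (7): minimum lag of a feasible random access point."""
        return (1.0 - e) / e * J

    def mean_burst_seconds(e, J, t_gop):
        """Eq. (9): mean data per rapid acquisition, in seconds of stream time.
        Multiply by the channel bitrate R to convert to bits."""
        return J / e**2 + t_gop / (2.0 * e)

    J, T_GOP, R = 0.05, 0.8, 2.6e6   # 50 ms join latency, 800 ms GoP, SD at 2.6 Mbps
    for e in (0.1, 0.25, 0.5, 0.9):
        L_mean = min_L(e, J) + T_GOP / 2          # eq. (8)
        mb = mean_burst_seconds(e, J, T_GOP) * R / 1e6
        print(f"e = {e:.2f}: S_Join = {s_join(e, L_mean, J):5.2f} s, "
              f"mean burst = {mb:6.2f} Mb")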
If we denote the average channel bitrate by $R$ and the channel-change request arrival rate by $\lambda$, the required server capacity (the mean burst size of (9), converted to bits by $R$ and scaled by the arrival rate) becomes

  $\left( \frac{J}{e^2} + \frac{t_{GoP}}{2e} \right) R \lambda$.   (10)

[Fig. 3. Influence of the e-factor on the server capacity required per IPTV client (Mbps), for a standard-definition (SD) channel encoded ($R$) at 2.6 Mbps and with a GoP duration ($t_{GoP}$) of 800 ms. The Multicast Join latency is 50 ms and the channel-change rate ($\lambda$) is 0.2 per second per IPTV client.]
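The per-client capacity curve of Fig. 3 can be reproduced directly from (10); the sketch below (ours, using the parameter values from the figure caption) shows how sharply the requirement grows as the e-factor shrinks:

    # Per-client server capacity, eq. (10), with the Fig. 3 caption parameters.
    J, T_GOP, R, LAM = 0.05, 0.8, 2.6e6, 0.2

    def capacity_per_client_mbps(e):
        """Required server capacity per IPTV client, in Mbps."""
        return (J / e**2 + T_GOP / (2 * e)) * R * LAM / 1e6

    for pct in (10, 20, 40, 60, 80, 100):
        e = pct / 100
        print(f"e-factor {pct:3d}%: {capacity_per_client_mbps(e):5.2f} Mbps per client")
    # The requirement blows up as e shrinks (the J/e^2 term dominates),
    # which is why low e-factors make server dimensioning prohibitive.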

IV. DEPLOYMENT IN A TELCO ENVIRONMENT

A. Constraints on the Access Line Bandwidth and Subscriber Configuration

The implementation of rapid acquisition rests entirely on the guaranteed availability of additional bandwidth, on top of the bandwidth already used by the multicast flow(s). The amount of extra bandwidth usable has a critical impact on the traffic generated by the retransmission server in the backbone network, as well as on the amount of channel-change acceleration achieved. While this bandwidth is used for only short amounts of time (from a fraction of a second up to several seconds), it must be there when it is needed, and furthermore at a QoS level compatible with the quality expectations of broadcast TV services. Typically, it is "stolen" from low-QoS services, because the impact on these services is unnoticeable in practice if it is limited to a few seconds only.

While this sounds simple in principle, the practical implementation in a telco environment faces a major obstacle: the amount of bandwidth available varies greatly among the IPTV clients, because of the multiplicity of access line technologies and subscriber-specific home configurations. Telco operators typically use a mix of access line technologies to deliver broadband services to their subscribers: copper-based DSL technologies, mainly ADSL2+ from the central office or VDSL2 from the street cabinet, and fiber-based technologies (FTTH). Since the maximum bandwidth available to the IPTV client depends on the access technology used and on the distance between the home and the street cabinet or the central office, this typically results in a set of possible access bandwidth values, with subscribers spread over this set as a function of the technical characteristics of the operator's access network. This multiplicity of access bandwidth values then results in different service profiles offered to the subscribers. The main components of such a service profile are the amount of bandwidth that can be guaranteed with QoS (which determines the number of simultaneous HD and SD streams that can be delivered to that home), and the amount of best-effort bandwidth (used mostly for fast Internet access).

The implementation of any bandwidth allocation mechanism for rapid channel change must take these constraints into account. Furthermore, it must also be able to cope with multiple possible in-home subscriber configurations, including but not limited to (i) the existence of one or several IPTV set-top boxes, (ii) the presence of newer or legacy IPTV set-top boxes (with or without rapid channel-change support), (iii) the existence of set-top boxes with or without local recording functionality (thus, with or without multiple IPTV clients in a single IPTV set-top box), (iv) HD-capable vs. SD-only set-top boxes, and (v) different types of home gateways (e.g., hybrid, bridged/routed or fully routed).

B. Dimensioning and Configuration of Unicast Bursts

The simplest solution to cope with the above complexity would be to use one fixed e-factor value, i.e., the minimum value that can be supported by all possible access line bandwidths, service profiles and in-home configurations. In practice, however, this would mean a very low e-factor value, e.g., an e-factor as low as 10%, or maybe even less. As discussed in Section III, this would lead to extremely high unicast traffic, which would itself require considerable investments in backbone network capacity and server infrastructure, making the deployment economically less feasible. In order to benefit from the largest possible e-factor values, thereby minimizing the resulting traffic, we implemented a flexible per set-top box configuration of the e-factor values.
Below, we explain the general principles we use to determine the per set-top box e-factor values.

C. Calculation of System and Set-Top Box Configuration Parameters

The allowed overhead bandwidth, temporarily available on the access link, is calculated on the basis of the following user configuration information, which is automatically provisioned in the middleware backend system: (i) the subscriber's service profile (maximum number and types of TV streams allowed for this subscriber), (ii) the additional temporary bandwidth available with QoS, and (iii) the status of the retransmission and rapid channel-change functions (enabled/disabled).

This bandwidth is then statically split among all the IPTV set-top boxes within the home, in proportion to each set-top box's service profile characteristics. Ideally, this would be a truly dynamic mechanism, which would also take into account the respective set-top boxes' activity states (standby vs. active) and update the values at each change of activity state. However, the required extra complexity and operational risks often outweigh the relative additional benefits compared to a static solution, which already captures significant gains.

After this static allocation, separate e-factor values are calculated for each set-top box within a home, for HD and SD channels, and for retransmission and rapid channel change. When several streams are allowed on a multi-tuner set-top box, this bandwidth is further statically split among the different channels in proportion to their types. For a unicast burst, however, the set-top box is allowed to overbook the entire overhead, because it is assumed that only one tuner will do a channel change at any time on a given set-top box. If the above overhead bandwidths are under a given threshold for a given set-top box, the rapid channel-change feature is simply disabled to avoid the generation of excessive traffic. The calculation of the set-top box specific parameters is done partly in the middleware backend and partly in the set-top box itself; a sketch of this allocation logic is given below. In order to be effectively used by the retransmission server, the e-factor value is passed within the RTCP feedback message sent by the IPTV set-top box with every retransmission and rapid channel-change request.
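The sketch below illustrates one plausible form of this static allocation (the weights, threshold value and field names are our assumptions for illustration, not Belgacom's production parameters):

    # Illustrative sketch of the static per set-top box allocation described
    # above; not Belgacom's actual middleware code.
    MIN_EFACTOR = 0.10  # assumed threshold below which rapid channel change is disabled

    def allocate_efactors(overhead_bw, stb_profiles, hd_rate=9.0e6, sd_rate=2.6e6):
        """Split the home's QoS overhead bandwidth among its set-top boxes in
        proportion to a per-STB weight, then derive HD/SD e-factors per STB."""
        total_weight = sum(p["weight"] for p in stb_profiles)
        result = {}
        for p in stb_profiles:
            stb_bw = overhead_bw * p["weight"] / total_weight
            e_hd = stb_bw / hd_rate  # excess bandwidth relative to channel rate
            e_sd = stb_bw / sd_rate
            result[p["name"]] = {
                "e_hd": e_hd if e_hd >= MIN_EFACTOR else None,  # None: disabled
                "e_sd": e_sd if e_sd >= MIN_EFACTOR else None,
            }
        return result

    # Example: a home with 4 Mbps of temporary QoS overhead and two STBs,
    # one HD-capable (higher weight) and one SD-only.
    print(allocate_efactors(4.0e6, [
        {"name": "living-room-hd", "weight": 2},
        {"name": "bedroom-sd", "weight": 1},
    ]))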
V. RESULTS AND OBSERVATIONS

In this section, we present data gathered from Belgacom's live deployment of the rapid channel-change solution. At the time of writing this paper, close to a million IPTV clients have this solution enabled in Belgacom's network. The purpose is to measure the effective performance of the solution explained above. In particular, this data allows us to estimate the potential gains achieved in backbone network and retransmission server capacity.

A. Channel-Change Rates

Fig. 4 shows a graph of the number of IPTV clients effectively tuned to a channel, measured on one retransmission server among Belgacom's retransmission server pool, over four days, Friday through Monday. The graph plots the average value over each five-minute interval. As the graph shows, at the peak hour (around 8pm) the maximum number of IPTV clients tuned to a channel, and thus able to send rapid channel-change requests, was around 20,000 on this particular server.

Fig. 5 shows, for the same server and the same time period, the maximum arrival rates of the rapid channel-change requests. The graph plots the maximum value observed over each five-minute period. Specifically, the monitoring platform samples five real-time values per minute, thus 25 real-time values over five minutes, and keeps the maximum value observed. The maximum value observed over the four days was around 92 requests per second, on Sunday at 10pm. There were 18,300 IPTV clients active at that time. If we divide this maximum rapid channel-change request arrival rate by the number of IPTV clients that were active at that time, we get a value of 0.005 requests per second per client, as the short check below illustrates.
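A quick sanity check of this per-client figure (a sketch using the numbers quoted in this section):

    # Peak aggregate rate divided by the concurrently active clients on this server.
    peak_rate = 92           # rapid channel-change requests per second (Sunday, 10pm)
    active_clients = 18_300  # IPTV clients tuned at that moment
    print(f"{peak_rate / active_clients:.4f} requests/s per client")  # -> ~0.0050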

In order to get a sense of the statistics of the request arrival rates, we calculated the cumulative distribution of the values measured for each five-minute interval on Sunday between 6 and 11pm (when there were more than 14,500 IPTV clients active). The distribution shows that 85% of the time, the rapid channel-change request arrival rate was below half of the maximum value experienced during the whole measurement period.

Since the Belgacom IPTV service includes both HD and SD channels, we also measured the request arrival rates separately for HD and SD channels. We observed that 15% of the requests were towards an HD channel, and this value was extremely stable over time. This value also corresponded to the ratio of the number of HD channels in the offered service.

[Fig. 4. Average number of active IPTV clients tuned to a channel, measured on one retransmission server in the Belgacom server pool, during four consecutive days. Each point of the graph is the average number during a five-minute interval.]

[Fig. 5. Arrival rate of rapid channel-change requests coming from the IPTV clients, measured on the same server as Fig. 4, during four consecutive days. Each point of the graph is the maximum value among 25 measurements made during a five-minute interval.]

B. e-factor Statistics

Fig. 6 shows histograms of the e-factor values configured for HD (encoded at 9 Mbps) and SD (encoded at 2.6 Mbps) channels. These histograms show the wide range of values generated by the flexible configuration mechanism of the e-factor, from 10% up to 500%. They also show the difference between the e-factor values used for SD channels and those used for HD channels: for HD channels, the majority of the IPTV clients use an e-factor value lower than 100%, while for SD channels, the majority of the IPTV clients use an e-factor above 100%.

[Fig. 6. Histogram of the e-factor values configured on the set-top boxes for HD and SD channels (bins: up to 10%, 11-25%, 26-50%, 51-100%, 101-200% and 201-500%). The plot shows the percentage of set-top boxes that have an e-factor within the range indicated under each bin.]

C. Traffic Output by the Retransmission Server

Fig. 7 shows the downstream traffic (from the retransmission server towards the IPTV set-top boxes) measured on the retransmission server during the same period as above. The graph plots the maximum value measured in each five-minute interval. Here again, the monitoring platform samples five real-time values per minute, thus 25 real-time values over five minutes, and keeps the maximum value observed. In Fig. 7, one immediately observes the same peak on Sunday around 10pm that was observed in Fig. 5. The maximum traffic observed was 550 Mbps, while there were 18,300 active IPTV clients. Fig. 7 enables us to directly quantify the gain that we obtained by
Using the following values that characterize Belgacom s deployment Multicast Join latency = 1 ms, t GoP = 8 ms, R HD = 9 Mbps, R SD = 2.6 Mbps, 15% of channel changes are towards an HD channel (Thus, R =.85R SD +.15R HD =3.56 Mbps), and a peak channel-change request arrival rate of 92 requests per second, (1) results in a maximum backbone traffic of 4.6 Gbps. 111

However, as Fig. 7 shows, we actually observed a maximum backbone traffic of 550 Mbps. This means that our flexible provisioning of the e-factor values enabled us to achieve a reduction of the server traffic by a factor of eight. This reduction implies savings in terms of not only capital expenditures, such as investment in retransmission servers and network deployment, but also operational expenditures. We estimate that this solution has allowed us to reduce the total number of retransmission servers by half.

An interesting observation was made when Belgacom put a new generation of HD encoders into service, which allowed reducing the encoding bitrate from 9 Mbps to 6.5 Mbps without losing quality. Fig. 8 shows the evolution of the backbone traffic on one server during the week in which the migration took place (during the night from the 3rd day to the 4th day of the graph).

[Fig. 8. Change observed in the downstream traffic from the retransmission server towards the IPTV set-top boxes as a result of the reduction in HD channel bitrates from 9 Mbps to 6.5 Mbps. The change took place during the night from the 3rd to the 4th day of the graph. The result is a rough halving of the traffic.]

In Fig. 8, we observe that the downstream traffic from the retransmission server was roughly halved when the HD channel encoding bitrate was reduced. However, according to (10), such a bitrate reduction should have yielded a saving of merely 10%, since only 15% of the channel changes are towards an HD channel ($R$ decreased by 10%, from 3.56 Mbps to 3.18 Mbps). This can be explained by the fact that the reduction in the HD channel encoding bitrate caused a large shift in the histogram of e-factor values for HD streams towards higher values, yielding a further considerable reduction in traffic, due to the considerable sensitivity of the traffic to the e-factor. This shift can be observed by comparing Fig. 6 and Fig. 9, which shows the new histogram after the reduction in HD channel bitrates. The e-factors of all set-top boxes are now above 50%, whereas previously almost half of the values were lower than 25%. The dramatic impact of e-factor values on the traffic generated by a retransmission server was derived in Section III and demonstrated in Fig. 3; a short numeric illustration of this effect is sketched below.

[Fig. 9. The new histogram of the e-factor values configured on the set-top boxes for HD channels, after the reduction in HD channel bitrates. Values have shifted to 50% and up.]
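The sketch below (our illustration; the 3 Mbps overhead value is an assumption, not a measured figure) shows this effect for a single access line: cutting the HD rate from 9 to 6.5 Mbps raises the e-factor enough that the per-change burst size falls by roughly half, not by 10%:

    # Why cutting the HD bitrate roughly halved the traffic: for a fixed
    # access-link overhead, lowering the channel rate raises the e-factor,
    # and the burst size per change (eq. (9), times R) falls much faster
    # than R itself.
    J, T_GOP = 0.1, 0.8   # deployment values quoted above

    def burst_bits(overhead_bw, channel_rate):
        e = overhead_bw / channel_rate          # e-factor for this client
        return (J / e**2 + T_GOP / (2 * e)) * channel_rate

    overhead = 3.0e6  # assumed 3 Mbps of QoS overhead on one access line
    before = burst_bits(overhead, 9.0e6)   # HD at 9 Mbps   -> e ~ 33%
    after = burst_bits(overhead, 6.5e6)    # HD at 6.5 Mbps -> e ~ 46%
    print(f"burst per HD change: {before/1e6:.1f} Mb -> {after/1e6:.1f} Mb "
          f"({(1 - after/before)*100:.0f}% less)")  # roughly half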
VI. CONCLUDING REMARKS

We examined the issues faced by a large-scale deployment of server-based channel-change acceleration and, more specifically, quantified the relationship between the configurable overhead bandwidth needed for channel-change acceleration and the resulting backbone network bandwidth and required server capacity. First, we theoretically derived the relationship between the e-factor and the network traffic, and showed that the sensitivity of the traffic to the e-factor value is very large at low e-factor values. We then proposed a solution based on the flexible configuration of the e-factor per set-top box, which minimizes this issue when deploying server-based channel-change acceleration on a bandwidth-constrained access network and across multiple subscriber configurations. Based on real-life measurements on Belgacom's deployment, we showed that this solution helped reap important savings in network traffic and server capacity compared to a fixed e-factor approach, which made the overall solution scalable and suitable for large deployments.

Further optimizations and gains could be achieved by further increasing the flexibility of the solution, i.e., by dynamically adapting the value of the e-factor to the activity status of the IPTV clients, especially in the case of subscribers with multiple IPTV set-top boxes. We believe this could be an interesting path to pursue for further improvements of the solution.

Acknowledgments: We would like to thank our colleagues at Belgacom and Cisco (specifically Nicolas Richir, Joris Dewit, Christophe Dyzers and Chris Bottledoorn) for their support in the design and deployment of the solution, and for their constructive suggestions and contributions to this study.

REFERENCES

[1] A. C. Begen, N. Glazebrook, and W. V. Steeg, "A unified approach for repairing packet loss and accelerating channel changes in multicast IPTV," in IEEE Consumer Communications and Networking Conf. (CCNC), 2009.
[2] A. C. Begen, N. Glazebrook, and W. V. Steeg, "Reducing channel-change times in IPTV with real-time transport protocol," IEEE Internet Comput., vol. 13, no. 3, pp. 40-47, May/June 2009.
[3] R. Kooij, K. Ahmed, and K. Brunnström, "Perceived quality of channel zapping," in IASTED Int. Conf. Communication Systems and Networks, 2006.
[4] A. Takahashi, D. Hands, and V. Barriac, "Standardization activities in the ITU for a QoE assessment of IPTV," IEEE Commun. Mag., vol. 46, no. 2, pp. 78-84, Feb. 2008.
[5] RFC 6285, "Unicast-based rapid acquisition of multicast RTP sessions." [Online]. Available: http://www.ietf.org/rfc/rfc6285.txt
[6] RFC 3550, "RTP: A transport protocol for real-time applications." [Online]. Available: http://www.ietf.org/rfc/rfc3550.txt
[7] RFC 4588, "RTP retransmission payload format." [Online]. Available: http://www.ietf.org/rfc/rfc4588.txt
[8] RFC 5760, "RTP control protocol (RTCP) extensions for single-source multicast sessions with unicast feedback." [Online]. Available: http://www.ietf.org/rfc/rfc5760.txt