Delving into Internet Streaming Media Delivery: A Quality and Resource Utilization Perspective


Lei Guo 1, Enhua Tan 1, Songqing Chen 2, Zhen Xiao 3, Oliver Spatscheck 4, and Xiaodong Zhang 1

1 Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, USA. {lguo, etan, zhang}@cse.ohio-state.edu
2 Department of Computer Science, George Mason University, Fairfax, VA 22030, USA. sqchen@cs.gmu.edu
3 IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532, USA. xiaozhen@us.ibm.com
4 AT&T Labs-Research, 180 Park Ave., Florham Park, NJ 07932, USA. spatsch@research.att.com

ABSTRACT
Modern Internet streaming services have utilized various techniques to improve the quality of streaming media delivery. Despite the characterization of media access patterns and user behaviors in many measurement studies, few studies have focused on the streaming techniques themselves, particularly on the quality of streaming experiences they offer end users and on the resources of the media systems that they consume. In order to gain insights into current streaming services and thus provide guidance on designing resource-efficient and high quality streaming media systems, we have collected a large streaming media workload from thousands of broadband home users and business users hosted by a major ISP, and analyzed the most commonly used streaming techniques, such as automatic protocol switch, Fast Streaming, MBR encoding, and rate adaptation. Our measurement and analysis results show that with these techniques, current streaming systems tend to over-utilize CPU and bandwidth resources to provide better services to end users, which may not be a desirable and effective way to improve the quality of streaming media delivery. Motivated by these results, we propose and evaluate a coordination mechanism that effectively takes advantage of both Fast Streaming and rate adaptation to better utilize the server and Internet resources for streaming quality improvement.
Categories and Subject Descriptors
C.2 [Computer Communication Networks]: Distributed Systems

General Terms
Measurement

Keywords
Traffic analysis, Multimedia streaming

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. IMC'06, October 25-27, 2006, Rio de Janeiro, Brazil. Copyright 2006 ACM ...$5.00.

1. INTRODUCTION
The Internet has witnessed the surge of multimedia content from many application areas such as education, medical research and practice, news media, and entertainment industries [3]. Although the majority of media traffic on the Internet is delivered via downloading, pseudo streaming, and P2P techniques, streaming service is superior in handling thousands of concurrent streams simultaneously, responding flexibly to network congestion, utilizing bandwidth efficiently, and delivering high quality performance [7]. Different from downloading or pseudo streaming small-sized video clips from a Web site such as YouTube [9], streaming long-duration and high-quality media objects on the Internet poses several unique challenges. First, streaming services usually require high and stable end-to-end bandwidth between a media server and its clients. Due to the lack of Quality of Service (QoS) guarantees on the packet-switching-based Internet, the quality of Internet media streaming may degrade significantly due to bandwidth fluctuations during a streaming session, especially when delivering high quality video such as HDTV. Second, the connection speed of Internet end users ranges from slow dial-up connections to T1 or high-speed cable network services.
Thus, a fixed encoding rate for a media object is not desirable for clients with diverse network connections. Third, streaming media users always expect a small startup latency. However, due to dynamics in the media server load and network bandwidth, a client may experience a prolonged startup delay. In addition, the filling of the client play-out buffer, which is used to smooth jitter caused by network bandwidth fluctuations, further increases the user's waiting time. Given these challenges, many studies have been conducted on the effective utilization of server and Internet resources to deliver high quality streaming media. Today, more than 90% of streaming media traffic on the Internet is delivered either through Windows media services or RealNetworks media services [7]. These commercial streaming services have adopted various techniques to address the above challenges and to satisfy the ever-increasing quality demands of users, such as TCP and HTTP based streaming, Fast Streaming, multiple bit rate (MBR) encoding, and rate adaptation. Due to the wide deployment of Network Address Translation (NAT) routers and firewalls that often prevent UDP packet traversal, TCP-based streaming has been widely used and now accounts for the majority of Internet streaming traffic [7, 26]. Fast Streaming [2] is a group of techniques supported by the Windows media service, which aggressively utilizes Internet bandwidth by delivering a media object at a rate much higher than its encoding rate, in order to minimize the user-perceived startup latency and guard against potential network bandwidth fluctuations. MBR encoding encodes a media object with multiple bit rates so that the streaming server can deliver the same content with different quality to clients with different network connections. MBR encoding also enables dynamic stream switching among the rates encoded in the object during a user session, in order to adapt to the currently available bandwidth; this is called Intelligent Streaming [4] in the Windows media service and SureStream [7] in the RealNetworks media service. In spite of the wide deployment of these techniques, existing measurement studies of Internet streaming media mainly focus on the characterization of media access patterns and user behaviors, such as [, 2, 3, 5, 3], which is helpful for the design of media delivery systems such as server clusters and media proxies [, 3]. However, the mechanisms of these commonly and practically used streaming techniques themselves, and their effects on improving Internet streaming quality, have not yet been thoroughly studied; such a study is necessary to understand the state of the art of Internet streaming media and to provide guidance on future Internet streaming services.
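The dynamic stream switch that MBR encoding enables can be sketched with a toy rate-selection rule. This is only an illustration: the rates and the "highest rate that fits" policy are assumptions, not the actual Intelligent Streaming or SureStream logic.

```python
def select_stream(encoding_rates_kbps, available_bandwidth_kbps):
    """Pick the highest-rate stream that fits within the available bandwidth;
    fall back to the lowest-rate stream when none fits.

    encoding_rates_kbps: the bit rates encoded in an MBR object (assumed values).
    """
    feasible = [r for r in sorted(encoding_rates_kbps)
                if r <= available_bandwidth_kbps]
    return feasible[-1] if feasible else min(encoding_rates_kbps)

# An MBR object encoded at 56, 128, 300, and 500 Kbps (illustrative rates):
rates = [56, 128, 300, 500]
print(select_stream(rates, 350))  # mid-session bandwidth of 350 Kbps -> 300
print(select_stream(rates, 40))   # congested link -> lowest stream, 56
```

During a session, the player would re-run this decision whenever its bandwidth estimate changes, switching among the streams encoded in the same object.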
Despite several experimental studies in lab environments on the Windows and RealNetworks media systems [4, 22], to the best of our knowledge, to date there is no comprehensive study on the delivery quality and resource utilization of these streaming techniques in the Internet environment. It is highly desirable for both streaming service providers and system designers to be guided by an insightful understanding of existing Internet streaming techniques. In order to investigate Internet streaming quality and the efficiency of resource utilization under the deployment of these techniques, in this work we have collected a 2-day streaming media workload from a large ISP in the United States. The workload covers thousands of broadband home users and hundreds of business users who access both on-demand and live streaming media. Through extensive analysis of the majority of TCP-based streaming traffic on the Internet, we have the following observations:

We found that the overhead of protocol rollover plays an important role in user-perceived startup latency, and thus may have affected the way that media is served by content providers. When UDP is not supported, the overhead of protocol rollover from UDP to TCP contributes a non-trivial delay to the client startup latency. More than 22% of protocol rollovers take longer than 5 seconds.

By aggressively utilizing Internet network bandwidth, Fast Streaming shows both positive and negative features. Although Fast Streaming can help smooth re-buffering jitter, it over-supplies media data to end users by about 55%, and consumes more CPU resources, which leads to a longer server response time.

MBR encoding is widely used in media authoring, and nearly half of streaming video and audio objects on the Internet are MBR-encoded. However, the rate adaptation functionality of MBR is poorly utilized, particularly when Fast Streaming is used.
Overall, on the Internet, about 3% of home and 4% of business streaming sessions suffer various quality degradations, such as rebuffering, thinning, or switching to a lower quality stream.

Our measurement and analysis results show that with these techniques, current streaming services tend to over-utilize CPU and bandwidth resources to provide better service to end users, which may not be a desirable and effective way to improve the quality of streaming media delivery. Furthermore, the Fast Streaming technique does not work with rate adaptation, resulting in even worse user experiences than normal TCP-based streaming upon long-term network congestion. Motivated by these results, we propose Coordinated Streaming, a mechanism that effectively coordinates caching and rate adaptation in order to improve streaming quality with efficient utilization of server and Internet resources. The potential of such a mechanism for streaming quality improvement is evaluated accordingly.

The remainder of this paper is organized as follows. Section 2 describes our trace collection and processing methodology. Section 3 presents an overview of our collected workload. The measurement and analysis of the delivery quality and resource utilization of streaming media services are presented in Sections 4, 5, and 6. The mechanism coordinating caching and rate adaptation is discussed in Section 7. Related work is outlined in Section 8. Finally, we make concluding remarks in Section 9.

2. TRACE COLLECTION AND PROCESSING METHODOLOGY
The prevailing streaming protocols on the Internet are RTSP [25] and MMS [5]. In RTSP streaming, the client and the server exchange streaming commands via RTSP, running on TCP. The media data packets and streaming control/feedback packets are delivered via RTP/RTCP [24] (as in Windows and QuickTime media services) or RDT [6] (RealNetworks media services), running on UDP or TCP.
In MMS streaming, all streaming commands and control packets between a client and a server are exchanged via MMS in the same TCP connection, and the media data can be delivered over UDP or TCP. For both RTSP and MMS streaming, when TCP is used to deliver media data, the media and control packets are interleaved with RTSP or MMS commands in a single TCP connection, instead of using two separate TCP connections. In addition to RTSP and MMS, media can also be streamed through HTTP [3]. Different from HTTP downloading (also known as pseudo streaming [7]), HTTP streaming uses the HTTP protocol to deliver both RTSP commands and media data. In Microsoft HTTP streaming, the RTSP headers are embedded in the Pragma headers of HTTP messages. In RealNetworks and QuickTime HTTP streaming, the RTSP commands are embedded in HTTP message bodies with the base64 encoding format.
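The protocol distinctions above can be illustrated with a small classifier over captured requests. This is a simplified sketch: the two port constants and the Pragma/base64 checks are assumptions standing in for the full detection logic, not the study's actual Gigascope filters.

```python
import base64

RTSP_PORT, MMS_PORT = 554, 1755  # standard control ports for RTSP and MMS
                                 # (assumption: the real filter covers a longer list)

def classify(dst_port, http_headers=b"", http_body=b""):
    """Classify one captured request as RTSP, MMS, HTTP-tunneled streaming,
    or plain HTTP (a simplified heuristic)."""
    if dst_port == RTSP_PORT:
        return "rtsp"
    if dst_port == MMS_PORT:
        return "mms"
    # Microsoft HTTP streaming: RTSP headers ride inside HTTP Pragma headers.
    if b"Pragma:" in http_headers:
        return "http-streaming-wm"
    # RealNetworks/QuickTime HTTP streaming: base64-encoded RTSP in the body.
    if http_body:
        try:
            if b"RTSP" in base64.b64decode(http_body, validate=True).upper():
                return "http-streaming-rtsp"
        except Exception:
            pass
    return "http"
```

A packet-level pipeline would apply such a check per connection to separate tunneled streaming sessions from ordinary Web downloads sharing the same ports.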

Table 1: Home User Workload Overview
(Content Type / Product Type / Number of Requests / Traffic (GB) TCP/UDP)
on-demand audio: WM 28,2 5.86/9; RM 9,39 .79/2.26; QT 244 ./.82
on-demand video: WM 67,2 5.2/24; RM 2,7 6.25/7.3; QT 3 ./.34
live audio: WM , /6.69; RM ,64 5/2.39; QT 4 ./
live video: WM /2.85; RM /3.9; QT 6 ./.3

Table 2: Business User Workload Overview
(Content Type / Product Type / Number of Requests / Traffic (GB) TCP/UDP)
on-demand audio: WM 9, /.; RM , /.4; QT 5 ./.
on-demand video: WM 5, /3.3; RM , /2; QT 8 ./.
live audio: WM /.; RM /.5; QT
live video: WM 5 5/.; RM 7 /.; QT

In this study, we collected streaming media packets in a data center of a major ISP from : (Friday) to :3 (Tuesday), using the Gigascope appliance [6]. The data center hosts servers for thousands of business companies, and provides Internet access services for a large cable company. Gigascope runs at a site close to the end users (broadband home users and business users). To collect streaming packets of RTSP/MMS requests, Gigascope captures all TCP packets from/to ports , 77-77, 97, and 755. According to a recent measurement study that collects RTSP/MMS packets based on keyword matching [7], our port number selection covers 97.3% of RTSP/MMS streaming requests. Meanwhile, we also collected UDP streaming traffic via ports 54-55, , and 7-7, which are the most popular ports for UDP streaming. For UDP streaming traffic over other port numbers due to network address translation (NAT), we calculate the traffic volume based on the summary information that a client reports to its server when a streaming session is terminated. Compared to existing studies based on server logs [2, 5, 26, 27, 3], in which only the summary information of streaming sessions is available, our study is conducted at the packet level, which facilitates more detailed analysis of the quality and resource utilization of various Internet streaming techniques. The initial trace processing is as follows.
We first grouped TCP packets into TCP connections, based on the IP address, port number, and TCP SYN/FIN/RST flags. Then we extracted the RTSP/MMS commands from each streaming request. Based on the analysis of these commands, we identified and parsed media data and streaming control packets from the TCP or corresponding UDP streams, and dumped the corresponding RTP/RTCP, RDT, and MMS packet headers. Finally, we identified home users and business users in our traces based on IP prefix matching. Our trace collection and processing methodology has been validated by extensive experiments on various media server and player products, including Windows Media Player and Windows Server 2003, Real Player and Helix Server, and QuickTime Player and Darwin Server. All these products have extensions to the standard RTSP and RTP protocols. Due to the lack of documentation, we reverse-engineered proprietary protocols by capturing and analyzing media traffic under different environments, with the help of tools such as tcpdump/windump, ethereal, and the NIST Net network emulator. (About 2.6% of RTSP/MMS/HTTP streaming requests use port 80 or 8080, which are hard to distinguish from regular HTTP downloading traffic. We exclude this traffic in this study.)

3. TRAFFIC OVERVIEW
We captured 26 GB of streaming data (compressed in gzip format) during the 2-day period. In our workloads, there are 7,59 home users accessing,898 servers in 2,9 requests, and 29 business users accessing 9 servers in 8,742 requests. Both users and servers are identified by their public IPs, and the real number of business users would be much larger due to the usage of NAT (a business IP may host up to 64 users, as shown in Section 4.2).

3.1 Streaming traffic by user communities
Tables 1 and 2 show the traffic breakdowns based on content types (audio/video, live/on-demand) and media service products in the home user and business user workloads, respectively.
In these tables, WM, RM, and QT denote Windows, RealNetworks, and QuickTime media services, respectively. In our workloads, most streaming traffic is delivered over Windows media services (8.7% and 85.5% of the requests in the home and business user workloads, respectively), with RealNetworks next (9.% and 4.4% of the requests in the home and business user workloads, respectively). Only a small fraction of streaming traffic is delivered over QuickTime. These tables also indicate that TCP is responsible for the majority of streaming traffic on the Internet, confirming previous studies such as [7, 26]. In the home user workload, 2.2% and 3.5% of the requests access live and on-demand audio objects, and .32% and 65.43% of the requests access live and on-demand video objects, respectively. Video is responsible for the majority of streaming media requested by home users. In contrast, in the business user workload, 4.5% and 58.77% of the requests access live and on-demand audio, while .3% and 36.43% of the requests access live and on-demand video, respectively. Audio is responsible for the majority of streaming media requested by business users. In addition, although the volume of live media traffic is far less than that of on-demand media traffic in both workloads, business users are more likely than home users to access live media, most of which is audio. Figure 1(a) and Figure 2(a) show the distribution of file length (in terms of playback duration) of on-demand audio and video objects in each streaming session requested by home and business users, respectively. These figures indicate that business users tend to request audio and video

objects with longer file lengths. Specifically, for audio objects, as shown in Figure 1(a), more than 7% of sessions in the business user workload request objects with a file length between 2 4 seconds, the typical duration of a pop song; in contrast, for home users, more than 5% of audio sessions request files with a length around 3 seconds, most of which are music preview samples. Figure 1(b) and Figure 2(b) further compare user playback durations of on-demand audio/video sessions in the home and business user workloads. As shown in Figure 1(b), for more than half of the on-demand audio sessions in the business user workload, the user playback duration is about 2 4 seconds, corresponding to the length of a typical pop song as in Figure 1(a). Figures 1(c) and 2(c) show the playback duration of live audio/video sessions in the home and business user workloads. From these figures, we can see that for both live and on-demand audio/video sessions, the playback duration of business users is much longer than that of home users (note that the x-axis is in log scale). The above results indicate that business users tend to listen to more audio on the Internet and tend to stick to the media content being played longer than home users do. However, looking into the URLs and the Referer headers of RTSP commands in the trace, we found that the majority of streaming media accessed by home users and by business users alike comes from news and entertainment sites.

[Figure 1: On-demand and live audio distributions in the home and business user workloads. (a) On-demand audio file length; (b) On-demand audio playback duration; (c) Live audio playback duration.]

[Figure 2: On-demand and live video distributions in the home and business user workloads. (a) On-demand video file length; (b) On-demand video playback duration; (c) Live video playback duration.]
Thus, the different access pattern of business users is not due to accesses of business-related media content, but rather due to the working environments of business users: audio is preferred possibly because it attracts less attention while they are working, and the long playback duration might be because their business work prevents them from frequently changing media programs.

Table 3: Traffic by different hosting services
(Hosting Service / Content Type / Home User Vol. (GB), Req. / Business User Vol. (GB), Req.)
Third Party audio: , | ,8
Third Party video: , | ,82
Self Hosting audio: .4 2, | ,56
Self Hosting video: , | ,

3.2 Streaming traffic by media hosting services
In general, there are two approaches that a content provider can use to deliver its streaming media. First, a content provider can host the streaming server itself; we call this self-hosting. Second, a content provider can seek help from a third party, such as a commercial content delivery network (CDN) or media delivery network (MDN), to avoid service management and hardware/software investments; we call this third-party hosting. For both self-hosting and third-party hosting, the portals of streaming media are usually on the Web sites of their content providers. We identified service providers in our workloads by their host names and IP addresses. The host names are extracted from the URLs of media objects encoded in RTSP/MMS packets. We have identified 24 third-party media services, including 22 CDNs/MDNs and 2 media services hosted by ISPs (we anonymize the company names due to customer privacy concerns). Table 3 shows the volume and number of requests of streaming traffic served by third-party media services and by self-hosting media services in the home and business user workloads, respectively. In the home user workload, third-party media services serve 56.8% of the traffic and 67.7%

of the requests. In the business user workload, third-party media services serve 5.3% of the traffic and 72.6% of the requests. The percentages of audio traffic served by third-party services are similar for home and business users, but business users request more video from self-hosting media services. Our further investigation finds that a substantial amount of such video traffic comes from a news site and a sports site outside the United States, which might be due to foreign employees in these business companies. In general, more than half of the streaming traffic is served by third-party hosting services in both workloads. Figures 3(a), 3(b), and 3(c) further show the traffic volume served by different service providers, ranked for third-party hosting services and for self-hosting services in the home and business user workloads, respectively. Most streaming traffic is served by the top five CDNs/MDNs and the top two self-hosting commercial sites (one video site and one well-known search engine site).

[Figure 3: Service providers ranked by traffic volume. (a) Third-party media services; (b) Self-hosting media services in the home user workload; (c) Self-hosting media services in the business user workload.]

4. PROTOCOL ROLLOVER AND USER STARTUP LATENCY
Although traditionally UDP is the default transport protocol for streaming media data transmission, in practice UDP is often shielded at the client side. Therefore, today streaming media data are often delivered over TCP or even HTTP. Among the three options, generally UDP is tried first upon a client request. If UDP is not supported, due to either server side or client side reasons, TCP is used instead. If TCP is not supported either, HTTP is used. Such a procedure is called protocol rollover, and is conducted automatically by the media player.
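The rollover order just described, UDP first, then TCP, then HTTP, can be sketched as follows; the probe callables are hypothetical stand-ins for the player's per-transport negotiation, which in reality involves SETUP round trips and timeouts.

```python
def negotiate_transport(try_udp, try_tcp, try_http):
    """Attempt transports in the player's rollover order: UDP, then TCP,
    then HTTP. Each argument is a callable returning True on success
    (hypothetical stand-ins for the real negotiation)."""
    for name, probe in (("udp", try_udp), ("tcp", try_tcp), ("http", try_http)):
        if probe():
            return name
    raise ConnectionError("no usable transport")

# A client behind a NAT router that drops inbound UDP rolls over to TCP:
print(negotiate_transport(lambda: False, lambda: True, lambda: True))  # tcp
```

The key cost, discussed next, is that each failed probe in this chain is only detected by a timeout, so every fallback step adds seconds to the user-perceived startup latency.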
Due to the wide deployment of NAT routers/firewalls in home user networks and small business networks, protocol rollover plays an important role in the user-perceived startup latency and may have affected the way that media is served by content providers. In this section, we first analyze the impact of protocol rollover on the user-perceived startup latency, then investigate rollover avoidance in these streaming media services.

4.1 Startup latency due to protocol rollover
Protocol rollover in RTSP works as follows (protocol rollover in MMS is similar). Upon a client request, the media player sends a SETUP command with a Transport header, specifying the transport protocol it prefers. If UDP is supported, the port numbers for receiving data and sending feedback are also specified. Then, in the Transport header of the SETUP reply message, the streaming server returns the protocol it selects. If it selects UDP, the port numbers for sending data and receiving feedback are also returned. If the player requests UDP but the server does not support it, the server responds directly with the protocol it supports (i.e., TCP), and the protocol switches without additional overhead. However, if the server supports UDP but the player is shielded by a NAT router/firewall, the incoming UDP traffic may not be able to go through the router. After a timeout, the player has to terminate the current RTSP connection and send a new RTSP request in a new TCP connection, specifying TCP as the transport protocol in the SETUP command. As a result, such a negotiation procedure for protocol rollover takes a non-trivial amount of time.
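For reference, a Transport header of the kind exchanged in this SETUP negotiation can be parsed as below. This is a sketch: real Transport headers carry more parameters than shown, and the example values are illustrative.

```python
def parse_transport(header_value):
    """Parse an RTSP Transport header value into (lower_transport, params).

    Example value: 'RTP/AVP/TCP;unicast;interleaved=0-1'
    """
    fields = header_value.split(";")
    spec = fields[0].split("/")          # e.g. ['RTP', 'AVP', 'TCP']
    # UDP is the default lower transport when none is given explicitly.
    lower = spec[-1].upper() if len(spec) > 2 else "UDP"
    params = {}
    for field in fields[1:]:
        key, _, value = field.partition("=")
        params[key.strip()] = value.strip()
    return lower, params

proto, params = parse_transport("RTP/AVP;unicast;client_port=5004-5005")
print(proto, params["client_port"])  # UDP 5004-5005
```

A TCP rollover request would instead carry something like `RTP/AVP/TCP;unicast;interleaved=0-1`, which this parser reports as lower transport TCP.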
Thus, the startup latency of a user session can be decomposed into three parts: (1) protocol rollover time, the duration from the beginning of the first RTSP/MMS request to the beginning of the last RTSP/MMS request in the user session; (2) transport setup time, the duration from the beginning of the last RTSP/MMS request (or the first request if there is no protocol rollover) to the time when the transport protocol setup succeeds; and (3) startup buffering time, the time to fill the play-out buffer of the media player, from the transport setup success time to the playback start time. In our workloads, we have found that although most user sessions with protocol rollover try UDP only once, some sessions may try UDP up to 3 times before switching to TCP. As the negotiation process of protocol rollover may take a non-trivial amount of time, protocol rollover increases the startup latency a user perceives. Assuming a five-second play-out buffer [] is used, Figure 4(a) and Figure 4(b) show the distribution of startup latency in Windows and RealNetworks media services, respectively, for RTSP sessions with protocol rollover in the home user workload (the distribution for the business user workload is similar). In Windows media services, more than 22% of the streaming sessions have a rollover time longer than 5 seconds, in addition to the normal startup latency due to transport setup and initial buffering. Figure 4(b) shows that RealNetworks media services have an even longer rollover time: more than 67% of the streaming sessions have a rollover time longer than 5 seconds. We also observe that in general the Windows media service has a much shorter buffering time than the RealNetworks media service. This is probably due to the higher buffering rate of Windows media streaming (see Fast Start in Section 5), since the object

encoding rates of Windows and RealNetworks media in our workloads are comparable. Figure 4(c) further compares the delay from the session beginning time to the transport setup completion time for sessions with and without protocol rollover in Windows media services in the home user workload. As shown in the figure, about 37% of the sessions with protocol rollover have a delay longer than 5 seconds. In contrast, only about 3% of the sessions without protocol rollover have a delay longer than 5 seconds.

[Figure 4: Protocol rollover increases startup latency of streaming sessions. (a) Startup time of rollover sessions in Windows media services; (b) Startup time of rollover sessions in RealNetworks media services; (c) Startup comparison of sessions with and without rollover in Windows media services.]

4.2 Protocol selection and rollover avoidance
In most client side media players, although the transport protocol of streaming media can be specified manually by a user, the default protocol is usually UDP. In Section 3, we also found that the majority of streaming traffic in our workloads is delivered over TCP. Thus, it is reasonable to expect that a large portion of user sessions experience protocol rollover. However, in the home user workload, only about 7.37% of streaming sessions try UDP first and then switch to TCP. In the business user workload, only about 7.95% of streaming sessions switch from UDP to TCP. These results imply that TCP is used directly, without protocol rollover, in most streaming sessions, despite the default protocol setting in the player. We analyze this phenomenon as follows. The Windows streaming service allows the specification of the transport protocol in the URL modifier at either the client side or the server side.
In Windows media streaming, the URL passed to a media player is either input by a user (client side action) or exported from a media meta file stored on, or dynamically generated by, a Web server (server side action). For example, rtspt means using TCP as the transport protocol, while rtspu means using UDP. We extracted URL modifiers from the summary of media playing information, which is sent by the client in an RTSP/MMS command when a session is terminated. In both the home user and business user workloads, we found that for more than 7% of the Windows streaming sessions, TCP is specified as the transport protocol by content providers. This explains why TCP-based media streaming is so prevalent on the Internet. The study in [26] suggests that NAT and firewall deployment has constrained the usage of UDP in streaming media applications. Our conjecture is that as content providers are generally aware of the wide deployment of NAT and firewalls, they actively use TCP to avoid any possible shielding or protocol rollover for end users. With such a configuration, even if UDP is supported at both the client side and the server side, the streaming media will still be delivered over TCP directly. To validate our conjecture, we further investigated the NAT usage of home users and business users with MMS streaming in our workloads. Different from RTSP, in MMS streaming a client reports its local IP address to its server in clear text. Extracting this information from the MMS workload, we found that most MMS users in the home and business user workloads report private IPs, indicating that they access the Internet through NAT. In the home user workload, about 98.3% of the MMS requests are initiated from clients shielded by NAT, and about 99.5% of the MMS clients are shielded by NAT. These two numbers are 89.5% and 88.% in the business user workload, respectively.
A NAT router hosts up to 3 MMS clients in the home user workload, and up to 64 MMS clients in the business user workload. Thus, for these clients, the TCP transmission specified on the server side effectively avoids protocol rollover, and significantly reduces user-perceived startup latency. On the other hand, RealNetworks media services try to avoid protocol rollover by using NAT traversal techniques. By reverse-engineering the protocol, we find that, different from the Windows media service, in which a server sends UDP packets to its client first through the port that the client reports in the SETUP command, in the RealNetworks media service a client sends UDP packets to its server first, so that the server can figure out the client's external UDP port number as converted by NAT. To distinguish different user sessions shielded by the same NAT, the server uses different UDP port numbers to listen for UDP packets coming from different sessions; these port numbers are generated by the server dynamically and sent to its clients in the replies to SETUP commands. As a result, UDP accounts for the majority of streaming traffic in the RealNetworks media service, and protocol rollover is less frequent than in the Windows media service. However, as indicated by Figure 4, once a protocol rollover happens, the rollover time in the RealNetworks media service is generally much longer than that in the Windows media service. Furthermore, this solution somewhat violates the standard RTSP specification, because the UDP port number that a client reports to its server in the SETUP command is intentionally discarded.
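The private-address test used above to flag NAT-shielded MMS clients can be expressed with the Python standard library; a minimal sketch, with illustrative sample addresses.

```python
import ipaddress

def behind_nat(reported_ip):
    """True when the client-reported local IP falls in a private (RFC 1918)
    range, implying the client reaches the Internet through NAT."""
    return ipaddress.ip_address(reported_ip).is_private

# A client reporting a 192.168.x.x local address is NAT-shielded;
# one reporting a public address is not (example addresses only).
print(behind_nat("192.168.0.5"))   # True
print(behind_nat("128.146.1.1"))   # False
```

Applied to the local IPs that MMS clients report in clear text, this check yields the per-client and per-request NAT fractions cited above.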

[Figure 5: Features of media delivered by Fast Cache and normal TCP streaming. (a) File length; (b) Encoding rate.]

5. FAST STREAMING
In early streaming media services, a media object is streamed at its encoding rate and a small play-out buffer is used to smooth streaming jitter. However, in practice, the play-out buffer may be exhausted, since the available bandwidth between a client and its server may fluctuate from time to time. This is particularly important for TCP-based streaming, in which the congestion control mechanism constrains the streaming rate. As we have shown in Section 4, content providers of Windows media services often use TCP-based streaming directly to avoid protocol rollover. In order to provide a high quality streaming experience to end users, Windows media services use Fast Streaming techniques [2], including Fast Start, Fast Cache, Fast Recovery, and Fast Reconnect 2. Both Fast Start and Fast Cache transmit media data at a rate higher than the media encoding rate 3. Fast Start can run over both TCP and UDP, while Fast Cache always runs over TCP. Fast Start is enabled by almost all Windows media servers in order to reduce the startup buffering time for clients. Basically, Fast Start transmits data to the client as fast as possible until the play-out buffer is filled. After the Fast Start period, Fast Cache streams media data until the entire object is delivered or the session is terminated by the user. In order to smooth out network bandwidth fluctuations, Fast Cache transmits media data to a client at a speed usually up to 5 times the media encoding rate, and the client maintains a growing buffer for the early-arrived data. Table 4 shows the total volume of streaming traffic delivered over Fast Cache (FC), normal TCP streaming excluding FC (TCP), and UDP streaming (UDP) in our workloads.
As the table shows, Fast Cache is widely used by both third-party hosting services and self-hosting services, accounting for 5.% and 2.% of the streaming traffic in the home and business user workloads, respectively. There is less streaming traffic delivered over Fast Cache in the business user workload than in the home user workload, because the access pattern of business users differs from that of home users. Business users access more audio and live media than home users. Audio media objects have low bit rates and usually do not need Fast Cache, while live media objects cannot be streamed over Fast Cache at all. The on-demand video they access is different too.

2 As Fast Recovery and Fast Reconnect events are rare in our workloads, we do not include them in this study.
3 Similarly, RealNetworks media services can also stream a media object at a rate higher than its encoding rate. Due to page limits, we only present the analysis results of Windows streaming services.

Table 4: Streaming traffic with Fast Cache delivery

              home user                    business user
  Method   third-party  self-hosting   third-party  self-hosting
  FC       55. GB       GB             3.75 GB      6.24 GB
  TCP      49.9 GB      5.27 GB        5.92 GB      6.79 GB
  UDP      35.5 GB      GB             4.23 GB      2 GB

Figure 5 compares the distributions of file length and encoding rate for objects delivered over Fast Cache supported streaming and over normal TCP streaming, in on-demand Windows video sessions of the home user workload. As shown in the figure, Fast Cache is more widely used for objects with longer file lengths and higher encoding rates. This is reasonable, because these objects are more sensitive to network bandwidth fluctuations, and thus Fast Cache can help more. Due to page limits, in the remainder of this section we only present our analysis of Windows media streaming for TCP-based on-demand video sessions with a playback duration longer than 3 seconds in the home user workload, unless specified otherwise (the results for the business user workload are similar).

5.1 Fast Cache smoothes bandwidth fluctuation

As mentioned before, Fast Cache supported streaming smoothes the fluctuation of network bandwidth by maintaining a growing buffer for the early-arrived data. To understand how Fast Cache utilizes the network bandwidth, we extract the Bandwidth header in a client's PLAY command, as well as the Speed header in this command and in the server's reply. The Bandwidth header contains the client-advertised bandwidth, which is either a default value set by the user in the media player, or measured in real time before media streaming with the so-called packet pair technique. The Speed header in a client's PLAY command specifies the delivery speed that the client requests, in multiples of the media encoding rate; this is usually equivalent to the client's available bandwidth. The speed that the server agrees to offer is specified in the Speed header of its reply to the PLAY command, and is usually no greater than 5 times the media encoding rate. However, the actual streaming speed may be smaller than the speed the server claims in the PLAY reply. Thus, we computed the average actual streaming rate during each user session based on the packet level information in our workload (the startup buffering period is excluded).
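The rate negotiation just described can be mirrored in a few lines. The header names follow the Windows media RTSP usage discussed above, but the message strings and parsing are our own simplified sketch, not a wire-accurate implementation:

```python
def parse_headers(msg):
    """Parse 'Header: value' lines of a simplified RTSP message."""
    headers = {}
    for line in msg.splitlines()[1:]:          # skip the request/status line
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip()] = value.strip()
    return headers

def negotiated_rates(play_cmd, play_reply, encoding_rate_kbps):
    """Return (client requested rate, server granted rate) in Kbps.
    Speed is expressed in multiples of the media encoding rate."""
    req_speed = float(parse_headers(play_cmd).get("Speed", "1.0"))
    srv_speed = float(parse_headers(play_reply).get("Speed", "1.0"))
    srv_speed = min(srv_speed, 5.0)            # servers rarely grant more than 5x
    return req_speed * encoding_rate_kbps, srv_speed * encoding_rate_kbps

# Hypothetical exchange for a 282 Kbps stream: client asks for 5x,
# server grants 4x.
play_cmd = "PLAY rtsp://example.com/clip RTSP/1.0\r\nBandwidth: 2048000\r\nSpeed: 5.0\r\n"
play_reply = "RTSP/1.0 200 OK\r\nSpeed: 4.0\r\n"
requested, granted = negotiated_rates(play_cmd, play_reply, encoding_rate_kbps=282)
# requested = 1410.0 Kbps, granted = 1128.0 Kbps
```

Even the granted rate is only an upper bound; as noted above, the actual streaming rate measured on the wire may be lower still.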

Figure 6: The client advertised bandwidth, client requested rate, and actual streaming rate: (a) Fast Cache supported streaming, (b) normal TCP streaming.

Figure 6(a) shows the distributions of the client advertised bandwidth, the client requested streaming rate (i.e., the product of the Speed value and the media encoding rate), and the actual streaming rate for streaming sessions with Fast Cache support. Figure 6(b) shows the advertised bandwidth and the actual streaming rate for normal TCP streaming. Comparing Figures 6(a) and 6(b), we find that the actual streaming rate in Fast Cache supported streaming is much closer to the client advertised bandwidth than that in normal TCP streaming 4. Thus, Fast Cache exploits the unutilized network bandwidth under the constraint of TCP congestion control. In other words, for normal TCP streaming, it is possible to deliver the same media at a higher transmission rate, or to deliver media with a higher encoding rate. In Windows media streaming, the default play-out buffer holds five seconds of media playback []. Upon network fluctuations, if the play-out buffer is empty, the client has to stop to buffer data, and a playback jitter occurs. With Fast Cache, the early-buffered data can sustain smooth playback much longer before rebuffering becomes necessary. We define the rebuffering ratio of a streaming session as the total rebuffering time over the total playback duration, which reflects the streaming quality that a user experiences. Figure 7(a) shows the rebuffering ratio of sessions streamed with and without Fast Cache support (denoted Fast Cache and normal TCP streaming in the figure, respectively). To make a fair comparison, we only consider sessions requesting video objects with an encoding rate between 2 4 Kbps. About 5% of the normal TCP streaming sessions suffer rebuffering, while only about 8.5% of the Fast Cache supported streaming sessions do.
Thus, Fast Cache can largely eliminate rebuffering in streaming sessions. We also observe that for Fast Cache supported streaming, about .8% of the sessions have a rebuffering ratio larger than 5%, while for normal TCP streaming, only about .9% of the sessions do. The reason is that when rebuffering happens, normal TCP streaming may switch to a lower rate stream if the media object is MBR encoded (see Section 6.1), thus avoiding further rebuffering. In contrast, stream switch is disabled in Fast Cache supported streaming, so rebuffering may happen repeatedly, resulting in a large rebuffering ratio (see Section 6.3). Fast Cache supported streaming also decreases the possibility of encountering network congestion by reducing the data transmission time.

4 The 2 Gbps client advertised bandwidth corresponds to the player's connection speed setting Mbps and above.

Figure 7(b) shows the distribution of data transmission duration for Fast Cache supported streaming sessions and normal TCP streaming sessions, respectively. We can see that although the media objects streamed with Fast Cache generally have longer file lengths and higher encoding rates, as shown in Figure 5, the transmission time of Fast Cache supported streaming is much shorter than that of normal TCP streaming (note that the x-axis is in log scale).

5.2 Fast Cache produces extra traffic

Fast Cache continuously delivers a media object at a rate higher than (usually up to five times) its encoding rate. In reality, the entire media object can thus be delivered completely in the middle of the user's playback. If the user stops in the middle, the pre-arrived data for the remaining part are wasted. Considering the well-known fact that most sessions access only the initial part of a video object [3], the extra traffic delivered by Fast Cache can be non-trivial, especially for large media objects such as movies. In this study, we compute the over-supplied traffic as follows.
The packet size and the time at which a packet should be rendered can be extracted directly from the RTP packet header. Based on the timestamps of the PLAY, PAUSE, and TEARDOWN commands, we obtain the user playback duration for each session. Assuming a default five-second play-out buffer [] on the client side, we compute the extra traffic for each session. Figure 8 shows the extra traffic caused by Fast Cache supported streaming and by normal TCP streaming, for all Windows streaming sessions in the home user workload. On average, Fast Cache over-supplies about 54.8% of the media data to clients due to clients' early terminations, while the over-supplied traffic is only about 4.6% for normal TCP streaming sessions. Figure 9 shows the distribution of the average transmission speed (in multiples of the media encoding rate) of Fast Cache supported and normal TCP streaming, respectively. In computing the average transmission speed, we only consider the data transmission after the Fast Start buffering period. The high data transmission speed in Fast Cache supported streaming explains the over-supplied data.

5.3 Server response time of Fast Cache

In addition to over-utilizing the network bandwidth by transmitting media at a rate much higher than its encoding rate, a streaming server running Fast Cache may also consume more CPU, memory, disk I/O, and other resources

than a streaming server not running Fast Cache. As a result, a request may have to wait longer to be served when a burst of requests arrives at the server.

Figure 7: Bandwidth fluctuation smoothed by Fast Cache: (a) rebuffering ratio (rebuffer time / play time), (b) data transmission duration.
Figure 8: Extra traffic produced by Fast Cache (over-transferred bytes / total played bytes).
Figure 9: Average data transmission speed.
Figure 10: The server response time to streaming requests: (a) third-party media service, (b) self-hosting media service.

We define the server response time of an RTSP/MMS request as the duration from the time instant when a server receives the first command from a client to the time instant when the server sends its reply. Since our workloads are collected by Gigascope at a site very close to end users, the timestamp of a captured packet can be regarded as the time instant when the client sends or receives that packet. We use the timestamps of TCP handshake packets to estimate the packet round trip time (RTT), and then compute the server response time. Figure 10(a) shows the distribution of the server response time for streaming requests served by third-party media services. We compare the response times of requests to servers with and without Fast Cache. For servers running Fast Cache, about 43% of the requests have a response time longer than . second, while for servers not running Fast Cache, only about 9% of the requests have a response time longer than . second. Figure 10(b) shows the corresponding distribution of the server response time for streaming requests served by self-hosting media services. For servers running Fast Cache, about 2% of the requests have a response time longer than . second, while for servers not running Fast Cache, only about 5% of the requests have a response time longer than . second.
These results indicate that the response time on servers running Fast Cache is statistically longer than that on servers not running Fast Cache. We also observe that the response time of the third-party hosting service is larger than that of the self-hosting service. As a commercial company, a third-party hosting service may want to fully utilize its server resources with many service subscribers. In contrast, self-hosting services are more dedicated and thus often less heavily loaded.

5.4 Server load of Fast Cache

To further investigate the system resources consumed by Fast Cache, we conducted experiments with Windows Server 2003 and the Windows media load simulator [8]. We ran Windows Server 2003 on a machine with a 2 GHz Pentium 4 CPU and 512 MB of memory, and ran the Windows media load simulator on a Windows XP client machine. The server machine and the client machine are connected through a 100 Mbps fast Ethernet switch. We generated two streaming video files with the Windows media encoder, one encoded at 282 Kbps and the other at 1.128 Mbps, both with a 2-minute playback duration. We duplicated each file with 5 copies, and saved each copy under a different name. We first ran 5 normal TCP streaming sessions for 5 minutes using the simulator, each requesting a different copy of the 282 Kbps video file simultaneously. Since the simulator does not support Fast Cache, we then ran 5 normal TCP streaming sessions requesting the 1.128 Mbps video, in order to simulate Fast Cache supported streaming of the 282 Kbps video (at 4x speed). This experiment was also conducted for 5 minutes, with each session requesting a different file copy simultaneously. In each experiment, the simulator recorded the CPU and memory usage reported by the server every second. The average bandwidth usage was recorded in the server logs. We repeated each experiment several times. Figure 11 shows the average usage of CPU and bandwidth of the server over the entire duration of the simulation (the

memory usages of Fast Cache and normal TCP streaming are very close and thus are not presented).

Figure 11: Server load comparison between Fast Cache and normal TCP streaming: (a) CPU consumption of all 5 sessions, (b) average bandwidth consumption per session.
Figure 12: Effectiveness of resource over-utilization (rebuffering ratio).

The bandwidth usage of Fast Cache is 3.67 times that of normal TCP streaming, while the CPU load of Fast Cache is 3.57 times that of normal TCP streaming. This indicates that the CPU consumed by Fast Cache is approximately proportional to the streaming delivery rate. Given that Fast Cache can deliver a media object at up to 5 times its encoding rate, Fast Cache increases server load significantly, and thus limits the scalability of a streaming server. In our workloads, the Windows media servers in the second largest media delivery network and in the largest self-hosting media service (a well-known search engine site) do not support Fast Cache at all (we anonymize their domain names due to customer privacy concerns), which might be due to concerns about the high resource demands of Fast Cache.

5.5 Effectiveness of resource over-utilization

Fast Cache delivers a media object to a client faster than the playing speed by over-utilizing bandwidth and CPU resources. However, streaming a media object at a rate higher than its encoding rate is only possible when the available bandwidth between a client and its server is large enough. Intuitively, when a media object is streamed at its encoding rate, the higher the average bandwidth between a client and its server relative to the encoding rate, the lower the possibility that performance degradation occurs during the playback.
To understand whether Fast Cache performs better than normal TCP-based streaming when the average bandwidth between a client and its server is large enough, we plot in Figure 12 the distribution of the rebuffering ratio for Fast Cache based streaming sessions and normal TCP-based streaming sessions in the home user workload, in which the media encoding rate of each stream is 2 32 Kbps and the client advertised bandwidth (extracted from the Bandwidth header) is at least 5 Kbps greater than the media encoding rate. Compared with Figure 7(a), the two curves in Figure 12 are very close, which means that although temporary network congestion may occur from time to time, a small play-out buffer performs well enough to smooth out bandwidth fluctuations during streaming when the average bandwidth is large enough. Thus, aggressively over-utilizing server and Internet resources is neither performance-effective nor cost-efficient under a high bandwidth condition. The higher the speed at which Fast Cache streams a media object, the less necessary that speed is for the client. Furthermore, even if no extra traffic is generated (assuming the media object is played completely in each session), the number of concurrent streams on a server is constrained by the streaming speed, which limits the server's capacity to serve bursty requests.

6. RATE ADAPTATION

In order to adapt to bandwidth fluctuations, major media services such as Windows media and RealNetworks media support three kinds of techniques for rate adaptation. Stream switch enables a server to dynamically switch among streams with different encoding rates for the same object, based on the available network bandwidth. This technique is called Intelligent Streaming in the Windows media service [4] and SureStream in the RealNetworks media service [7]. Stream thinning enables a server to send only key frames to the client, when no lower bit rate stream is available.
If the current bandwidth is not sufficient to transmit even the key frames, the server can send only audio to the client, which is called video cancellation.

6.1 MBR encoding and stream switch

To enable stream switch, the media object must be encoded with multiple bit rates (MBR): the encoder generates multiple streams with different bit rates for the same media content, and encapsulates all these streams together. Figures 13(a), 13(b), 13(c) and Figures 13(d), 13(e), 13(f) show the distributions of the number of streams encoded in on-demand and live media objects in the home user and business user workloads, respectively. For video objects, we show the numbers of audio streams and video streams in a file separately (Figures 13(b), 13(c) for on-demand objects and Figures 13(e), 13(f) for live objects). Because there are only a small number of live video objects in the business user workload, we do not present them in Figures 13(e), 13(f). These figures show that about 42% of the on-demand video objects in the home user workload are encoded with at least two video streams. The number of video streams in a video object is up to 2, and the number of video and audio streams together in a video object is up to 2. The number of streams in live audio objects is relatively small, but 3% and 28% of the objects in the home user and business user workloads are still encoded with at least two streams, respectively. These results indicate that the MBR encoding technique has been widely used in media authoring, which enables rate adaptation by dynamically switching among streams based on the available bandwidth. Stream switch in the RTSP protocol works as follows (stream switch in MMS has a similar procedure). When an RTSP session is established upon a client request, the media

player sends a DESCRIBE command to the server, asking for the description of the requested media object. In the reply to the DESCRIBE, the server sends the media description using SDP [9], including a description of each video/audio stream encapsulated in the media object. The client then specifies the stream it desires in the SETUP (Windows media service) or SET_PARAMETER (RealNetworks media service) command, based on its currently available bandwidth. The server delivers the requested stream upon receiving the PLAY command from the client. If, during playback, the available bandwidth drops below the media encoding rate, the play-out buffer will be drained. In this case, the media player may send a request asking the server to switch to a lower rate stream. In Intelligent Streaming (Windows media service), the media player sends a SET_PARAMETER with an SSEntry message body via RTSP, specifying the current stream and the stream to switch to. In SureStream (RealNetworks media service), the client sends a SET_PARAMETER command with an UnSubscribe header to cancel the current stream and a Subscribe header to switch to the new stream. We extracted all related information from the RTSP/MMS commands and analyzed these stream switches.

Figure 13: MBR encoding in the home and business user workloads: (a) on-demand audio objects, (b) audio in on-demand video objects, (c) video in on-demand video objects, (d) live audio objects, (e) audio in live video objects, (f) video in live video objects.
Figure 14: Stream switch: (a) stream switch latency, (b) low quality duration.
Figure 15: Stream thinning: (a) thinning duration, (b) thinning interval.
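The switch exchange described above can be sketched in code. The message formats below are simplified stand-ins (real SSEntry bodies and Subscribe headers are more involved, and the URL and session ID are hypothetical), so treat this as an illustration of the control flow rather than a wire-accurate implementation:

```python
def build_switch_request(style, session_id, cur_stream, new_stream, cseq):
    """Build a simplified rate-switch request.
    style: 'intelligent' (Windows media, SSEntry body) or
    'surestream' (RealNetworks, UnSubscribe/Subscribe headers)."""
    if style == "intelligent":
        body = f"SSEntry: old={cur_stream} new={new_stream}"   # simplified body
        return (f"SET_PARAMETER rtsp://example.com/clip RTSP/1.0\r\n"
                f"CSeq: {cseq}\r\nSession: {session_id}\r\n"
                f"Content-Length: {len(body)}\r\n\r\n{body}")
    elif style == "surestream":
        return (f"SET_PARAMETER rtsp://example.com/clip RTSP/1.0\r\n"
                f"CSeq: {cseq}\r\nSession: {session_id}\r\n"
                f"UnSubscribe: stream={cur_stream}\r\n"
                f"Subscribe: stream={new_stream}\r\n\r\n")
    raise ValueError(style)

def pick_stream(streams_kbps, available_kbps):
    """Pick the highest encoding rate not exceeding the available
    bandwidth; fall back to the lowest stream otherwise."""
    fitting = [r for r in streams_kbps if r <= available_kbps]
    return max(fitting) if fitting else min(streams_kbps)

# An MBR object with three video streams; bandwidth drops to 300 Kbps.
streams = [109, 282, 548]
target = pick_stream(streams, available_kbps=300)    # -> 282
req = build_switch_request("surestream", "12345678", 548, target, cseq=3)
```

The stream-selection policy here is a plausible one, not one mandated by either protocol; real players combine buffer state with bandwidth estimates when deciding whether to switch.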
To characterize the overhead and frequency of stream switches, we define the switch latency as the freezing duration between the end of the old stream and the beginning of the new stream, during which a user has to wait for buffering. We also define the low quality duration of a streaming session as the total playback time of streams with lower rates (relative to the highest encoding rate at which the content is transmitted in this session). Assuming a five-second play-out buffer [], Figure 14(a) and Figure 14(b) show the distributions of stream switch latency and low quality duration in the home and business user workloads, respectively. As shown in Figure 14(a), about 3% 4% of the stream switches have a switch latency greater than 3 seconds, and about % 2% of the stream switches have a switch latency greater than 5 seconds, which is non-trivial for end users. In Figure 14(b), we observe that about 6% of the sessions have a low quality duration of less than 3 seconds, and 85% of the low quality durations are shorter than 4 seconds.

6.2 Stream thinning and video cancellation

Stream thinning works in a similar way to stream switch. To characterize the quality degradation due to stream thinning, we define the thinning duration as the time from the thinning command to the un-thinning command or the end of playback, which reflects the quality degradation time that a user suffers. We also define the thinning interval as the interval between two consecutive stream thinning events, which reflects the frequency of such quality degradations. Figure 15(a) and Figure 15(b) show the thinning durations and intervals for video sessions longer than 3 seconds in the home and business user workloads, respectively. As shown in Figure 15(a), more than 7% of the thinning durations are shorter than 3 seconds. Figure 15(b) shows that most (7% in the home user workload

and 82% in the business user workload) of the thinning intervals are longer than 3 seconds. When the bandwidth is too low to transmit even the key frames of the video stream, the client may send a TEARDOWN command to cancel the video stream, after which the server sends audio only. When the bandwidth increases, the client may set up and request the video stream again.

Figure 16: Rebuffering duration for Fast Cache and normal TCP streaming sessions with rebuffering (home user workload only).
Figure 17: Comparison of playback durations for sessions with and without quality degradations: (a) home user workload, (b) business user workload.

Table 5: Summary of streaming quality

  streaming quality     duration > 3 sec       duration > 3 sec
                        home      biz          home      biz
  smooth playback       87.6%     59.96%       56.82%    9.73%
  rebuffering only      8.65%     8.9%         32.87%    3.97%
  stream switch         3%        6.3%         .4%       37.42%
  stream thinning       .94%      3.42%        4.73%     6.8%
  video cancellation    .52%      .4%          4.8%      4.8%

6.3 Summary of Internet streaming quality

According to our extensive trace analysis and real experiments, Fast Cache does not support rate adaptation in practice. In a streaming session with Fast Cache enabled, the client never requests to switch streams after the initial stream selection in the SETUP command, even if there is a more suitable stream matching the decreased/increased bandwidth during playback. Thinning and video cancellation are also disabled when Fast Cache is enabled. As a result, when the bandwidth drops below the encoding rate, Fast Cache supported streaming behaves like pseudo streaming [7]: the player stops to buffer data for a while, then continues to play the media for about five seconds (the play-out buffer size), and this procedure repeats. With such a configuration, if a sudden network congestion happens and lasts for a long time, the streaming quality of Fast Cache supported streaming can be even worse than that of normal TCP streaming.
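This stall-and-play cycle, and the rebuffering ratio it produces, can be illustrated with a toy simulation. The five-second buffer matches the default mentioned above, while the bandwidth trace and the one-second time step are arbitrary illustration values:

```python
def simulate_pseudo_streaming(bandwidth_kbps, encoding_kbps, buffer_target_s=5.0):
    """Step through a bandwidth trace (one sample per second) without
    rate adaptation: the player consumes 1 s of media per second while
    data arrives at bandwidth/encoding seconds of media per second;
    when the buffer empties, playback stalls until it refills.
    Returns the rebuffering ratio (stall time / play time)."""
    buffered = buffer_target_s          # start with a full play-out buffer
    play_time = stall_time = 0.0
    for bw in bandwidth_kbps:
        arrived = bw / encoding_kbps    # seconds of media received this second
        if buffered >= 1.0:             # enough media to play this second
            buffered += arrived - 1.0
            play_time += 1.0
        else:                           # buffer exhausted: stall and refill
            buffered += arrived
            stall_time += 1.0
    return stall_time / play_time if play_time else 0.0

# 300 Kbps stream; bandwidth collapses to 50 Kbps for 40 s mid-session,
# so the session oscillates between short bursts of playback and stalls.
trace = [400] * 30 + [50] * 40 + [400] * 30
ratio = simulate_pseudo_streaming(trace, encoding_kbps=300)
```

With rate adaptation, the same trace could instead trigger a single switch to a lower bit rate stream, which is exactly the behavior Fast Cache forgoes.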
Figure 16 shows that when rebuffering happens, the rebuffering duration of Fast Cache supported streaming is much longer than that of normal TCP streaming in the home user workload, because Fast Cache cannot switch to a lower rate stream upon network congestion. Figure 17(a) and Figure 17(b) show the distributions of the playback duration of TCP-based video streaming sessions longer than 3 seconds in the home and business user workloads, respectively. The three curves in each figure denote all sessions, sessions without quality degradations, and sessions with quality degradations (including rebuffering, stream switch, stream thinning, and video cancellation), respectively. We can see that for sessions with longer durations, degradation happens with a higher probability. For example, in the business user workload, 88% of the sessions with quality degradations have a duration longer than seconds, while 58% of the sessions without quality degradations do. Table 5 further shows the breakdowns of sessions with and without quality degradations for TCP-based video streaming sessions longer than 3 seconds and longer than 3 seconds, in the home and business user workloads, respectively. We can see that quality degradation happens less frequently in the home user workload than in the business user workload, which may be due to the longer playback durations of business users, as shown in Figure 17. For sessions longer than 3 seconds, 3% 4% of the video sessions still have quality degradations due to rebuffering, stream switch, stream thinning, or video cancellation. For sessions longer than 3 seconds, the quality is even worse. Further investigation shows that in a significant number of video sessions with rebuffering, the requested media objects are MBR encoded, and the lack of stream switch is largely due to the use of Fast Cache, which disables rate adaptation.
In conclusion, the quality of media streaming on the Internet leaves much to be improved, especially for sessions with longer durations.

7. DISCUSSION: COORDINATING CACHING AND RATE ADAPTATION

Fast Cache and rate adaptation are two commonly and practically used techniques that improve the experience of streaming media users from different perspectives. Fast Cache aggressively buffers media data in advance, at a rate higher than the media encoding rate, aiming to absorb the streaming jitter caused by network congestion. In contrast, rate adaptation conservatively switches to a lower bit rate stream upon network congestion. As shown in our analysis, both techniques have their merits and limits. Fast Cache has its problems, such as increasing server load and producing extra traffic. On the other hand, the latency of stream switch is non-trivial in most sessions, due to the small size of the play-out buffer. Combining the merits of both techniques, in this section we discuss Coordinated Streaming, a mechanism that coordinates caching and rate adaptation. In this scheme, an

upper bound and a lower bound are applied to the play-out buffer of the client player. The upper bound prevents aggressive data buffering, while the lower bound eliminates the stream switch latency. When a streaming session starts, the server transmits data to the client as fast as possible until the lower bound is reached. Then the playback begins, and the client continues to buffer data at the highest possible rate until the buffer reaches its upper bound. With a full buffer, the client buffers data at the media encoding rate, and the buffer is kept full. When network congestion occurs, the client may receive data at a rate lower than the object encoding rate, and the buffer is drained. If the network bandwidth recovers before the buffer drops below its lower bound, the client requests data at a higher rate to refill the buffer. Otherwise, the client switches to a lower rate stream. The selection of the lower rate stream is based on the following criterion: over a typical bandwidth fluctuation period, the current bandwidth should be able to maintain normal playback of the lower rate stream and transmit extra data to fill the buffer back to its upper bound. When the network bandwidth increases, the client may switch to a higher encoding rate stream.

Figure 18: Evaluation of Coordinated Streaming: (a) rebuffering ratio, (b) over-supplied data, (c) switch handoff latency.

We conducted an idealized experiment as a proof of concept of this scheme. We set the lower bound of the buffer size to 5 seconds, to cover the normal stream switch latency as well as the initial buffering duration, since the default play-out buffer size is 5 seconds and the average stream switch latency is also about 5 seconds.
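A minimal simulation of the buffer policy described above may help fix ideas. The parameter names, the bandwidth trace, and the 30-second upper bound are our own illustrative choices; this is a sketch of the state machine, not the simulator used in our evaluation:

```python
def coordinated_streaming(bandwidth_kbps, streams_kbps, low_s=5.0, high_s=30.0):
    """Simulate the coordinated buffer policy on a per-second bandwidth
    trace: fill fast up to high_s seconds of media, refill on dips, and
    switch down only when the buffer falls below low_s. Returns the
    final stream rate and the number of downward switches."""
    rate = max(streams_kbps)            # start with the highest rate stream
    buffered, switches = high_s, 0      # assume startup buffering is done
    for bw in bandwidth_kbps:
        arrived = min(bw, 5 * rate) / rate  # server sends at most 5x encoding rate
        buffered = min(high_s, buffered + arrived - 1.0)  # play 1 s per second
        if buffered < low_s:
            # Pick the highest stream the current bandwidth can sustain
            # with surplus left over to refill the buffer.
            fitting = [r for r in streams_kbps if r < bw]
            if fitting and max(fitting) < rate:
                rate = max(fitting)
                switches += 1
    return rate, switches

streams = [109, 282, 548]
# Bandwidth collapses from 700 to 150 Kbps for a long stretch: the large
# buffer rides out the first half-minute, then one clean switch occurs.
trace = [700] * 20 + [150] * 60
final_rate, n_switches = coordinated_streaming(trace, streams)
```

Because the switch is triggered while several seconds of media remain buffered, playback continues through the handoff, which is how the scheme hides the switch latency measured in Section 6.1.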
The upper bound of the buffer size is set to 3 seconds, considering the typical network fluctuation periods that may affect streaming quality, such as the low quality duration, the thinning duration, and the thinning interval (Figures 14(b), 15(a), and 15(b)). In a practical system, the lower and upper bounds of the buffer size should be adaptively tunable based on these quality degradation events. However, we will show that even with the above simple configuration, the streaming quality can be effectively improved and the over-supplied traffic can be significantly reduced. We simulated the Coordinated Streaming scheme based on the packet level information of Fast Cache supported streaming sessions, and compared the quality and bandwidth usage of this scheme with those of Fast Cache supported streaming and normal TCP-based streaming. For a fair comparison, we only consider video sessions that request objects with 2 4 Kbps encoding rates for a duration longer than 3 seconds in the home user workload. Figure 18(a) shows the rebuffering ratio of Fast Cache supported streaming, normal TCP streaming, and Coordinated Streaming. As shown in this figure, the rebuffering ratio of Coordinated Streaming is close to zero. The fraction of normal TCP streaming sessions with large rebuffering ratios is close to, or even smaller than, that of Fast Cache supported streaming sessions, because many of them use rate adaptation to avoid rebuffering. Figure 18(b) shows the over-transferred data of the three schemes; we can see that Coordinated Streaming reduces the over-supplied traffic produced by Fast Cache by 77%, although it does not match normal TCP streaming. Figure 18(c) shows that the switch handoff latency of Coordinated Streaming is nearly zero, much less than that of normal TCP streaming. Furthermore, the number of stream switches in our scheme is only 33.4% of that in normal TCP-based streaming.

8. RELATED WORK

Existing measurement studies have analyzed Internet streaming traffic in different environments and from different perspectives. Li et al.
[2] characterized the streaming media stored on the Web, while Mena et al. [2] and Wang et al. [29] presented empirical studies of Real audio and Real video traffic on the Internet, respectively. These studies characterized the packet size, data rate, and frame rate patterns of streaming media objects. Almeida et al. [] and Chesire et al. [3] studied client session durations, object popularities, and sharing patterns based on workloads collected from an educational media server and a university campus network, respectively. Cherkasova et al. [2] characterized the locality, evolution, and life span of accesses in enterprise media workloads. Yu et al. [3] studied the user behavior of large-scale video-on-demand systems. Padhye et al. [23] and Costa et al. [5] characterized client interactivity in educational and entertainment media sites, while Guo et al. [8] analyzed the delay of jump accesses for video playing on the Internet. Live streaming media workloads have also been studied in recent years. Veloso et al. [27] characterized a live streaming media workload at three increasingly granular levels, namely clients, sessions, and transfers. Sripanidkulchai et al. [26] analyzed a live streaming workload in a large content delivery network. However, these on-demand and live streaming media measurements mainly concentrated on the characterization of media content, access patterns, user activities, etc. So far, few studies have focused on the mechanism, quality, and resource utilization of streaming media delivery on the Internet. Chung et al. [4] and Nichols et al. [22] conducted experimental studies in a lab environment on the responsiveness of RealNetworks and Windows streaming media, respectively. Wang et al. [28] proposed a model to study the

performance of TCP-based streaming. In contrast to these studies, we analyzed the delivery quality and resource utilization of streaming techniques based on a large scale Internet streaming media workload.

9. CONCLUSION

In this study, we have collected a 2-day streaming media workload from a large ISP, including both live and on-demand streaming for both audio and video media. We have characterized the streaming traffic requested by different user communities (home users and business users) and served by different hosting services (third-party hosting and self-hosting). We have further analyzed several commonly used techniques in modern streaming media services, including protocol rollover, Fast Streaming, MBR encoding, and rate adaptation. Our analysis shows that with these techniques, current streaming services tend to over-utilize CPU and bandwidth resources to provide better services to end users, which may not be a desirable and effective way to improve the quality of streaming media delivery. We have proposed a coordination mechanism that combines the advantages of both Fast Streaming and rate adaptation to effectively utilize server and Internet resources for building a high quality streaming service. Our trace-driven simulation study demonstrates its effectiveness.

Acknowledgments

We thank the anonymous referees for their appreciation and constructive comments. William Bynum and Matti Hiltunen made helpful suggestions on an early draft of this paper. This work is partially supported by the National Science Foundation under grants CNS-4599 and CNS-5954/596.

10. REFERENCES

[1] Buffer settings in Windows Media Player.
[2] Fast Streaming with Windows Media 9 Series.
[3] HTTP streaming protocol.
[4] Microsoft Windows Media - Intelligent Streaming.
[5] MMS streaming protocol.
[6] Real data transport (RDT).
[7] RealProducer user guide.
[8] Windows Media load simulator.
[9] YouTube - broadcast yourself.
[10] J. M. Almeida, J. Krueger, D. L. Eager, and M. K. Vernon.
Analysis of educational media server workloads. In Proc. of ACM NOSSDAV, June 2001.
[11] S. Chen, B. Shen, S. Wee, and X. Zhang. Designs of high quality streaming proxy systems. In Proc. of IEEE INFOCOM, Mar. 2004.
[12] L. Cherkasova and M. Gupta. Characterizing locality, evolution, and life span of accesses in enterprise media server workloads. In Proc. of ACM NOSSDAV, May 2002.
[13] M. Chesire, A. Wolman, G. Voelker, and H. Levy. Measurement and analysis of a streaming media workload. In Proc. of USITS, Mar. 2001.
[14] J. Chung, M. Claypool, and Y. Zhu. Measurement of the congestion responsiveness of RealPlayer streaming video over UDP. In Proc. of the Packet Video Workshop, Apr. 2003.
[15] C. Costa, I. Cunha, A. Borges, C. Ramos, M. Rocha, J. Almeida, and B. Ribeiro-Neto. Analyzing client interactivity in streaming media. In Proc. of WWW, May 2004.
[16] C. Cranor, T. Johnson, and O. Spatscheck. Gigascope: a stream database for network applications. In Proc. of ACM SIGMOD, June 2003.
[17] L. Guo, S. Chen, Z. Xiao, and X. Zhang. Analysis of multimedia workloads with implications for Internet streaming. In Proc. of WWW, May 2005.
[18] L. Guo, S. Chen, Z. Xiao, and X. Zhang. DISC: Dynamic interleaved segment caching for interactive streaming. In Proc. of IEEE ICDCS, June 2005.
[19] M. Handley and V. Jacobson. SDP: Session description protocol. RFC 2327, Apr. 1998.
[20] M. Li, M. Claypool, R. Kinicki, and J. Nichols. Characteristics of streaming media stored on the Web. Nov. 2005.
[21] A. Mena and J. Heidemann. An empirical study of Real audio traffic. In Proc. of IEEE INFOCOM, Mar. 2000.
[22] J. Nichols, M. Claypool, R. Kinicki, and M. Li. Measurements of the congestion responsiveness of Windows streaming media. In Proc. of ACM NOSSDAV, June 2004.
[23] J. Padhye and J. Kurose. An empirical study of client interactions with a continuous media courseware server. In Proc. of ACM NOSSDAV, July 1998.
[24] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson. RTP: A transport protocol for real-time applications.
RFC 1889, Jan. 1996.
[25] H. Schulzrinne, A. Rao, and R. Lanphier. Real time streaming protocol (RTSP). RFC 2326, Apr. 1998.
[26] K. Sripanidkulchai, B. Maggs, and H. Zhang. An analysis of live streaming workloads on the Internet. In Proc. of ACM SIGCOMM IMC, Oct. 2004.
[27] E. Veloso, V. Almeida, W. Meira, A. Bestavros, and S. Jin. A hierarchical characterization of a live streaming media workload. IEEE/ACM Transactions on Networking, Sept. 2004.
[28] B. Wang, J. Kurose, P. Shenoy, and D. Towsley. Multimedia streaming via TCP: An analytic performance study. In Proc. of ACM Multimedia, Oct. 2004.
[29] Y. Wang, M. Claypool, and Z. Zuo. An empirical study of RealVideo performance across the Internet. In Proc. of the ACM SIGCOMM IMW, Nov. 2001.
[30] K. Wu, P. S. Yu, and J. Wolf. Segment-based proxy caching of multimedia streams. In Proc. of WWW, May 2001.
[31] H. Yu, D. Zheng, B. Y. Zhao, and W. Zheng. Understanding user behavior in large scale video-on-demand systems. In Proc. of EuroSys, Apr. 2006.


More information

Analysis of IP Network for different Quality of Service

Analysis of IP Network for different Quality of Service 2009 International Symposium on Computing, Communication, and Control (ISCCC 2009) Proc.of CSIT vol.1 (2011) (2011) IACSIT Press, Singapore Analysis of IP Network for different Quality of Service Ajith

More information

Configuring and Monitoring the Client Desktop Component

Configuring and Monitoring the Client Desktop Component Configuring and Monitoring the Client Desktop Component eg Enterprise v5.6 Restricted Rights Legend The information contained in this document is confidential and subject to change without notice. No part

More information

Network Connection Considerations for Microsoft Response Point 1.0 Service Pack 2

Network Connection Considerations for Microsoft Response Point 1.0 Service Pack 2 Network Connection Considerations for Microsoft Response Point 1.0 Service Pack 2 Updated: February 2009 Microsoft Response Point is a small-business phone solution that is designed to be easy to use and

More information

SwiftBroadband and IP data connections

SwiftBroadband and IP data connections SwiftBroadband and IP data connections Version 01 30.01.08 inmarsat.com/swiftbroadband Whilst the information has been prepared by Inmarsat in good faith, and all reasonable efforts have been made to ensure

More information

Troubleshooting VoIP and Streaming Video Problems

Troubleshooting VoIP and Streaming Video Problems Using the ClearSight Analyzer to troubleshoot the top five VoIP problems and troubleshoot Streaming Video With the prevalence of Voice over IP and Streaming Video applications within the enterprise, it

More information

Using email over FleetBroadband

Using email over FleetBroadband Using email over FleetBroadband Version 01 20 October 2007 inmarsat.com/fleetbroadband Whilst the information has been prepared by Inmarsat in good faith, and all reasonable efforts have been made to ensure

More information

District of Columbia Courts Attachment 1 Video Conference Bridge Infrastructure Equipment Performance Specification

District of Columbia Courts Attachment 1 Video Conference Bridge Infrastructure Equipment Performance Specification 1.1 Multipoint Control Unit (MCU) A. The MCU shall be capable of supporting (20) continuous presence HD Video Ports at 720P/30Hz resolution and (40) continuous presence ports at 480P/30Hz resolution. B.

More information

Quality Estimation for Streamed VoIP Services

Quality Estimation for Streamed VoIP Services Quality Estimation for Streamed VoIP Services Mousa Al-Akhras and Hussein Zedan STRL, De Montfort University, Leicester, UK makhras@dmu.ac.uk, hzedan@dmu.ac.uk http://www.cse.dmu.ac.uk/strl/index.html

More information

Performance of Host Identity Protocol on Nokia Internet Tablet

Performance of Host Identity Protocol on Nokia Internet Tablet Performance of Host Identity Protocol on Nokia Internet Tablet Andrey Khurri Helsinki Institute for Information Technology HIP Research Group IETF 68 Prague March 23, 2007

More information

VoIP QoS on low speed links

VoIP QoS on low speed links Ivana Pezelj Croatian Academic and Research Network - CARNet J. Marohni a bb 0 Zagreb, Croatia Ivana.Pezelj@CARNet.hr QoS on low speed links Julije Ožegovi Faculty of Electrical Engineering, Mechanical

More information

CONNECTING TO LYNC/SKYPE FOR BUSINESS OVER THE INTERNET NETWORK PREP GUIDE

CONNECTING TO LYNC/SKYPE FOR BUSINESS OVER THE INTERNET NETWORK PREP GUIDE CONNECTING TO LYNC/SKYPE FOR BUSINESS OVER THE INTERNET NETWORK PREP GUIDE Engineering Version 1.3 June 3, 2015 Table of Contents Foreword... 3 Current Network... 4 Understanding Usage/Personas... 4 Modeling/Personas...

More information

Getting Started with. Avaya TM VoIP Monitoring Manager

Getting Started with. Avaya TM VoIP Monitoring Manager Getting Started with Avaya TM VoIP Monitoring Manager Contents AvayaTM VoIP Monitoring Manager 5 About This Guide 5 What is VoIP Monitoring Manager 5 Query Endpoints 5 Customize Query to Filter Based

More information

Performance of VMware vcenter (VC) Operations in a ROBO Environment TECHNICAL WHITE PAPER

Performance of VMware vcenter (VC) Operations in a ROBO Environment TECHNICAL WHITE PAPER Performance of VMware vcenter (VC) Operations in a ROBO Environment TECHNICAL WHITE PAPER Introduction Many VMware customers have virtualized their ROBO (Remote Office Branch Office) offices in order to

More information

VegaStream Information Note Considerations for a VoIP installation

VegaStream Information Note Considerations for a VoIP installation VegaStream Information Note Considerations for a VoIP installation To get the best out of a VoIP system, there are a number of items that need to be considered before and during installation. This document

More information

Using the ClearSight Analyzer To Troubleshoot the Top Five VoIP Problems And Troubleshooting Streaming Video

Using the ClearSight Analyzer To Troubleshoot the Top Five VoIP Problems And Troubleshooting Streaming Video Using the ClearSight Analyzer To Troubleshoot the Top Five VoIP Problems And Troubleshooting Streaming Video With the prevalence of Voice over IP applications within the enterprise, it is important to

More information

Analysis of QoS parameters of VOIP calls over Wireless Local Area Networks

Analysis of QoS parameters of VOIP calls over Wireless Local Area Networks Analysis of QoS parameters of VOIP calls over Wireless Local Area Networks Ayman Wazwaz, Computer Engineering Department, Palestine Polytechnic University, Hebron, Palestine, aymanw@ppu.edu Duaa sweity

More information