SOME SOFTWARE ISSUES OF A REAL-TIME MULTIMEDIA NETWORKING SYSTEM




Gabriel-Miro Muntean, Liam Murphy
Performance Engineering Laboratory, School of Electronic Engineering, Dublin City University, Dublin 9, Ireland
munteang@eeng.dcu.ie, murphyl@eeng.dcu.ie

Abstract

The transmission of related multimedia data causes problems because of its very large size and continuous nature. Unlike the majority of existing solutions for transmitting continuous media, which use connectionless protocols, we propose one that uses a connection-oriented protocol (TCP/IP). An object-oriented approach is used to build both server and client, allowing easier system debugging and expansion. We implemented a buffering mechanism which allows playback to continue for a period when the network load increases. Multithreading is used to solve problems which require concurrent solutions.

1. Introduction

The transmission of related audio, video, images, text and/or data among networked computers is known as multimedia networking. Audio and video streams have timing requirements, and must be played out in a continuous manner at the receiver. Beyond these individual timing requirements, a combined multimedia stream requires synchronization at the receiver to be understandable. Another problem with multimedia streams is their large file size. Much work has already been done [1] to reduce the size of multimedia files by using different compression algorithms, thereby reducing the bandwidth necessary for transmission and thus the network load. For the system we have built, we chose the MPEG format for compressing audio and video because of its high compression factor and its ability to adjust the size of the compressed stream depending on the requested quality. The requested and expected quality of transmission depends on where the multimedia stream is used. In some applications a high quality of service (QoS) is important. In other cases the expectations for QoS are lower, but a smooth flow of the multimedia stream is desirable. There have been proposals to improve multimedia transmission quality, such as traffic control and dynamic bandwidth renegotiation at the server [2], dynamic bandwidth allocation and flow/congestion control [3], rate shaping [4], and feedback mechanisms [5], [6]. The majority of these were either theoretical studies or simulations. Our client-server system implements real-time multimedia streaming over the network; the current paper discusses some of the software solutions used.

2. The System Overview

Our client-server system has two main parts: the client and the server application (Fig. 1). The implementation of both the client and server applications required a unit in charge of establishing and controlling the connections between them. Another unit acquires information (e.g. video, audio) at the server and plays or displays it at a client. A third unit is responsible for encoding (at the server) and decoding (at the client) the multimedia stream, while a fourth implements the feedback mechanism at both sides (client and server).

Figure 1: The structure of the Client-Server application

An object-oriented approach has been used in the design of the system, which has been decomposed into subsystems in order to meet the system goals. The implementation uses parallel processing threads which are in charge of particular tasks. When an event occurs (e.g. a connection request arrives, data is received, a user command is sent), the system is informed of it by a message (as in any other Windows application) and a dedicated thread is assigned to handle it. In order to reduce the quantity of data to be sent, we use MPEG compression for audio and video data [7]. Since MPEG encoding is computationally very complex [8], and we intend to handle real-time transmissions, we chose a hardware encoding solution. Because the feedback mechanism needs access to the MPEG stream components, we decided to implement our own software MPEG decoder for video, audio (layers 1, 2 and 3) and system streams.

3. Software Implementation Details

With the system architecture of Fig. 1 in mind, we now discuss in more detail some of the software problems that occurred and the solutions we propose.

Client-Server Communication

We have used the services offered by the Windows Sockets 2 API to bind to a particular port and IP address on a host, initiate and accept a connection, send and receive data, and close a connection.
A bi-directional connection, implemented and controlled by "Connection Managers", is needed for data transmission and for the implementation of the feedback algorithm. Our "Connection Managers" improve the standard communication scheme by using the Windows operating system message pump to call the correct function to build the connection or read incoming data, saving the time usually spent polling the sources for possible interruptions. Ideally, encoding, transmission over the network, decoding and playing or displaying multimedia should all happen at the same rate. Unfortunately, under heavier loading the network may delay different audio or video packets by different amounts. A buffer is therefore necessary at each receiver (i.e. client) to store arrived data until it is processed, and again after decoding until it is played or displayed. Our feedback mechanism also makes use of receiver-side buffers, and therefore a relatively complex buffering mechanism has been developed.

Buffering Mechanism

A circular system of buffers (Fig. 2) has been implemented at the client side. Incoming data is received and stored into a FREE buffer, which is then marked FULL. The Decoder Unit takes FULL buffers and decompresses the data, storing the ready-to-display or ready-to-play data into READY buffers.

Figure 2: The circular buffer system at a client

Separate threads play or display the data contained in the buffers. These buffers are marked ACTIVE while playing and, immediately after the process is over, are marked FREE. Thus we allocate a number of buffers once and reuse them, without losing time allocating and freeing them one by one as needed. One disadvantage of this scheme is the permanently high memory usage, even when there is no data to store in all the buffers; but this is less of a problem with today's memory capacities. By introducing separate pointers for the first FREE, FULL, READY and ACTIVE buffer respectively, we reduce to a minimum the time spent sequentially parsing the circular buffer structure. Such a circular system of buffers is allocated to each type of MPEG stream (audio, video and system), both at the server and at the client side.
Multithreading

Our system uses multithreading in both the server and client applications. At the server side, different threads are assigned to read data from a file and to send it to the client, as shown in Fig. 3. FillThread is in charge of filling the server buffer (SrvCircBuff) with the encoded data at the same time as the data is being read and sent for decoding by a local decoder thread. TxThread transmits data from SrvCircBuff to the client every time a signal comes from a timer. At the client side (Fig. 3), there are three threads for each instance of an audio or video stream. FillThread takes the encoded data received from the server and stores it in a circular buffer (CliCircBuff). DecThread decodes data taken from CliCircBuff and sends the decoded data to the audio or video driver, which puts it in the DriverBuff.

PlayThread plays or displays the decoded data already sent to the Play/Display Driver, according to the stream's properties.

Figure 3: Data flow through threads and buffers at both the server and client sides

Critical sections are used to protect shared resources (e.g. the circular buffers) from being accessed by more than one thread at a time. Events are used to signal threads (e.g. that a new buffer has been filled and is waiting to be decoded). The Windows message system wakes up a thread when an important event occurs, such as receiving a packet of data from the server. The system stream decoder needs to be able to decode both audio and video encoded data, so an extra number of threads have to be launched. Unfortunately, the CPU time slice given by the operating system to all the threads was not enough for smooth playback (especially of the audio stream). To keep up with the playing, the decoding thread had to be granted a higher priority than the other threads. This in turn meant that the system decoding thread also had to have a higher priority.

Streams Synchronization

The Synchronization Unit is not active unless a system stream is transmitted. The MPEG system decoder sends data to the video and audio circular buffer systems at irregular intervals. With the help of the decoding time stamps (DTS) and the presentation time stamps (PTS), the system, audio and video stream decoders are able to decode received data and play/display it at the appropriate times.

Figure 4: The Synchronization Unit: the System Clock, driven by the System Clock Reference, sends Synchro Events and Play Events to the audio and video threads

A System Clock is created by the Synchronization Unit at the client side (Fig. 4) when the server sends the first part of the requested system file. It is initialized and adjusted as the client decodes the System Clock Reference (SCR) components of the system stream. DTS and PTS are sent along with the decoded data to the play/display drivers. The System Clock repeatedly sends synchronization events (Synchro Event) to the audio playing thread and to the video displaying thread. If the two streams are not played or displayed simultaneously then, using DTS, PTS and the Synchro Event sent by the System Clock, the skew can be reduced to zero by adjusting the display rate of the video. We did not interfere with the audio playing thread because experience showed that any change in the rate at which audio data is played badly affects the overall quality.

4. Preliminary Results

The server was built using a multi-template and multi-document approach, so it can open more than one type of document template and, for each document template, more than one document at a time. Thus the server application can accept client connection requests while playing one or more MPEG or AVI audio, video and system files. The following charts show the buffer occupancy for both audio and video, in two cases: elementary streams and system stream. Next we discuss the results obtained when allocating a Circular Buffer of the type described earlier, with 5 equal-size buffers. The size of the buffers depends on the case: for audio we used 496-byte buffers, and for video 16384-byte buffers.

Figure 5: Audio Stream: Circular Buffer Occupancy vs. Time (EMPTY, DECODED, READY)

Figure 6: Video Stream: Circular Buffer Occupancy vs. Time (EMPTY, DECODED, READY)

Looking at Figures 5 and 6, in which the numbers of buffers in each state are plotted against time, we can see that, once the stream the user wants played has been chosen, the incoming data is decoded and some of the EMPTY buffers are filled; thus an increasing trend in the DECODED buffers can be observed. When the streams start to be played, the number of buffers sent to the playing driver increases.
In the case of the audio stream, whose decoding is less CPU time-consuming, the increase is sharper than that observed for the video elementary stream. To play a stream correctly to the end, we have to decode, in a timely fashion, enough data to submit to the play or display driver. When the decoding threads finish their jobs, the playing or displaying threads continue until the data they have received is exhausted. We note that the number of buffers chosen (5) is big enough to cover the requirements of both streams. To save memory, 25 buffers for audio decoding and 2 buffers for video decoding solve the problem; even a smaller number of buffers may be enough. The system stream has a much harder job to do in the same time, because its component streams have to be split before decoding is performed on each of them. Thus, looking at Fig. 7 and Fig. 8, which show the audio and video buffer occupancy respectively, we notice a more gradual increase in the decoded-data buffers and in the ready-to-play or ready-to-display ones. In the system case as well, the number of buffers used does not tend to exceed the number allocated.

Figure 7: System Stream: Audio Buffer Occupancy vs. Time (Encoded, Decoded, Sent To Driver)

Figure 8: System Stream: Video Buffer Occupancy vs. Time (Encoded, Sent To Driver)

5. Conclusion and Further Development

The prototype system is still under development. The "Connection Managers" run as required; they have been tested by sending files of both MPEG and AVI types and have yielded satisfactory results. The MPEG Decoding Unit successfully decodes MPEG-1 system, audio and video streams, with the help of the buffering scheme implemented. Some improvements may still be necessary regarding system performance. Streaming has been realized with the help of buffers: thus a multimedia stream can be decoded even though the whole file is not yet available.

References

[1] W. T. Ooi, B. Smith, S. Mukhopadhyay, H. H. Chan, S. Weiss and M. Chiu, "The Dalí Multimedia Software Library", SPIE Multimedia Computing and Networking 1999, San Jose, CA, January 25-27, http://www.cs.cornell.edu/zeno/papers/#dali
[2] B. Zheng and M. Atiquzzaman, "Traffic Management of Multimedia over ATM Networks", IEEE Communications Magazine, vol. 37, no. 1, Jan. 1999, pp. 33-38
[3] M. Krunz, "Bandwidth Allocation Strategies for Transporting Variable-Bit-Rate Video Traffic", IEEE Communications Magazine, vol. 37, no. 1, Jan. 1999, pp. 4-46
[4] B. Sheu and A. Ortega, "Microsystems Technology for Multimedia Applications", IEEE Press, May 1995
[5] P. Bindal and L. Murphy, "Multimedia Feedback Control in ATM Local Area Networks", Proceedings of the 35th ACM Southeast Conference, Murfreesboro, TN, April 2-4, 1997, pp. 243-25
[6] G. Muntean and L. Murphy, "An Object-Oriented Prototype System for Feedback Controlled Multimedia Networking", ISSC, University College Dublin, Ireland, June 29-3, 2
[7] ISO/IEC International Standard 11172, "MPEG-1 - Coding of Moving Pictures and Associated Audio for Digital Storage Media at up to about 1.5 Mbit/s", Nov. 1993
[8] D. LeGall, "MPEG: A Video Compression Standard for Multimedia Applications", Communications of the ACM, vol. 34, no. 4, April 1991, pp. 46-58