17th Polish Teletraffic Symposium 2012. Editors: Tadeusz Czachórski and Mateusz Nowak


Technical Program Committee:
Andrzej Bartoszewicz, Łódź University of Technology
Wojciech Burakowski, Warsaw University of Technology
Andrzej Chydziński, Silesian University of Technology, Gliwice
Tadeusz Czachórski, Institute of Theoretical and Applied Informatics PAS, Gliwice
Andrzej Duda, Laboratoire d'Informatique de Grenoble
Grzegorz Danilewicz, Poznań University of Technology
Zbigniew Dziong, Université de Quebec
Janusz Filipiak, AGH University of Science and Technology, Cracow
Mariusz Głąbowski, Poznań University of Technology
Adam Grzech, Wrocław University of Technology
Andrzej Jajszczyk, AGH University of Science and Technology, Cracow
Wojciech Kabaciński, Poznań University of Technology
Sylwester Kaczmarek, Gdańsk University of Technology
Jerzy Konorski, Gdańsk University of Technology
Józef Lubacz, Warsaw University of Technology
Wojciech Molisz, Gdańsk University of Technology
Andrzej R. Pach, AGH University of Science and Technology, Cracow
Zdzisław Papir, AGH University of Science and Technology, Cracow
Krzysztof Pawlikowski, University of Canterbury
Michał Pióro, Warsaw University of Technology
Maciej Stasiak, Poznań University of Technology
Krzysztof Walkowiak, Wrocław University of Technology
Józef Woźniak, Gdańsk University of Technology

Conference Chair: Tadeusz Czachórski, Institute of Theoretical and Applied Informatics, Polish Academy of Sciences (IITiS PAN)

Organizing Committee:
Chair: Mateusz Nowak, IITiS PAN
Joanna Domańska, IITiS PAN
Krzysztof Grochla, IITiS PAN

Cover design: Agnieszka Świstak
Editor: IITiS PAN
Co-Editor: Pracownia Komputerowa Jacka Skalmierskiego
ISBN

Foreword

The Polish Teletraffic Symposium (PTS) has been organised since 1994 on a rotational basis by the Universities of Technology located in Warsaw, Poznań, Gdańsk, Wrocław, and Łódź, as well as the AGH University of Science and Technology in Cracow and the Institute of Theoretical and Applied Informatics of the Polish Academy of Sciences in Gliwice. The aim of the Symposium is to provide a regular forum for discussion on the design, implementation and improvement of contemporary telecommunications and computer communication systems. Several international events, such as the Polish-German Symposia and the First European Teletraffic Seminar, originated from the activities of the Symposium. It brings together network operators and academic researchers to share their knowledge of telecommunications techniques. The development of new network technologies offering higher bandwidth, the deployment of mobile communications, modern services, multimedia, unlimited Internet access, etc., raise numerous problems and questions, e.g. how to interwork existing services with the Next Generation Internet, how to manage and optimise the transmission of growing traffic volumes, and how to enhance network performance. Among the present PTS topics of interest are teletraffic theory, modelling and optimisation, performance evaluation and modelling of communication protocols, traffic issues for the Internet, mobile systems and high-speed packet-switched networks, mobility models, traffic models for optical networks, network virtualization, performance analysis methods and simulation, traffic measurements, intelligent routing protocols, broadcast and multicast traffic control and management, and QoS/QoE issues. In the conference materials of the current 17th PTS edition we present 23 articles refereed by the members of the Program Committee. The articles are preceded by abstracts of four invited talks.

Tadeusz Czachórski


Contents

1. Halina Tarasiuk, QoS guarantees in NGN networks 1
2. Mariusz Głąbowski, Modelling multi-service switching networks 3
3. Krzysztof Walkowiak, Modeling of Content-Oriented Networks 5
4. Andrzej Chydziński, Queues with dropping functions 7
5. Maciej Sosnowski and Wojciech Burakowski, Analysis of the system with vacations under Poissonian input stream and constant service times 9
6. Yuan Li, Michał Pióro, and Mateusz Żotkiewicz, Robust Topology Design with Routing Optimization in Wireless Optical Networks 15
7. Mateusz Żotkiewicz and Michał Pióro, Mixed integer programming model of MCCA-aware acknowledging in Wireless Mesh Networks 17
8. Sylwester Kaczmarek, Magdalena Młynarczuk, and Paweł Zieńko, Performance Evaluation of Control Plane Functions in ASON/GMPLS Architecture 23
9. Michał Czarkowski, Sylwester Kaczmarek, and Maciej Wolff, Traffic Type Influence on Performance of OSPF QoS Routing
10. Sylwester Kaczmarek and Maciej Sac, Message Inter-Arrival and Inter-Departure Time Distributions in IMS/NGN Architecture
11. Krzysztof Gierłowski, Service and Path Discovery Extensions for Self-forming IEEE s Wireless Mesh Systems 45
12. Maciej Sobieraj, Maciej Stasiak, Piotr Zwierzykowski, and Joanna Weissenberg, Single Hysteresis Model for Multi-Service Networks with BPP Traffic
13. Janusz Kleban and Krzysztof Janecko, Packet Dispatching Scheme Employing Request Queues and Load Balancing for MSM Clos-Network Switches
14. Małgorzata Langer, Quality Management in 4G Wireless Networking Technology
15. Mariusz Głąbowski and Michał Stasiak, Effective-availability method for blocking probability calculation in switching networks with overflow links and point-to-point selection
16. Sławomir Hanczewski, Erlang's Ideal Grading with Multiplication of Links and BPP Traffic
17. Marcin Dziuba and Grzegorz Danilewicz, Performance evaluation of the MSMPS algorithm under different distribution traffic
18. Bartłomiej Dabiński, Damian Petrecki, and Paweł Świątek, Performance of bandwidth limiting mechanisms in networks with multi-layered virtualization
19. Michał Kucharzak and Krzysztof Walkowiak, A Combined Algorithm for Maximum Flow Trees Optimization in Overlay Multicast
20. Maciej Szostak and Krzysztof Walkowiak, Incremental Approach to Overlay Network Design Problems
21. Maciej Demkiewicz, Andrzej Kozik, Radosław Rudek, Patryk Schauer, and Paweł Świątek, Enabling determination of packet scheduling algorithms in network processors
22. Mariusz Fras and Jan Kwiatkowski, Quality Aware Virtual Service Delivery System
23. Krzysztof Walkowiak, Damian Bulira, and Davide Careglio, ILP Modeling of Many-to-many Transmissions in Computer Networks
24. Grzegorz Rzym and Krzysztof Wajda, Evaluation of Xen-based node for Future Internet IIP Initiative 129
25. Jerzy Martyna, Coalition Formation Among Secondary Base Stations with Incomplete Information in Cognitive Radio Networks
26. Jerzy Martyna, Quasi-offline Wireless Packet Scheduling for HDR Throughput Optimization
27. Tadeusz Czachórski, Krzysztof Grochla, Tomasz Nycz, and Ferhan Pekergin, Duration of boot-up storm period 145

QoS guarantees in NGN networks (Keynote lecture)

Halina Tarasiuk, Institute of Telecommunications, Warsaw University of Technology, Poland

ABSTRACT

Nowadays, we observe many efforts to deploy the Next Generation Networks (NGN) architecture in operational telecommunication networks. The objectives of NGN are to replace legacy telephone networks and to support multimedia services such as VoIP, IPTV, VoD, and broadband Internet. Multimedia services require end-to-end quality of service (QoS) guarantees. These guarantees concern data packet transfer (packet delay, jitter, losses) as well as call setup/release delay, and they require an implementation of mechanisms and algorithms at both the transport stratum and the service stratum of the NGN architecture. ITU-T recommendations of the architecture and IETF specifications of signalling protocols allow the implementation of the functional modules of a comprehensive signalling system supporting QoS guarantees in the network. However, there are neither detailed recommendations of complete procedures for the signalling system of the NGN architecture nor definitions of the detailed admission control algorithms required to assure QoS parameters. For the past ten years researchers have worked on these algorithms and procedures, and they now also focus on the next topic, the evolution from NGN to the Future Internet. The talk will cover the main aspects of QoS guarantees in the evolution from PSTN to NGN and further to the Future Internet. In particular, the talk will address proposals of end-to-end classes of service and admission control algorithms for selected groups of applications with strict QoS requirements. The talk will focus on both aspects of QoS guarantees: data transfer and the signalling system. For data transfer, algorithms and performance evaluation of QoS guarantees for so-called real-time and non-real-time multimedia applications will be presented. Regarding the signalling system, signalling procedures, their performance evaluation, and signalling overload control mechanisms will be discussed.

BIOGRAPHY OF THE SPEAKER

Halina Tarasiuk was born in Poland. She received her Master's degree in computer science from the Szczecin University of Technology in 1996 and her PhD degree in telecommunications from the Warsaw University of Technology. Since 1998 she has been with the Telecommunication Network Technologies Group at the Institute of Telecommunications, Warsaw University of Technology, and since 2004 she has been an assistant professor there. From 1999 to 2003 she collaborated with the Polish Telecom R&D Centre. She participated in the COST 257, COST 279, 5FP IST-AQUILA, and 6FP IST-EuQoS European projects, working on quality of service guarantees in heterogeneous networks: architecture, signalling systems, admission control algorithms, and queueing systems. She has also been involved in several national projects supported by the government as well as by telecom operators. Her current activities include the Future Internet Engineering project (2010- ), a Polish national project partially funded by the EU, in which she leads the IPv6 QoS Parallel Internet group.


Modelling multi-service switching networks (Keynote lecture)

Mariusz Głąbowski, Poznań University of Technology, Poland

ABSTRACT

The currently experienced increase in the popularity of multimedia services indicates that in future networks the services that enable dynamic adjustment of admitted resources to the volume of available resources will be commonly and widely used. Services offered in present-day multi-service networks, both wired and wireless, that can adapt to the load of the networks can be assigned to adaptive or elastic classes. The adaptive class allows for a decrease in the resources admitted for new calls while retaining, at the same time, their individual holding time. In the case of the elastic class, along with an increase in the load of the system it is possible to decrease the volume of resources allocated to new calls with a simultaneous extension of their holding time. The analysis of the effectiveness of resource allocation control algorithms in the case of elastic and adaptive services is based on so-called threshold mechanisms. The group of resource management mechanisms also includes, for example, bandwidth reservation, compression mechanisms, and traffic overflow. Any resource management solutions adopted by operators to counteract growing congestion require appropriate traffic analyses of the operating network systems and their optimal dimensioning. Until recently, the main emphasis has been put on the traffic analysis of links between telecommunication nodes. The increased capacity of these links and the introduction of technically advanced traffic control mechanisms have, however, resulted in a situation in which the switching networks of the nodes have again become topical and relevant. The aim of the talk is to present analytical models of multi-service switching networks with different resource and traffic control mechanisms. The presented models make it possible to properly evaluate the traffic load in multi-service switching networks and to support their efficient dimensioning and optimization. The presentation will include a comparative study of switching networks with different resource management mechanisms.

BIOGRAPHY OF THE SPEAKER

Mariusz Głąbowski received the M.Sc., Ph.D. and D.Sc. (Habilitation) degrees in telecommunication from the Poznan University of Technology, Poland, in 1997, 2001, and 2010, respectively. Since 1997 he has been working in the Department of Electronics and Telecommunications, Poznan University of Technology. He is engaged in research and teaching in the area of performance analysis and modelling of multi-service networks and switching systems. Prof. Głąbowski is the author/co-author of 4 books, 7 book chapters and over 100 papers published in communication journals and presented at national and international conferences. He has refereed papers for many international conferences and magazines, including IEEE Globecom, IEEE ICC, IEEE HPSR, IEEE Transactions on Communications, IEEE Communications Magazine, Computer Networks, IEEE Communications Letters, IEEE Transactions on Wireless Communications, Performance Evaluation, and European Transactions on Telecommunications. Prof. Głąbowski is a Senior Member of the IEEE Communications Society, a Member of the IEICE Communications Society, and an IARIA Fellow. He is also a vice-chair of the European Section of the IEICE.


Modeling of Content-Oriented Networks (Keynote lecture)

Krzysztof Walkowiak, Wrocław University of Technology, Poland

ABSTRACT

Multimedia content is currently becoming increasingly common and voluminous in the Internet. For instance, the Cisco Visual Networking Index forecasts that by 2016 all forms of video (TV, video on demand, Internet, and P2P) will constitute approximately 86 percent of global consumer traffic. Remarkable examples of emerging multimedia content-oriented network services include video streaming (e.g., YouTube), multicasting/broadcasting over IP networks (e.g., IPTV), and Content Delivery Networks (e.g., Akamai). However, the majority of existing network solutions are not fully prepared to provide effective delivery of time-sensitive multimedia traffic, partially due to the currently visible dominance of unicast (i.e., point-to-point) transmission. To tackle this challenge, the concept of content-oriented networking (CON) has emerged. This approach comprises a wide range of new solutions developed to deliver various content (mostly multimedia) to requesting users. The talk will focus on one aspect of the CON idea, namely how to deliver content effectively. In particular, the talk will describe how to model the various network flows proposed to provide effective content delivery, including multicasting, anycasting, Peer-to-Peer (P2P) systems, and overlays.

BIOGRAPHY OF THE SPEAKER

Krzysztof Walkowiak received the Ph.D. degree and the D.Sc. (habilitation) degree in computer science from the Wroclaw University of Technology, Poland, in 2000 and 2008, respectively. Currently, he is an Associate Professor at the Department of Systems and Computer Networks, Faculty of Electronics, Wroclaw University of Technology. His research interest is mainly focused on optimization of content-oriented networks; P2P systems; multicasting systems; Grid systems; network survivability; optimization of connection-oriented networks (MPLS, DWDM); elastic optical networks; and the application of soft-optimization techniques to the design of computer networks. Prof. Walkowiak has been involved in many research projects related to the optimization of computer networks. He received the Best Paper Award at the International Workshop on Design of Reliable Communication Networks (DRCN 2009). Moreover, he has been consulting projects for large companies including Ernst and Young, Skanska, TP SA, PZU, PKO BP, and Energia Pro. Prof. Walkowiak has published more than 150 scientific papers. He serves as a reviewer for many international journals and is actively involved in many international conferences. Prof. Walkowiak is a member of IEEE and ComSoc.


Queues with dropping functions (Keynote lecture)

Andrzej Chydziński, Silesian University of Technology, Poland

ABSTRACT

We will deal with queueing systems in which a packet arriving at the queue is dropped with a probability that depends on the queue size observed upon the packet's arrival. The mapping of queue sizes into dropping probabilities is called the dropping function, and it can have any form. The analysis of such queues was initially motivated by the well-known RED algorithm, designed for active queue management (AQM) in Internet routers. The RED algorithm uses a linear dropping function; some other AQM schemes use different dropping functions. Apart from AQM applications, queues with dropping functions have a deep universal sense: using dropping functions we can control the loss ratio, throughput, queue size, queueing delay, queue size variance, etc. These properties open up wide areas of application in traffic control. Analytical results for queues with dropping functions will be shown, including formulas for the queue size distribution, loss ratio and queueing delay. Then, several examples demonstrating the ability to control traffic using dropping functions will be provided. Finally, a summary of future work and open problems will be given.

BIOGRAPHY OF THE SPEAKER

Andrzej Chydziński received his MS (in applied mathematics), PhD (in computer science) and DSc (in computer science) degrees from the Silesian University of Technology, Gliwice, Poland, in 1997, 2002 and 2008, respectively. He is currently a professor in the Institute of Informatics of this university. His academic interests are in computer networking, in particular performance evaluation of computer networks, the Future Internet, active queue management, mathematical modelling, queueing theory and discrete-event network simulators. Prof. Chydzinski has authored and co-authored 3 books, about 40 conference papers and about 40 journal articles, including papers in widely recognized journals such as Telecommunication Systems, Performance Evaluation, Pattern Recognition, Microprocessors and Microsystems, Queueing Systems, Stochastic Models, Mathematical Problems in Engineering, and Applied Mathematical Modelling. He also reviews articles for high-quality journals and networking conferences. Prof. Chydzinski has been the leader of several scientific projects funded by the Polish state and the European Union. Since January 2011 he has been an IEEE Senior Member.
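The dropping-function mechanism described in the abstract above is easy to illustrate. The sketch below is only a toy slotted-queue model (the function names, parameters and numeric values are our own assumptions, not from the talk); it applies a RED-style linear dropping function and measures the resulting loss ratio:

```python
import random

def linear_dropping(q, q_min=5, q_max=20):
    """RED-style linear dropping function: accept everything below
    q_min, drop everything above q_max, interpolate in between."""
    if q <= q_min:
        return 0.0
    if q >= q_max:
        return 1.0
    return (q - q_min) / (q_max - q_min)

def simulate(arrival_p=0.6, service_p=0.5, steps=100_000, seed=7):
    """Toy slotted queue: per slot, one arrival attempt and one
    service attempt; an arrival is dropped with probability given
    by the dropping function of the current queue size."""
    random.seed(seed)
    q = arrived = dropped = 0
    for _ in range(steps):
        if random.random() < arrival_p:
            arrived += 1
            if random.random() < linear_dropping(q):
                dropped += 1
            else:
                q += 1
        if q > 0 and random.random() < service_p:
            q -= 1
    return dropped / arrived

loss = simulate()  # offered load exceeds capacity, so loss > 0
```

Because the offered load (0.6 arrivals per slot) exceeds the service capacity (0.5 per slot), the queue settles where the dropping probability absorbs the excess, illustrating how the shape of the dropping function trades loss ratio against queue size.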


Analysis of the system with vacations under Poissonian input stream and constant service times

M. Sosnowski and W. Burakowski
Institute of Telecommunications, Warsaw University of Technology, Poland

Abstract: In this paper we present approximate formulas for the mean waiting times and for buffer dimensioning in a system with vacations fed by a Poissonian stream with constant service times. In the considered system the time intervals of availability/non-availability of the service are constant and alternate according to an assumed cycle. More precisely, in our approach we first derive the mean waiting times and, on this basis, we estimate the buffer size required to keep losses below a predefined value. The accuracy of the presented analytical formulas is satisfactory. The formulas were used for dimensioning the IIP System [1].

Index Terms: system with vacations, approximate analysis, mean waiting times, buffer dimensioning

I. INTRODUCTION

In this paper we study a system with vacations fed by a Poissonian stream with constant service times. In the considered system the time intervals of availability/non-availability of the service are constant and alternate according to an assumed cycle. The analysis focuses on deriving analytical formulas to estimate the mean waiting times and then, on this basis, to estimate the buffer size required to satisfy an assumed, predefined level of losses. The considered system models well a part of the IIP System [1], based on a virtualized network infrastructure, that corresponds to the organization of virtual links established for particular Parallel Internets. These Parallel Internets should work in isolation. To establish separate virtual links delegated to particular Parallel Internets, we manage access to a physical link by a cycle-based scheduler, as depicted in Fig. 1.

Fig. 1: Cycle-based scheduler creating virtual links for 4 Parallel Internets working in isolation

To the best of the authors' knowledge, such a system has not been analyzed in the literature. Most papers, e.g. [2], [3], [4], deal with TDMA systems, in which data are transmitted only in chosen time slots.

II. THE SYSTEM

A. The system with vacations

The considered queueing system is depicted in Fig. 2. It belongs to the family of systems with vacations: the system alternates between active and vacation periods. During the active periods (T_A) packets are served, while during the vacation periods (T_V) the service is not available. Moreover, we assume an infinite buffer in the system. The queueing discipline is FIFO.

Fig. 2: Comparison of the systems: λ - arrival rate (Poissonian stream), L - packet size, B and B_V - buffer sizes, T_A - active period, T_V - vacation period, C and C_V - output link rates

Additional assumptions of the system are the following:
- packets arrive at the system according to a Poisson process with rate λ;
- the active (T_A) and vacation (T_V) periods are constant and they alternate;
- the link capacity is equal to C_V bps;
- the packet size (L) and, as a consequence, the service times (h_v) of the packets are constant (h_v = L/C_V);
- the length of the active period T_A is a multiple of h_v (T_A = n h_v, n = 1, 2, ...).

In this system, the available capacity for the considered stream, denoted by C, is:

C = C_V * T_A / (T_A + T_V).   (1)
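Relation (1) is a simple time-average over the cycle; a minimal sketch with illustrative numbers (the values below are our own, not from the paper):

```python
def available_capacity(C_V, T_A, T_V):
    """Eq. (1): capacity available to the stream averaged over a cycle."""
    return C_V * T_A / (T_A + T_V)

# Illustrative: C_V = 1 Gbit/s, T_A = 2 ms active, T_V = 4 ms vacation
C = available_capacity(1e9, 2e-3, 4e-3)   # one third of C_V
# The equivalent full-availability service time scales accordingly:
# h = L/C = h_v * (T_A + T_V) / T_A
```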

B. Fully available system

In our analysis we will refer to the fully available system with Poissonian input and deterministic service time, i.e. the M/D/1 system. In particular, we want to exploit the formula for the mean waiting time in such a system when the arrival rate λ, the packet size L, and the output link rate C (see (1); C is equivalent to the available link rate in the system with vacations) are the same for both considered systems. Note that the average load ρ (in the system with vacations, during the active period) is then also the same in both systems. The difference between the systems with and without vacations is the service time: in the fully available system the service time is h = L/C. For the M/D/1 system, we calculate the mean waiting time E[W_F] from the well-known Pollaczek-Khinchine formula:

E[W_F] = ρ h_res / (1 - ρ),   (2)

where ρ = λh and h_res is the residual service time (in the case of h = constant, h_res = h/2).

III. ANALYSIS

A. Mean waiting time in the system with vacations

A brief description of the approach to calculating mean waiting times for the considered system with vacations can be found in [5]. Let us start the analysis from the moment a test packet arrives at the system. Thanks to the PASTA (Poisson Arrivals See Time Averages) property, this test packet sees the system at a random moment. The packet can arrive during either the active period or the vacation period. When the packet arrives during the vacation period it cannot be served immediately even if there are no other packets in the system; it must wait for transmission at least (if no other packets are in the system) for the remaining part of the period T_V. On the other hand, when the packet arrives during the active period it can be served immediately (when there are no other packets in the system) provided the remaining part of this period is not less than h_v. We will call this part the pure active period.
Let us define:

P_V = T_V / (T_V + T_A),   P_A = (T_A - h_v) / (T_V + T_A),   P_hv = h_v / (T_V + T_A),   (3)

where P_V, P_A and P_hv denote the probabilities that a packet arrives during the vacation period, during the active period (without its last part of length h_v), and during the last part (of length h_v) of the active period, respectively. Our approximate formula for the mean waiting time has the following form:

E[W_V] = P_A E[W_F] + P_V (T_Vres + E[W_F]) + P_hv (h_vres + T_V + E[W_F]),   (4)

where E[W_F] is calculated from (2), T_Vres is the residual vacation time (T_Vres = T_V / 2) and h_vres is the residual packet service time (h_vres = h_v / 2). Formula (4) can be simplified to the following form:

E[W_V] = E[W_F] + (T_V + h_v)^2 / (2 (T_A + T_V)).   (5)

Let us point out that in the limit case T_V = 0 the mean waiting time is the same as in the fully available system, while when T_V tends to infinity the mean waiting time also tends to infinity. Formula (5) is not proved in a rigorous mathematical way; it is only deduced. We assume that if a packet arrives during the pure active period, it experiences a delay similar to that in the fully available reference system; this happens with probability P_A. Furthermore, when a packet arrives in the periods in which it cannot be transmitted immediately (even when no other packets are in the system), it must wait until the active period starts. In this case we deduce that the packet will wait until the moment the active period starts, plus the service times of the packets already in the system. Let us remark that formulas (4) and (5) do not take into account the situation in which a packet must wait several active periods before its transmission starts. Therefore, we can expect the mean waiting times calculated from (5) to be lower than the exact values.
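Formulas (1), (2) and (5) are straightforward to evaluate, and the approximation can be compared against a direct simulation of the vacation system. The sketch below is ours (function names and the simulation design are not from the paper); it reproduces the first row of Table I (ρ = 0.2, T_A/T_V = 2h_v/4h_v, with h_v = 1):

```python
import math
import random

def mean_wait_vacation(rho, h_v, T_A, T_V):
    """Approximate mean waiting time, eq. (5):
    E[W_V] = E[W_F] + (T_V + h_v)^2 / (2 (T_A + T_V))."""
    h = h_v * (T_A + T_V) / T_A           # service time at rate C, cf. (1)
    E_WF = rho * (h / 2.0) / (1.0 - rho)  # M/D/1 waiting time, eq. (2)
    return E_WF + (T_V + h_v) ** 2 / (2.0 * (T_A + T_V))

def simulate_vacation(rho, h_v, T_A, T_V, n=200_000, seed=1):
    """FIFO simulation of the vacation system: Poisson arrivals,
    constant service h_v; a service may start only if it fits into
    the remaining part of the current active period."""
    random.seed(seed)
    h = h_v * (T_A + T_V) / T_A
    lam = rho / h                         # arrival rate yielding load rho
    cycle = T_A + T_V
    t = server_free = total_wait = 0.0
    for _ in range(n):
        t += random.expovariate(lam)      # next Poisson arrival instant
        start = max(t, server_free)
        if start % cycle > T_A - h_v:     # does not fit: defer to next cycle
            start = (math.floor(start / cycle) + 1) * cycle
        total_wait += start - t
        server_free = start + h_v
    return total_wait / n

anal = mean_wait_vacation(0.2, 1.0, 2.0, 4.0)  # ~2.46; Table I reports 2.5
sim = simulate_vacation(0.2, 1.0, 2.0, 4.0)    # Table I reports 2.4
```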
Nevertheless, formula (5) is relatively simple and takes into account in a direct way the impact of the lengths of the active and vacation periods on the packet delay. Now, let us show some numerical results illustrating the accuracy of formula (5). In Tables I and II we present the mean waiting times for systems with vacations differing in the lengths of the active and vacation periods, under different traffic loads ρ and various T_A/T_V relations. For the cases presented in Table I the analytical results are very close to the simulation results, the difference being only a few percent. For the cases presented in Table II we observe somewhat lower accuracy of the analytical results compared to the simulation, but the difference is still at a satisfactory level, about 15% for most of the studied cases. The accuracy of (5) was also verified for other cycle durations and the results were similar to the ones presented here. We can conclude that formula (5) gives very accurate results for the system with vacations if at least one of the two following conditions is met: the cycle is short (at most 15h) or the T_A/T_V ratio is small (at most 1/4). The results are also accurate if both conditions are close to these borders (e.g. T_A/T_V = 10h_v/20h_v). Notice that in practical systems, when we apply the cycle-based access to a shared link, we will rather design short cycles than long ones, since we aim at a low delay.

TABLE I: COMPARISON OF MEAN WAITING TIMES (SHORT CYCLES)

       T_A/T_V = 2h_v/4h_v        T_A/T_V = 10h_v/20h_v
ρ      anal.   sim.   diff.       anal.   sim.   diff.
0.2     2.5     2.4    3%          7.7     7.9   -2%
0.4     3.1     3.0    4%          8.4     8.6   -3%
0.6     4.3     4.2    3%          9.6     9.8   -2%
0.8     8.1     7.9    3%         13.4    13.3    0%
0.9    15.6    15.4    1%         20.9    20.6    1%
0.94   25.6    25.2    2%         30.9    30.6    1%
0.96   38.1    37.3    2%         43.4    43.5    0%

TABLE II: COMPARISON OF MEAN WAITING TIMES (LONG CYCLES)

       T_A/T_V = 50h_v/100h_v     T_A/T_V = 100h_v/200h_v
ρ      anal.   sim.   diff.       anal.   sim.   diff.
0.2    34.4    36.5    -6%        67.7    72.2    -6%
0.4    35.0    39.4   -11%        68.3    77.8   -12%
0.6    36.3    42.7   -15%        69.6    84.4   -18%
0.8    40.0    47.5   -16%        73.3    92.3   -21%
0.9    47.5    54.6   -13%        80.8   100.1   -19%
0.94   57.5    63.9   -10%        90.8   110.0   -17%
0.96   70.0    76.1    -8%       103.3   122.2   -15%

B. Buffer dimensioning

We now present an approximation for buffer dimensioning in the system with vacations. The target is to dimension the buffer as small as possible while assuring that packet losses stay below a predefined value P_loss, e.g. P_loss = 10^-3. To do this in an exact way, we would need to know the queue size distribution. The presented approach assumes that we only know the mean waiting time in the system, as calculated from (5). Of course, we could use the well-known Markov inequality (6), but it was checked that it leads to an essential over-dimensioning of the buffer size and, as a consequence, those results are not reported in the paper. The Markov inequality states:

P(X >= n) <= E[X] / n,   (6)

where E[X] is the mean value of the random variable X. Therefore, the approach investigated in the paper approximates the queue size distribution by a distribution described by only one parameter: we assume that the queue size distribution in the system with vacations can be approximated by a geometric distribution.

Fig. 3: Queue state distribution obtained from simulation (n = 0..20), ρ = 0.9
1) Queue state distribution for the system with vacations - simulation results: In this subsection we show the queue state distribution for selected systems with vacations differing in the lengths of the active and vacation periods. In Figs. 3 and 4 one can observe the queue state distribution in the system with vacations for different values of T_A/T_V. In the presented curves the confidence intervals at the 95% level are negligibly small and are therefore not depicted. The presented simulation results indicate that approximating these characteristics by geometric distributions is justified, although one can observe differences in the first phase of the curves.

Fig. 4: Queue state distribution obtained from simulation (n = ), ρ = 0.9

2) Approximation by the geometric distribution: As is known, the queue state distribution in the M/M/1 system follows the geometric distribution and has the following form:

P_Q(0) = P(0) + P(1),
P_Q(n) = P(n+1), n > 0,   (7)

where P(n) = ρ_g^n (1 - ρ_g) is the probability that the system is in state n and ρ_g is the server load. Therefore,

P_Q(0) = 1 - ρ_g^2,
P_Q(n) = ρ_g^(n+1) (1 - ρ_g), n > 0.   (8)

So, the mean queue state in the M/M/1 system is:

E[n] = sum_{n=0}^{inf} n P_Q(n) = ρ_g^2 / (1 - ρ_g).   (9)
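The geometric fit and the resulting buffer rule are easy to code. The sketch below is ours (helper names are not from the paper); it inverts (9) to obtain ρ_g from a given mean queue state, then picks the smallest buffer whose geometric tail probability stays below a target P_loss, using the E[n] value 0.92 from the first row of Table IV:

```python
import math

def rho_g_from_mean(En):
    """Invert E[n] = rho_g^2 / (1 - rho_g), eq. (9): the positive
    root of rho^2 + En*rho - En = 0."""
    return (math.sqrt(En ** 2 + 4.0 * En) - En) / 2.0

def buffer_size(En, p_loss=1e-3):
    """Smallest buffer B whose geometric tail stays below p_loss."""
    r = rho_g_from_mean(En)
    return math.ceil(math.log(p_loss) / math.log(r)) - 1

# E[n] = 0.92: first row of Table IV (T_A/T_V = 4h_v/4h_v, rho = 0.6)
r = rho_g_from_mean(0.92)   # ~0.604
B = buffer_size(0.92)       # buffer required for P_loss = 1e-3
```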

19 12 Then we calculate ρ g (the parameter of the geometric distribution) from the following formula: (E[n])2 + 4E[n] E[n] ρ g =, (10) 2 where E[n] = E[n V ] = λe[w V ] and E[W V ] is done by (5). In the Fig. 5, 6, 7 and 8 we show the comparisons between queue state characteristics obtained by the simulation and by approximation of geometric distribution. These results correspond with the exemplary system with vacations when T A /T V = 10h v /30h v with the load of ρ = 0.6 and ρ = 0.9. One can observe that the approximation by geometric distribution gives larger values for the tail of the distribution. It is important since we want to dimension buffer size for rather low values of losses, e.g or less. After some algebra, we have the final formula for buffer size dimensioning, which is the following: ln(ploss ) B = 1 (11) ln(ρ g ) where x denotes the minimum integral value greater or equal x. For comparison, the formula to dimension buffer size in the case of REM (Rate Envelope Multiplexing) multiplexing [6] is: 2B ρ = 2B ln(p loss ), (12) and it can be transformed to ln(p loss ) B = 2 2. (13) ρ Fig. 6: Queue state distribution obtained from simulation compared with geometric distribution (n = 0..20), ρ = 0.6 Fig. 7: Queue state distribution obtained from simulation compared with geometric distribution (n = 0..25), ρ = 0.9 C. Results Now, we present the results for buffer dimensioning when we have the system with vacations and we want to apply the REM multiplexing. In Table II we show the results (B) of required buffer size for the system without vacations and REM multiplexing. The buffer size B opt. is obtained by simulation Fig. 8: Queue state distribution obtained from simulation compared with geometric distribution (n = ), ρ = 0.9 Fig. 5: Queue state distribution obtained from simulation compared with geometric distribution (n = 0..10), ρ = 0.6 and B over. - indicates the relative error. 
Table IV presents the results of the required buffer size for the selected systems with vacations, assuming P_loss = 10^-3. The reported results show that our approach always gives an overestimation of the required buffer size. This overestimation is about 100%. Thus, the results are rather positive taking into account that the method is based on the approximation of the mean waiting time value only.

TABLE III: COMPARISON OF BUFFER SIZE FOR THE M/D/1 SYSTEM AND P_loss = 10^-3

  ρ      B_opt.   B      B_over.
  0,—    —        —      —%
  0,—    —        —      —%
  0,—    —        —      —%
  0,—    —        —      —%

TABLE IV: MEASURED LOSS PROBABILITY IN THE SYSTEM WITH VACATIONS

T_A/T_V = 4h_v/4h_v
  ρ      E[n] sim.  E[n] anal.  ρ_g    B_opt.  B    B_over.  P_loss
  0,6    0,92       0,92        0,—    —       —    —%       4,30E-06
  0,8    2,24       2,22        0,—    —       —    —%       7,90E-06
  0,9    4,76       4,75        0,—    —       —    —%       1,87E-05
  0,95   9,78       9,78        0,—    —       —    —%       3,03E-05

T_A/T_V = 2h_v/6h_v
  ρ      E[n] sim.  E[n] anal.  ρ_g    B_opt.  B    B_over.  P_loss
  0,6    0,86       0,91        0,—    —       —    —%       3,20E-06
  0,8    2,12       2,21        0,—    —       —    —%       1,63E-05
  0,9    4,64       4,74        0,—    —       —    —%       1,40E-05
  0,95   9,27       9,75        0,—    —       —    —%       2,24E-05

T_A/T_V = 20h_v/20h_v
  ρ      E[n] sim.  E[n] anal.  ρ_g    B_opt.  B    B_over.  P_loss
  0,6    2,49       2,1         0,—    —       —    —%       0,00E+00
  0,8    4,38       3,8         0,—    —       —    —%       6,00E-07
  0,9    7,18       6,53        0,—    —       —    —%       5,10E-06
  0,95   12,35      11,64       0,—    —       —    —%       5,20E-06

T_A/T_V = 10h_v/30h_v
  ρ      E[n] sim.  E[n] anal.  ρ_g    B_opt.  B    B_over.  P_loss
  0,6    2,22       2,25        0,—    —       —    —%       0,00E+00
  0,8    3,87       4           0,—    —       —    —%       0,00E+00
  0,9    6,34       6,75        0,—    —       —    —%       4,00E-06
  0,95   12,06      11,88       0,—    —       —    —%       3,40E-06

where:
  ρ_g — parameter of the geometric distribution calculated from (10),
  B_opt. — the buffer size that provides the loss probability at the 10^-3 level, obtained from simulation,
  B — the buffer size calculated from (11),
  B_over. — percentage oversize of buffer B,
  P_loss — measured loss probability for buffer B (95% confidence intervals are at the 10^-7 level).

IV. SUMMARY
In the paper we presented the analysis of the system with vacations fed by a Poisson stream, with constant service times and constant lengths of the active and vacation periods. For this system we have given approximate analytical formulas for the mean waiting time and for buffer dimensioning. The analytical results were compared with simulation. The accuracy of the approximation is satisfactory. The described methods were used to dimension virtual links in the IIP System built by virtualization of the network infrastructure.

REFERENCES
[1] W. Burakowski et al., "Virtualized network infrastructure supporting co-existence of Parallel Internets," 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2012), August 8-10, 2012.
[2] S. S. Lam, "Delay analysis of a Time Division Multiple Access (TDMA) channel," IEEE Trans. on Comm., vol. 25, Dec. 1977.
[3] K. T. Ko and B. Davis, "Delay analysis for a TDMA channel with contiguous output and Poisson message arrival," IEEE Trans. Commun., vol. COM-32, no. 6, Jun.
[4] I. Rubin, "Message delays in FDMA and TDMA communication channels," IEEE Transactions on Communications, vol. COM-27, 1979.
[5] M. Sosnowski and W. Burakowski, "Evaluation of mean waiting time in the system with vacations," Student Poster Session, 24th International Teletraffic Congress, September 4-7, 2012.
[6] J. Roberts (ed.), Information technologies and science: COST 224: Performance evaluation and design of multiservice networks, EUR en, 1992.


Robust Topology Design with Routing Optimization in Wireless Optical Networks
Yuan Li, Michał Pióro and Mateusz Żotkiewicz
Department of Electrical and Information Technology, Lund University, Lund, Sweden
Institute of Telecommunications, Warsaw University of Technology, Warsaw, Poland

Abstract—Optical wireless communications (OWC) is a promising technology for next generation wireless networks. It is able to assure high bandwidth links that do not suffer from radio interference. However, a major problem faced by OWC is that optical laser beams are affected by weather conditions, which may decrease the reliability of wireless optical networks. In this paper, we present an optimization model for designing robust topologies for mesh networks which combine optical fiber and optical wireless technologies.

I. INTRODUCTION
During the past decade, several kinds of wireless networks based on radio frequency (RF) have been deployed and thoroughly studied. The deficiencies in the nature of RF, strong interference, and the expensive frequency spectrum limit its further development. An alternative solution for certain applications (for example, MANs) is free space optics (FSO), also called optical wireless communications (OWC). FSO is a promising technology for the next generation wireless broadband networks [1]. An optical wireless link is a laser beam which needs two laser-photodetector transceivers, one on each end of the link. The laser beam has a wavelength in the micrometer range, not in the traditional electro-magnetic radio spectrum, so there is no need for any municipal license approvals. FSO capitalizes on existing technology to quickly establish optical links that support Gb/s rates over distances of a few kilometers and can bring ultra-high data rates to residential and enterprise areas. Additionally, an FSO link is very cheap and easy to install compared with fiber.
In summary, with its license-free operation, interference immunity and cost effectiveness, FSO can be a good complement to traditional RF-based wireless technologies. FSO is a relatively new technology requiring multidisciplinary study, involving a wide range of areas including: optical design, optoelectronics, electronics design, channel modeling, communications and information theory, modulation and equalization, and wireless optical network architectures and topology design, among many others [2]. The research in FSO networking and related design problems faces several challenges. For example: a pair of nodes can be connected only if they are within a line of sight; some methods should be explored to keep two photodetectors directed at each other if they are far away or some disturbances occur; and an FSO link is vulnerable to weather conditions like rain and fog. Certainly, some research has already been done in the field of FSO network design and optimization, i.e., the field this report is focused on. For example, papers [3], [4] concentrate on how to design a network topology to carry as much traffic as possible. Mathematical models are formulated and various algorithms are discussed. Paper [5] proposes a tiered architecture where the optical links form a backbone, and the other links are radio based. The objective is to maximize network connectivity, which is a measure of the robustness of a network. In this report, we give a preliminary model for the joint topology and routing optimization problem. We consider a hybrid network which includes both optical fiber (OF) links and optical wireless (OW) links. One reason to use OF links is that they operate using wavelengths similar to those used by OW links [6]. Compared with OF links, OW links are cheaper but less reliable, since they can be easily affected by weather conditions. OF links can be used in the network to improve reliability.
Additionally, costs of the nodes and building/roof rental costs are also considered.

II. NOTATION
The network is represented by an undirected graph G composed of the set of vertices (nodes) V and the set of edges (links) E. A link e ∈ E represents an undirected pair {v, w} of two distinct vertices in V. The set of all edges in E incident with vertex v ∈ V is denoted by δ(v). A demand d ∈ D (where D ⊆ V² \ {(v, v) : v ∈ V} is the set of demands) is represented by a directed pair (s_d, t_d) of nodes. It is required that the flows are directed. Therefore, each link e = {v, w} is associated with two directed arcs e' = (v, w) and e'' = (w, v), with the arc flows (related to individual demands d ∈ D) denoted by x_{e'd} and x_{e''d}. Consequently, A = {e', e'' : e ∈ E} will denote the set of all the so defined directed arcs of graph G, with directed flows x_{ad}, a ∈ A, d ∈ D. For a node v ∈ V, δ⁺(v) will denote the set of all directed arcs a ∈ A outgoing from node v, and δ⁻(v) is the set of directed arcs a ∈ A incoming to node v.

III. MODEL
A. Cost of nodes and links
The cost function is defined as

  C = Σ_{e∈E} κ_e u_e + Σ_{v∈V} χ_v U_v    (1)

where χ_v is the cost of node v (containing the rental cost of the roof of the building), κ_e is the cost of FSO links, u = (u_e : e ∈ E) indicates whether a link e ∈ E is installed (u_e = 1) or not (u_e = 0), and U_v is the binary variable indicating whether node v ∈ V is installed, which can be guaranteed by the constraint

  u_e ≤ U_v,  v ∈ V, e ∈ δ(v).    (2)

B. OF links and FSO links
Two kinds of links, OF links and FSO links, are used. The set of links E includes two sets: F (FSO links) and R (OF links). The cost function becomes

  C = Σ_{e∈E} κ_e u_e + Σ_{v∈V} (χ_v^F U_v^F + χ_v^r U_v^r),    (3)

where χ_v^F is the cost of renting the roof of the building and χ_v^r is the cost of installing a router in that building. U_v^F and U_v^r are binary variables indicating whether a node is incident to at least one FSO link (U_v^F = 1) and to at least one link (FSO or OF) (U_v^r = 1), respectively. The constraints are

  u_e ≤ U_v^F,  v ∈ V, e ∈ δ(v) ∩ F    (4a)
  u_e ≤ U_v^r,  v ∈ V, e ∈ δ(v).    (4b)

C. Degree limit
In practice, the number of optical interfaces installed in the network nodes is limited. Let b(v) (v ∈ V) denote the upper limit on the number of optical interfaces for node v. Then, this limiting requirement is taken into account by adding the constraint

  Σ_{e∈δ(v)∩F} u_e ≤ b(v),  v ∈ V.    (5)

D. Full two-connected formulation
Assuming failures of FSO links, the task is to install the links in the most cost-effective way that assures two-connectivity between the end nodes of the connection demands. Two-connectivity means that each demand d ∈ D is provided with two link-disjoint paths. The resulting formulation is model (6).
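As an aside, two-connectivity of a candidate link installation can be checked directly, without any MIP machinery: by Menger's theorem, two link-disjoint s_d–t_d paths exist exactly when the maximum s_d–t_d flow under unit link capacities is at least 2. A minimal sketch (the topology below is an illustrative assumption, not an instance from the paper):

```python
from collections import defaultdict, deque

def max_flow(adj, cap, s, t):
    """Edmonds-Karp max flow on unit-capacity arcs (residual dict `cap`)."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:        # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Hypothetical topology: two link-disjoint s-t paths exist (s-a-t, s-b-t).
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("a", "b")]
adj = defaultdict(set)
cap = defaultdict(int)
for u, v in edges:        # each undirected link -> two unit-capacity arcs
    adj[u].add(v); adj[v].add(u)
    cap[(u, v)] += 1; cap[(v, u)] += 1

flow_value = max_flow(adj, cap, "s", "t")
print(flow_value >= 2)    # two-connected for demand (s, t)?
```

This mirrors how constraints (6b)-(6e) enforce a source outflow of 2 while capping each link's flow by its installation variable.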
  min C = Σ_{e∈E} κ_e u_e + Σ_{v∈V} (χ_v^F U_v^F + χ_v^r U_v^r)    (6a)
  Σ_{a∈δ⁺(v)} x_ad − Σ_{a∈δ⁻(v)} x_ad = 0,  v ∈ V \ {s_d, t_d}, d ∈ D    (6b)
  Σ_{a∈δ⁺(s_d)} x_ad − Σ_{a∈δ⁻(s_d)} x_ad = 2,  d ∈ D    (6c)
  x_{e'd} ≤ u_e,  x_{e''d} ≤ u_e,  e ∈ E_w    (6d)
  x_{e'd} ≤ 2u_e,  x_{e''d} ≤ 2u_e,  e ∈ E_f    (6e)
  Σ_{e∈δ(v)∩F} u_e ≤ b(v),  v ∈ V    (6f)
  u_e ≤ U_v^F,  v ∈ V, e ∈ δ(v) ∩ F    (6g)
  u_e ≤ U_v^r,  v ∈ V, e ∈ δ(v)    (6h)
  x_ad ≥ 0,  a ∈ A    (6i)
  u_e ∈ {0, 1},  e ∈ E    (6j)
  U_v^F ≥ 0,  U_v^r ≥ 0,  v ∈ V    (6k)

In a solution, the flow allocation vector x = (x_ad : a ∈ A, d ∈ D) will be binary. Equations (6b)-(6e) make sure that each demand d ∈ D will be two-connected, that is, provided with a pair of link-disjoint paths (p_d(x), q_d(x)) with p_d(x) ∪ q_d(x) = {a ∈ A : x_ad = 1} from s_d to t_d. The so defined pairs of paths (p_d(x), q_d(x)), d ∈ D, may contain loops (even isolated ones). The loops can be easily eliminated by post-processing the solution of the considered problem (6). This is done by solving the following flow allocation problem for the given (optimal) u*:

  min C = Σ_{d∈D} Σ_{a∈A} x_ad    (7a)
  Σ_{a∈δ⁺(v)} x_ad − Σ_{a∈δ⁻(v)} x_ad = 0,  v ∈ V \ {s_d, t_d}, d ∈ D    (7b)
  Σ_{a∈δ⁺(s_d)} x_ad − Σ_{a∈δ⁻(s_d)} x_ad = 2,  d ∈ D    (7c)
  x_{e'd} ≤ u*_e,  x_{e''d} ≤ u*_e,  e ∈ E    (7d)
  x_ad ≥ 0,  a ∈ A.    (7e)

The obtained optimal paths p_d(x*), q_d(x*) will be elementary and disjoint for each given d ∈ D. Note that the pair of paths (p_d(x*), q_d(x*)) for any given d ∈ D does not have to be link-disjoint with respect to the fixed fiber links e ∈ R, since the fixed OF links are assumed to be 100% reliable.

IV. CONCLUSIONS
In the paper we have formulated a preliminary model for designing robust topologies for FSO networks. The network is composed of a mixture of optical wireless links and optical fiber links. The objective is deployment cost minimization. The model can guarantee a certain degree of robustness by designing two-connected network graphs. However, it will need to be extended to cope, for example, with heavy weather conditions (if the weather condition is bad, like a heavy fog, many OW links will fail).
For that, more advanced models should be elaborated to achieve a higher degree of robustness.

REFERENCES
[1] S. V. Kartalopoulos, Free Space Optical Networks for Ultra-Broad Band Services. Springer-Verlag.
[2] S. Hranilovic, Wireless Optical Communication Systems. Springer.
[3] A. Kashyap, K. Lee, M. Kalantari, S. Khuller, and M. Shayman, "Integrated topology control and routing in wireless optical mesh networks," Computer Networks, 51(15), October.
[4] X. Cao, "An integer linear programming approach for topology design in OWC networks," in IEEE GLOBECOM Workshops 2008, pages 1-5, Dec.
[5] I. K. Son and S. Mao, "Design and optimization of a tiered wireless access network," in IEEE INFOCOM 2010, pages 1-9, March.
[6] H. Willebrand and B. S. Ghuman, Free Space Optics: Enabling Optical Connectivity in Today's Networks. Sams Publishing, 2002.

Mixed integer programming model of MCCA-aware acknowledging in Wireless Mesh Networks
Mateusz Żotkiewicz and Michał Pióro
Institute of Telecommunications, Warsaw University of Technology, Nowowiejska 15/19, Warszawa, Poland
Department of Electrical and Information Technology, Lund University, Box 118, S Lund, Sweden

Abstract—The paper investigates advantages of using an MCCA-aware acknowledging system in IEEE 802.11s Wireless Mesh Networks (WMN). The MCCA (MCF controlled channel access) protocol allows for reserving time slots for a point-to-point transmission over a wireless link. The protocol assures that the transmission will not be hindered by any other device that uses MCCA, by preventing all neighbors of both sender and receiver from accessing the medium or accepting reservations during the reserved time window. Communication in the reserved time window is bidirectional, as it consists of data frames sent by the sender and ACK frames sent by the receiver. In the paper, benefits of using separate time window reservations for the data frames and the ACK frames are presented and confirmed numerically.

I. INTRODUCTION
Mesh networking in wireless communications is a way to increase network capacity, for example with respect to cellular networks, by using short-distance radio links of relatively high capacity and establishing point-to-point connections on multi-hop routes. In this way, packets from an origin are relayed by intermediate nodes (mesh routers) on their way to a destination. Today, wireless mesh networks (WMN) are mainly used for providing affordable Internet access to communities of users in metropolitan areas (see [1], [2]). A typical WMN is based on the IEEE 802.11 family of Wi-Fi standards and consists of a set of fixed mesh nodes, routers and Internet gateways, interconnected by radio links.
The mesh routers serve mesh clients (who are either fixed or mobile) and at the same time perform the packet relay function. A WMN is a cost-efficient, off-the-shelf networking solution that provides bandwidth in the range of up to hundreds of Mbps. For popular surveys on WMN see [5], [6]. The story of the IEEE 802.11 Wireless Local Area Networks (WLAN) dates back to 1997, when the first (2 Mbps) standard was ratified. Two years later, in 1999, WLAN emerged thanks to the IEEE 802.11b standard that revolutionized the field. Although it provided only 11 Mbps of raw capacity and exhibited some serious security threats, the so introduced WLAN concept joined the mainstream, and the IEEE 802.11-based Wi-Fi brand started to earn its name. The concept was gradually upgraded. In 2003, IEEE released the 802.11g standard that increased the raw capacity of WLAN to 54 Mbps. Next, in 2009, the 802.11n standard was published and the raw transmission capacity was increased to 600 Mbps. Meanwhile, wireless mesh networking standards were being debated. In September 2003, a study group was formed to amend 802.11 for mesh networking. The amendment was to define how wireless devices should interconnect to create a WLAN mesh network, which could be used for static topologies and ad-hoc networks. The standard IEEE 802.11s Mesh Networking was finally released in August. It contains, among others, a definition of the MCCA mechanism. In this paper we show how an appropriate integration of the features of the two IEEE standards, 802.11s and 802.11n, can increase the capacity of a network. The paper is organized as follows. Section II presents the technical background by briefly describing the features of 802.11s and 802.11n relevant for this paper. In Section III, the main concept of the paper is described in detail. The concept is tested using the scenarios presented in Section IV. The scenarios are followed by a mathematical model, described in Section V, used to verify the validity of the concept. In Section VI, numerical results are presented.
They are followed by conclusions given in Section VII.

II. TECHNOLOGY BACKGROUND
Two IEEE standards are crucial for this research. The first of them, 802.11n, is the latest amendment to the WLAN standard that introduces improvements aimed at increasing the throughput. The second, 802.11s, is another amendment to the WLAN standard that specifies the way wireless devices should cooperate in order to create a wireless mesh network (WMN).

A. IEEE 802.11n Enhancements for Higher Throughput
Document [3], also referred to as IEEE 802.11n Enhancements for Higher Throughput, amends the IEEE 802.11 WLAN standard. It aims at improving network throughput over the two previously used 802.11a and 802.11g standards.
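As a rough sanity check (an assumption-laden decomposition, not text from the standard itself), the 54 → 600 Mbps jump can be reproduced from the headline 802.11n features: wider channels with more subcarriers, a higher coding rate, a shorter guard interval, and four spatial streams.

```python
# Hypothetical decomposition of the 802.11n peak-rate gain; the factors
# below come from the features named in this section, not from the
# standard's own rate tables.
legacy_rate = 54.0            # 802.11a/g peak PHY rate [Mbps]

subcarrier_gain = 108 / 48    # 40 MHz channel: 48 -> 108 data subcarriers
coding_gain = (5 / 6) / (3 / 4)   # FEC coding rate 3/4 -> 5/6
gi_gain = 4.0 / 3.6           # OFDM symbol 4.0 us -> 3.6 us (GI 800 -> 400 ns)
stream_gain = 4               # four MIMO spatial streams

n_rate = legacy_rate * subcarrier_gain * coding_gain * gi_gain * stream_gain
print(round(n_rate))          # ≈ 600 Mbps
```

Multiplying the four factors indeed yields roughly an order-of-magnitude improvement.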

The throughput is increased by approximately an order of magnitude (from 54 Mbps to 600 Mbps), mostly thanks to two features: utilization of 40 MHz channels and employment of multiple-input multiple-output (MIMO) antennas. The former allows for increasing the number of subcarriers to 108, thus more than doubling the throughput. The latter specifies how up to four MIMO antennas can be used in order to transmit four spatial streams simultaneously, thus quadrupling the throughput. Other, less crucial improvements presented in 802.11n are: increasing the number of subcarriers in the basic 20 MHz channel from 48 to 52, increasing the FEC coding rate from 3/4 to 5/6, and reducing the Guard Interval between frames from 800 ns to 400 ns. All those improvements are designed to increase the raw throughput. At the same time, the standard modifies the MAC level in order to increase the higher layer throughput. The main modification in this respect (the most important from the viewpoint of this research) is block acknowledgement. The idea was timidly introduced in 802.11e, and became obligatory in 802.11n. In legacy 802.11a/g equipment, each MAC frame is immediately acknowledged by a receiver. The 802.11n standard adds a block acknowledgement mechanism in which multiple frames can be acknowledged by a single frame. This limits the wait time between consecutive frame transmissions. Moreover, 802.11n can also send multiple frames per single access to the medium by combining frames together into one larger frame. There are two forms of frame aggregation: Aggregated MAC Protocol Data Unit (A-MPDU) and Aggregated MAC Service Data Unit (A-MSDU). The former queues a number of MAC frames and sends them in a row using a single PHY preamble, while the latter merges a number of frames into one frame with a single MAC header. All the block-ack approaches are depicted in Figure 1. The approaches, from top to bottom, are: legacy mechanism, block acknowledging, A-MPDU, and A-MSDU. The first is the legacy mechanism of immediate acknowledging of each single MAC frame used in 802.11a/g. The second presents the pure block acknowledging without applying any frame aggregation techniques. The third aggregates a number of MAC frames and sends them in a row using only one procedure of medium accessing. The last approach aggregates data from a number of MAC frames and creates one bigger MAC frame. Details of the IEEE 802.11n standard can be found in [3]. They are summarized for instance in [17] and [10].

(Fig. 1: Different acknowledgement strategies.)

B. IEEE 802.11s Mesh Networking
Document [4], also referred to as IEEE 802.11s Mesh Networking, amends the IEEE 802.11 WLAN standard. It defines the MAC procedures that are necessary for wireless multi-hop communication to support wireless LAN mesh topologies. A typical IEEE 802.11s-based WMN architecture can be seen in Figure 2. The core components of the WMN that form its infrastructure/backbone are mesh stations (MPs). They are responsible for relaying traffic. They can relay traffic of either other MPs or legacy nodes (STAs). Obviously, they can also send their own traffic. Legacy STAs are connected to MAPs (specialized MPs that also act as access points for STAs) and follow the principle of a generic wireless local area network node. Neither STAs nor MAPs are depicted in the figure, as this functionality is out of the scope of this research. Other specialized MPs are responsible for connecting a WMN to an external network (usually the Internet). They are called mesh point portals or gateways, and are abbreviated MPP. In the figure they are depicted by squares, and are connected to the Internet using a different technology than 802.11, e.g., Ethernet (thick lines in the figure). All MPs in a network discover their neighbors using a beaconing mechanism. Two stations are considered neighbors if they can successfully send frames to each other without using any relaying station.
In such a situation we say that a mesh link exists between the MPs. In the figure, mesh links are depicted by lines, both solid and dashed. By the neighborhood of a station we understand the set of all its neighbors. IEEE 802.11s defines a default routing protocol, which is the Hybrid Wireless Mesh Protocol (HWMP). The protocol can operate in two modes: reactive and proactive. The first initiates route discovery mechanisms only if needed, while the second discovers routes periodically and maintains paths to all the roots in a network. In our research we assume that MPs do not communicate with each other directly, but solely with the MPPs they are assigned to. The assignment is a result of setting MPPs as roots and running the HWMP protocol in the proactive mode. We also assume that the network operating conditions are relatively stable, i.e., after a short period of time needed by HWMP to converge, all the routes in the network are known and are not subject to changes. For the sake of clarity of further considerations, in Figure 2 the mesh links that have been selected by the routing protocol are depicted by solid lines, while the other mesh links are depicted by dashed lines. IEEE 802.11s inherits from 802.11 the specification of the mandatory channel access method, EDCA. However, the performance of EDCA decreases dramatically in a multi-hop environment due to the hidden stations effect [16]. In order to avoid

it, the RTS/CTS mechanism is introduced.

(Fig. 2: Typical IEEE 802.11s-based WMN architecture.)

The mechanism works effectively in a single-hop environment because all stations hear CTS frames transmitted by the access point and collisions of the data frames are completely avoided. Still, in multi-hop networks there is no guarantee that this condition is met. Therefore, in this case the RTS/CTS exchange is not sufficient to protect radio transmissions. That is why the MCF controlled channel access (MCCA) protocol was introduced in 802.11s (MCF stands for "mesh coordination function"). It was crafted for operation in multi-hop networks by allowing mesh stations to reserve time intervals for data transmissions. The transmitting and receiving MPs advertise their reservations to their neighbors. The information is then relayed to the neighbors of the neighbors in order to ensure that all MPs in the two-hop neighborhood of the transmitter and receiver become aware of the reservation. As a result, the neighbors of a sender and a receiver should refrain from both transmitting and accepting reservations during the reserved time interval. Details of the IEEE 802.11s standard can be found in [4]. They are also summarized, for instance, in [11].

III. THE IDEA
The main rule of MCCA is that the whole neighborhood of a transmitter and a receiver should refrain from both transmitting and accepting reservations during the reserved time interval. For example, consider the network topology depicted in Figure 2. If station A wants to transmit to station C, it should first reserve a time interval for the transmission. When A and C agree on a specific time interval, they both send information about the reservation to their neighbors, i.e., station G1 (the neighbor of A) and stations B, E, and D (the neighbors of C). From then on, these stations should refrain from transmitting and accepting reservations during this particular time interval.
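The reservation rule just described (the one-hop neighborhoods of both sender and receiver must stay silent) can be sketched as a small set computation. The adjacency below loosely mirrors the Figure 2 example and is an assumption for illustration.

```python
# Toy neighborhood computation for the MCCA reservation rule; the
# adjacency is a hypothetical reading of the Figure 2 topology.
neighbors = {
    "A": {"C", "G1"},
    "C": {"A", "B", "E", "D"},
    "G1": {"A", "B"},
    "B": {"C", "G1"},
    "E": {"C"},
    "D": {"C"},
}

def blocked_by_reservation(sender: str, receiver: str) -> set:
    """Stations that must refrain from transmitting or accepting
    reservations during the interval reserved for sender -> receiver."""
    return (neighbors[sender] | neighbors[receiver]) - {sender, receiver}

print(sorted(blocked_by_reservation("A", "C")))
```

For a reservation A → C this yields exactly the stations named in the text: B, D, E and G1.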
Notice that if any of those stations had already accepted a reservation for this particular time interval earlier, then either A or C would have been informed about the situation using the mechanism described above; thus they either would not initiate the reservation (if station A had been informed) or would not accept it (if station C had been informed). One may argue that the mechanism is too restrictive and that there is no point in preventing neighbors of A from transmitting during a time interval when A is also transmitting. For instance, when station A is transmitting to station C, then station G1 should be allowed to simultaneously transmit to station B, because these transmissions do not interfere, as station B does not hear station A and station C does not hear station G1. The same situation happens with a receiving node, i.e., there is no point in preventing neighbors of a receiving station from accepting other reservations for the reserved time interval. The reasons for such harsh restrictions are neatly explained in [13], where the authors show that although the data frames sent on one link do not interfere with the data frames sent on the other link, they can still be disrupted by ACK frames sent on the other link. For instance, if station A transmits to station C, and simultaneously station G1 transmits to station B, then ACK frames sent from station C to station A will interfere with data frames sent from station G1 to station B. The main idea of this paper, called MCCA-aware acknowledging, is to use the notion of block acknowledging mentioned in Section II-A in order not to send ACK frames during a time interval reserved for a transmission from a sender to a receiver, but instead to send them during a time interval reserved for a transmission in the opposite direction. The idea allows for loosening the restrictions in reserving time intervals, thus increasing the throughput of a network. Assume that only A-C and G1-B communication is of interest.
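For this A-C / G1-B example, the potential gain can be illustrated with a toy airtime computation. The compatible sets and unit link rates below are illustrative assumptions mirroring the interference pattern just described.

```python
# Toy airtime computation for the A-C / G1-B example; unit link rates
# and the compatible sets are illustrative assumptions.
transmissions = ["A->C", "C->A", "G1->B", "B->G1"]

# MCCA-unaware: the two mesh links conflict, so every transmission is
# serialized into its own time share.
unaware_sets = [{t} for t in transmissions]

# MCCA-aware: ACKs move to the opposite-direction reservation, so
# same-direction transmissions on the two links can share a slot.
aware_sets = [{"A->C", "G1->B"}, {"C->A", "B->G1"}]

def per_transmission_rate(compatible_sets):
    """Equal time share per compatible set; unit rate while active."""
    share = 1.0 / len(compatible_sets)
    return {t: share * sum(t in s for s in compatible_sets)
            for t in transmissions}

print(per_transmission_rate(unaware_sets))   # 0.25 each
print(per_transmission_rate(aware_sets))     # 0.50 each
```

With equal demand volumes, every transmission's share goes from 1/4 to 1/2 of the airtime, i.e., the overall throughput doubles.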
In standard 802.11s devices, simultaneous transmission on those two mesh links is forbidden. However, using MCCA-aware acknowledging, station A can transmit to station C when station G1 is transmitting to station B. What is more, station C can transmit to station A when station B is transmitting to station G1. If the volumes of the demands are equal, i.e., the volume of the demand from station A to station C is equal to the volumes of the demands from C to A, from G1 to B, and from B to G1, then the overall throughput can be doubled thanks to MCCA-aware acknowledging.

IV. CONSIDERED SCENARIOS
The above concept will be tested in the following settings. We assume that the considered WMNs are used to provide Internet access to a group of MPs. The MPs do not communicate with each other directly, thus the only demands in a network are between the MPs and the Internet gateways (represented by MPPs). In our model we assume that the operating conditions are stable, so that the network topology and the set of available mesh links do not change. What is more, the routes selected by HWMP are also constant. In this research it is assumed that a transmission scheduling protocol is capable of (approximately) assuring the optimized throughput. That is, we assume that the time slots will be divided between transmissions in such a way that the overall performance of the network is optimized. Many different ideas are considered to meet this goal, ranging from idealistic scenarios with a centralized entity calculating the whole

scheduling [7], [15], to distributed approaches presented, for instance, in [12]. Our research shows how much it is possible to gain when an MCCA-aware acknowledging system is used in a network instead of the standard MCCA-unaware acknowledging system. In other words, we compare a scenario where non-directional mesh links have to be reserved (due to restrictions resulting from using MCCA-unaware acknowledging) with a scenario where directional mesh links can be reserved. Still, we assume that in both scenarios the performance of the other parts of the whole reservation mechanism approaches an optimum. MP-MPP communication in WMNs is highly asymmetric, and the majority of traffic is directed from MPPs to MPs (like in LTE). In the MCCA-unaware acknowledging scenario the asymmetry has no impact on the performance of a network, as all mesh links can be treated as non-directional. However, in the case of MCCA-aware acknowledgements the asymmetry should be taken into account. Following [8], we assume that each MP receives in total 4 times more bits than it sends. The metric that will be used to evaluate the scenarios is the throughput obtained by the worst served MP in a network.

V. OPTIMIZATION MODEL
In radio networks, two stations A and B can communicate if the power of the signal received from station A at station B, compared to the noise at station B, is greater than an acceptable signal-to-noise ratio (SNR) threshold for the applied modulation and coding scheme (MCS). In WMN networks the radio resources are shared between all MPs. Thus, typically not all MPs can transmit simultaneously but, of course, simultaneous transmissions by subsets of stations are still possible. In such a case, however, not only the noise can disrupt a transmission but also the interference induced by other MPs. Therefore, in the context of WMN we use the notion of the signal-to-interference-plus-noise ratio (SINR) instead of SNR.
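A minimal SINR computation under a simple distance-based propagation model (the path-loss exponent of 4 assumed later in the numerical experiments); all positions, powers and the noise floor below are illustrative assumptions.

```python
import math

def rx_power(p_tx: float, dist: float, exponent: float = 4.0) -> float:
    """Received power under a pure distance^(-exponent) path-loss model."""
    return p_tx / dist ** exponent

def sinr(p_signal: float, interference: list, noise: float) -> float:
    """Signal power over noise plus the summed interfering powers."""
    return p_signal / (noise + sum(interference))

# Illustrative numbers (assumptions, not measurements from the paper):
p_tx, noise = 1.0, 1e-6
wanted = rx_power(p_tx, dist=10.0)            # intended sender, 10 units away
interferers = [rx_power(p_tx, dist=30.0)]     # one concurrent sender, 30 units away

sinr_db = 10.0 * math.log10(sinr(wanted, interferers, noise))
print(round(sinr_db, 1))
```

Comparing sinr_db against the MCS thresholds of Table I would then pick the fastest feasible modulation for the link.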
SINR is understood as the ratio of the received power to the sum of the noise and the received interference. In this paper we assume a common interference model where two transmissions interfere only if the sender in the first transmission can successfully transmit to the receiver in the second transmission, or the sender in the second transmission can successfully transmit to the receiver in the first transmission. More sophisticated interference models can be found, for instance, in [7], where the signal power is taken into account and all concurrent transmissions interact with each other. A group of distinct MPs that can transmit simultaneously will be called a compatible set. In other words, a compatible set defines an admissible transmission pattern, i.e., a group of simultaneous link transmissions such that the corresponding SINR requirements are satisfied. Notice that the only difference between the MCCA-unaware and MCCA-aware acknowledging scenarios lies in the lists of available compatible sets for the two scenarios, i.e., the MCCA-unaware scenario can use only a subset of the compatible sets available to the MCCA-aware scenario. In order to formally present the problem we introduce the following notation.

Indices:
  e — mesh link
  i — compatible set
  d — demand
Sets:
  E — list of links
  I — list of compatible sets
  D — list of demands
  Q_e — list of demands that use link e for downstream
  R_e — list of demands that use link e for upstream
Constants:
  U_ei — 1 if link e belongs to compatible set i; 0 otherwise
  B_e — maximum transmission bit rate at link e [Mbps]
  γ — upstream/downstream fraction [Mbps/Mbps]
Variables:
  f — minimum flow [Mbps]
  f_d — flow of demand d [Mbps]
  z_i — fraction of time used by compatible set i
  c_e — average transmission bit rate at link e [Mbps]

Following [15], we formally state the problem in the form of formulation (1) below.
It is a non-compact linear programming (LP) formulation based on the list I of compatible sets and continuous variables z_i that define the fraction of time that is utilized by the transmission pattern defined by compatible set i ∈ I. Notice that we have assumed that operating conditions are stable, thus routes are constant for each demand. This feature results in a situation where all routing information is stored in the sets binding demands with the mesh links utilized by them, i.e., Q_e and R_e.

              max f    (1a)
  [α]         Σ_{i∈I} z_i = 1    (1b)
  [λ_d ≥ 0]   f ≤ f_d,  d ∈ D    (1c)
              c_e = Σ_{i∈I} B_e U_ei z_i,  e ∈ E    (1d)
  [π_e ≥ 0]   Σ_{d∈Q_e} f_d + Σ_{d∈R_e} γ f_d ≤ c_e,  e ∈ E    (1e)
              z ≥ 0    (1f)

The objective is to maximize the minimum flow f. Constraint (1b) assures that the whole available time is used, while (1c) sets f to the minimum over all f_d, d ∈ D. Equations (1d) define the capacities of the links (the constant U_ei determines whether compatible set i uses mesh link e, while the transmission bit rate obtained on the mesh link, when it is actually used, is equal to B_e). Constraint (1e) assures that the obtained average capacities (transmission bit rates) of the mesh links satisfy the demands, while (1f) assures the non-negativity of the variables. The entities shown within the brackets denote the dual variables. Note that the non-compactness of formulation (1) is implied by the exponential number of potential compatible sets. The dual problem (for the notion of duality see for example [14]) corresponding to the primal problem (1) with the current list of compatible sets I is as follows:

min α   (2a)
∑_{d∈D} λ_d = 1   (2b)
∑_{e∈E} π_e B_e U_ei ≤ α,  i ∈ I   (2c)
λ_d ≤ ∑_{e∈p(d)} π_e,  d ∈ D   (2d)
λ, π ≥ 0   (2e)

where p(d) denotes the set of mesh links on the route of demand d.

TABLE I: Modulation and coding schemes

modulation | coding | throughput | SINR
BPSK       |  1/2   |  7.2 Mbps  |  3.5 dB
QPSK       |  1/2   | 14.4 Mbps  |  6.6 dB
QPSK       |  3/4   | 21.7 Mbps  |  9.5 dB
16-QAM     |  1/2   | 28.9 Mbps  | 12.8 dB
16-QAM     |  3/4   | 43.3 Mbps  | 16.2 dB
64-QAM     |  2/3   | 57.8 Mbps  | 20.3 dB
64-QAM     |  3/4   | 65.0 Mbps  | 22.1 dB

Let α*, π*, λ* be an optimal solution of the dual problem (2). In order to generate compatible sets the following notation is used.

Constant:
Z_ee' – = 1 if links e and e' cannot be used simultaneously; = 0, otherwise

Variable:
Y_e – = 1 if link e belongs to the compatible set; = 0, otherwise

Notice that the values of Z_ee' differ between the two considered scenarios, but they are not totally unrelated. If we assume that constants Z'_ee' result from applying the MCCA-unaware acknowledging mechanism and constants Z''_ee' result from applying its MCCA-aware counterpart, then Z''_ee' ≤ Z'_ee' for all e, e' ∈ E. The problem of generating compatible sets that can improve the current primal solution can be formulated as the following binary programming problem:

max ∑_{e∈E} π*_e B_e Y_e   (3a)
Y_e + Y_e' + Z_ee' ≤ 2,  e, e' ∈ E   (3b)
Y ∈ {0, 1}   (3c)

The objective is to maximize a weighted sum of active mesh links. Constraint (3b) prevents two interfering mesh links from being activated simultaneously. If the objective value of an optimal solution Y* for (3) is greater than α*, then the generated compatible set defined by {e ∈ E : Y*_e = 1} is added to I, as it may improve the optimal primal solution. More sophisticated methods of generating compatible sets that take higher-level interference into account can be found, for instance, in [15].

VI. NUMERICAL RESULTS

We tested numerically the scenarios of Section IV, using the optimization model presented in Section V, on thirty different network topologies (exemplary networks can be found in Figure 3), generated randomly in the following way. First, stations were randomly located inside a square.
Then, the value of the transmitting power (the same for all MPs and MPPs) was set to the minimum value that keeps a network either connected or resilient. The first case, i.e., connected, means that there exists a path consisting of mesh links from each MP to any MPP; still, MPPs do not have to be connected to each other. The second case means that after a single mesh link failure the network remains connected. In other words, there exists a link-disjoint pair of paths from each MP to any MPP. Notice that the paths in the pair do not have to lead to the same MPP, but can end at two different MPPs.

(a) Test case #13  (b) Test case #14
Fig. 3: Examples of considered network topologies.

We followed [9] and assumed a path loss exponent of 4 while calculating the power received at each station and establishing mesh links. Each MP uses the shortest path with respect to the number of hops in order to reach an MPP. If more than one path has the same length, then the next hop is selected based on priorities given to stations. We assume that each link uses the fastest available MCS from Table I that satisfies the SINR constraint. Useful information about the considered topologies, together with results obtained for the scenarios discussed in Section III, is presented in Table II. The first column of the table contains the identification numbers of the test cases. The next four columns contain topology information. Basic information, the number of MPPs in a network and the number of MPs (excluding MPPs), is given in the second and third columns. The next column describes the resilience of the network: either "-", standing for a connected network, or "+", standing for a resilient network. The topology information ends with the number of mesh links in the fifth column. The following three columns contain the obtained results.
They are: the minimum throughput A obtained for the MCCA-unaware scenario, the minimum throughput B obtained for the MCCA-aware scenario, and the gain expressed in percent and understood as (B − A)/A × 100%. As clearly seen in Table II, applying the MCCA-aware acknowledging mechanism can significantly increase the raw throughput of a WMN. Our experiments show that the average gain is above 50%, and that it increases from an average of 24% for small, 10-station networks to an average of 65% for larger, 50-station networks.
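The results above come from iterating formulation (1) and the pricing problem (3). Below is a minimal sketch of that column-generation loop, assuming a hypothetical two-link instance (link rates borrowed from Table I for flavor; the conflict matrix, demands and γ are invented) and using SciPy's LP solver with brute-force pricing; it is an illustration, not the solver used for the paper's test cases.

```python
import numpy as np
from scipy.optimize import linprog
from itertools import product

# Toy instance: two interfering links, one downstream demand per link.
B = [7.2, 14.4]                       # per-link MCS rates [Mbps]
Z = [[0, 1], [1, 0]]                  # Z[e][e'] = 1 iff links e, e' conflict
Q = {0: [0], 1: [1]}                  # Q_e: demands using link e downstream
R = {0: [], 1: []}; gamma = 0.1       # R_e empty here, so gamma is inert
nE = nD = 2

def solve_master(I):
    """Restricted LP (1) over sets I; x = [f, f_d..., z_i...]."""
    nI = len(I); n = 1 + nD + nI
    c = np.zeros(n); c[0] = -1.0                       # maximize f
    A_ub, b_ub = [], []
    for d in range(nD):                                # (1c): f - f_d <= 0
        r = np.zeros(n); r[0], r[1 + d] = 1.0, -1.0
        A_ub.append(r); b_ub.append(0.0)
    for e in range(nE):                                # (1d)+(1e) combined
        r = np.zeros(n)
        for d in Q[e]: r[1 + d] += 1.0
        for d in R[e]: r[1 + d] += gamma
        for i, s in enumerate(I):
            if e in s: r[1 + nD + i] -= B[e]
        A_ub.append(r); b_ub.append(0.0)
    A_eq = np.zeros((1, n)); A_eq[0, 1 + nD:] = 1.0    # (1b): sum z = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq,
                  b_eq=[1.0], bounds=[(0, None)] * n)
    pi = -res.ineqlin.marginals[nD:]                   # duals of (1e)
    alpha = -res.eqlin.marginals[0]                    # dual of (1b)
    return -res.fun, pi, alpha

def price(pi):
    """Pricing problem (3) by enumeration: best conflict-free link set."""
    best, best_set = 0.0, frozenset()
    for Y in product((0, 1), repeat=nE):
        if any(Y[e] and Y[g] and Z[e][g] for e in range(nE) for g in range(nE)):
            continue                                   # violates (3b)
        v = sum(pi[e] * B[e] * Y[e] for e in range(nE))
        if v > best:
            best, best_set = v, frozenset(e for e in range(nE) if Y[e])
    return best, best_set

I = [frozenset({0})]                  # start with a single compatible set
while True:
    f_opt, pi, alpha = solve_master(I)
    val, new_set = price(pi)
    if val <= alpha + 1e-9 or new_set in I:
        break                         # no improving compatible set: optimal
    I.append(new_set)
print(round(f_opt, 3), sorted(map(sorted, I)))
```

On this instance the loop starts from the single set {0}, prices in {1}, and stops once no set beats α*; time is split 2:1 between the two sets, giving a min flow of 4.8 Mbps.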

TABLE II: Numerical results (columns: test case #; topology: MPPs, MPs, res., L; throughput [Mbps]: unaware, aware; gain [%])

VII. CONCLUSION

In the paper we investigated the advantages of using an MCCA-aware acknowledging system in IEEE 802.11s-based WMNs. The MCCA protocol allows reserving time slots for a point-to-point transmission over a wireless link. The protocol assures that the transmission will not be hindered by any other device that uses MCCA, by preventing all neighbors of both the sender and the receiver from accessing the medium or accepting reservations during the reserved time window. Communication in the reserved time window is bidirectional, as it consists of data frames sent by the sender and ACK frames sent by the receiver. In the paper, the benefits of using separate time window reservations for the data frames and the ACK frames were presented and confirmed numerically. We showed that using the MCCA-aware acknowledging mechanism can lead to a significant increase in network capacity, reaching 50% on average. What is more, we showed that the gain increases with the size of a network.

REFERENCES
[1] Bay area wireless users group.
[2] Seattle wireless.
[3] IEEE Standard for Information technology, Telecommunications and information exchange between systems, Local and metropolitan area networks, Specific requirements, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, Amendment 5: Enhancements for Higher Throughput. IEEE Std 802.11n-2009 (Amendment to IEEE Std 802.11-2007 as amended by IEEE Std 802.11k-2008, IEEE Std 802.11r-2008, IEEE Std 802.11y-2008, and IEEE Std 802.11w-2009), pages 1-502, 2009.
[4] IEEE Standard for Information Technology, Telecommunications and information exchange between systems, Local and metropolitan area networks, Specific requirements, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, Amendment 10: Mesh Networking.
IEEE Std 802.11s-2011 (Amendment to IEEE Std 802.11-2007 as amended by IEEE 802.11k-2008, IEEE 802.11r-2008, IEEE 802.11y-2008, IEEE 802.11w-2009, IEEE 802.11n-2009, IEEE 802.11p-2010, IEEE 802.11z-2010, IEEE 802.11v-2011, and IEEE 802.11u-2011), pages 1-372, 2011.
[5] I.F. Akyildiz and X. Wang. A survey on wireless mesh networks. IEEE Communications Magazine, 43(9):23-30, 2005.
[6] R. Bruno, M. Conti, and E. Gregori. Mesh networks: commodity multihop ad hoc networks. IEEE Communications Magazine, 43(3), 2005.
[7] A. Capone, G. Carello, I. Filippini, S. Gualandi, and F. Malucelli. Routing, scheduling and channel assignment in WMN networks: optimization, models and algorithms. Ad Hoc Networks, (8), 2010.
[8] A. Ghosh, R. Ratasuk, B. Mondal, N. Mangalvedhe, and T. Thomas. LTE-advanced: next-generation wireless broadband technology [Invited Paper]. IEEE Wireless Communications, 17(3):10-22, June 2010.
[9] A. Goldsmith. Wireless Communications. Cambridge University Press, 2005.
[10] J. S. Haugdahl. Inside 802.11n wireless LANs: Practical insights and analysis. Technical report, Bitcricket. Available: learningnetwork.cisco.com.
[11] G. Hiertz, D. Denteneer, S. Max, R. Taori, J. Cardona, L. Berlemann, and B. Walke. IEEE 802.11s: The WLAN mesh standard. IEEE Wireless Communications, Feb. 2010.
[12] U. Körner, A. Hamidian, M. Pióro, and C. Nyberg. A distributed MAC scheme to achieve QoS in ad hoc networks. Annals of Telecommunications, 66, 2011.
[13] A. Krasilov, A. Lyakhov, and A. Safonov. Interference, even with MCCA channel access method in IEEE 802.11s mesh networks. In IEEE 8th International Conference on Mobile Adhoc and Sensor Systems, 2011.
[14] M. Pióro and D. Medhi. Routing, Flow, and Capacity Design in Communication and Computer Networks. Morgan Kaufmann Publishers, 2004.
[15] M. Pióro, M. Żotkiewicz, B. Staehle, D. Staehle, and D. Yuan. On max-min fair flow optimization in wireless mesh networks. Ad Hoc Networks, 2012.
[16] S. Xu and T. Saadawi. Does the IEEE 802.11 MAC protocol work well in multihop wireless ad hoc networks? IEEE Communications Magazine, 39(6), August 2001.
[17] A. Yarali and B.
Ahsant. 802.11n: the new wave in WLAN technology. In Proceedings of the 4th International Conference on Mobile Technology, Applications, and Systems and the 1st International Symposium on Computer Human Interaction in Mobile Technology, New York, NY, USA, 2007. ACM.

ACKNOWLEDGMENT
Most of the work presented in this paper has been funded by the National Science Center (Poland) under grant 2011/01/B/ST7/02967 "Integer programming models for joint optimization of link capacity assignment, transmission scheduling, and routing in fair multicommodity flow networks". The work has also been partly supported by the ELLIIT Excellence Center at Linköping-Lund in Information Technology (Sweden).

Performance Evaluation of Control Plane Functions in ASON/GMPLS Architecture
Sylwester Kaczmarek a, Magdalena Młynarczuk b, Paweł Zieńko c
Department of Teleinformation Networks, Gdańsk University of Technology, Gdańsk, Poland
a kasyl@eti.pg.gda.pl, b magdam@eti.pg.gda.pl, c pawel.zienko@interia.pl

Abstract: It is assumed that the demands of the information society can be satisfied by the ASON/GMPLS architecture, understood as an Automatically Switched Optical Network (ASON) with Generalized Multi-Protocol Label Switching (GMPLS) protocols. The introduction of this solution must be preceded by a performance evaluation to guarantee that these expectations are met. Practical realizations are expensive, so simulation models are necessary to examine the standardized propositions. This paper is devoted to simulation results for ASON/GMPLS architecture control plane functions obtained with the OMNeT++ discrete event simulator. The authors explore the call/connection set-up time and the connection release time in a single domain of the ASON/GMPLS architecture.

Index Terms: ASON; GMPLS; simulation model; call control; connection control

I. INTRODUCTION
Continuous information growth connected with sophisticated applications generates the need for new telecommunication network architectures based on optical solutions. The ITU-T Automatically Switched Optical Network (ASON) [1] concept with Generalized Multi-Protocol Label Switching (GMPLS) [2] protocols has a chance to fulfill information society requirements. This solution is named ASON/GMPLS. The ASON/GMPLS control plane is composed of different components that provide specific functions (including routing and signaling). The main purpose of the ASON/GMPLS control plane is to facilitate fast and efficient configuration of connections within a transport layer network, supporting both switched and soft permanent connections using GMPLS protocols such as RSVP-TE for signaling and OSPF-TE for routing.
The basic assumption of the ASON control plane is the separation of call control from connection control. This separation allows control plane names to be completely separate from transport plane names. The ASON architecture itself is only a concept; the advantages of this architecture are presented in [3]. The reference ASON control plane architecture describes the functional components, including abstract interfaces and primitives. The recommendation presents the interactions between call controller components and the interactions among components during connection set-up. It also defines a functional component that transforms the abstract component interface into protocols. For the time being, the standardization does not specify all the protocol details needed for implementation. Using GMPLS protocols, or even mechanisms of these protocols, opens the way to ASON/GMPLS realization. Practical realizations have been made only for simple network architectures; for complex research, simulation models are needed. The aim of the paper is to present a series of simulation results showing the performance of ASON/GMPLS control plane functions supporting switched connections, and to discuss the problem of call/connection set-up time and connection release time in a single domain. The work on the simulation model was preceded by practical realizations of the ASON/GMPLS architecture in a laboratory testbed, presented in [4,5]. The performed tests validated the correct operation of all network elements, including communication procedures and request processing. The same communication procedures are implemented in the simulation model, with respect to ASON/GMPLS standardization and the latest trends in the ITU-T NGN architecture [6]. The paper is organized as follows. General information about the ASON/GMPLS architecture and basic control function scenarios is given in Section II. The ASON/GMPLS simulation model is presented in Section III. Section IV is devoted to the presentation of the performance test results.
Conclusions and an outlook to future work are presented in Section V.

II. BASIC CONTROL PLANE SCENARIOS

A. ASON/GMPLS control plane concept
This section is devoted to the description of the ASON recommendation and the GMPLS protocol mechanisms proposed in ASON/GMPLS. The idea of call and connection control is presented in [1]. The ASON recommendation separates the treatment of call control and connection control. A call is a representation of the service offered to the user of a network, while connections are one of the means by which networks deliver the required services. The ASON/GMPLS control plane is equipped with call and connection components. The components concerned with call service are the Calling/Called Party Controller (CCC) and the Network Call Controller (NCC). The main roles of the CCC are generation of call requests, acceptance or rejection of incoming call requests, and generation of call termination requests. The CCC component is associated with the end of the call. The NCC component provides support for the calling and called party controllers and additionally supports calls at domain boundaries.

Apart from the call components, the ASON/GMPLS control plane is equipped with components involved in connection control: Routing Controller (RC), Protocol Controller (PC), Connection Controller (CC), Link Resource Manager (LRM), and Termination and Adaptation Performer (TAP). As recommended in [1], the Connection Controller is responsible for coordination among the Link Resource Manager, the Routing Controller and other connection controllers for the purpose of set-up, release and modification of connections. The Routing Controller provides routing functions using the GMPLS routing protocol. The Link Resource Manager maintains the network topology. The role of the Protocol Controller is to map the operation of the components in the control plane into messages that are carried by the GMPLS communication protocol between interfaces in the control plane. The Termination and Adaptation Performer holds the identifiers of resources that can be managed using the control plane interfaces. The group of components involved in connection control is considered in further sections as the Control Element (CE). Assuming that the ASON/GMPLS control plane is equipped with two CCCs (CCC 1 and CCC 2), an NCC, three CEs (CE 1, CE 2, CE 3), and that the transport plane is represented by three optical cross-connects (OXC 1, OXC 2 and OXC 3), the ASON/GMPLS architecture is presented in Fig. 1. The CE i element is a representation of the control elements for OXC i. The set-up and release scenarios are performed by control plane components.

B. Call/connection set-up scenario
In this section the authors present a typical call/connection set-up scenario based on [7]. The same scenario is implemented in the simulation model. The scenario is graphically presented with the definition of the times necessary to calculate the call set-up time and the connection set-up time. The basic set-up scenario is presented in Fig. 2. Call set-up requests are sent by CCC 1. The Calling Party Controller

Fig. 2. The set-up scenario for ASON/GMPLS architecture.
CCC 1 sends a call request to the NCC. The NCC component sends a call indication to the Called Party Controller CCC 2. The CCC 2 component, after call confirmation, initiates the connection set-up process by sending a connection request to CE 1. Then communication between the CE elements is performed: signaling RSVP-TE messages PATH and RESV are forwarded according to [7] towards the destination CE 3. After successful connection set-up in the transport plane, CE 1 informs the NCC by sending connection confirmed. Finally, the NCC sends call confirmed to the Calling Party Call Controller. Taking into consideration the call/connection set-up scenario presented in Fig. 2, the call set-up time is defined as the time from sending call request (t 1) up to call confirmed (t 4), while the connection set-up time is defined as the time from sending connection request (t 2) to connection confirmed (t 3).

C. Connection release scenario
The basic release scenario is performed as depicted in Fig. 3. The call connection release time is defined as the time from sending call release (t 5) up to release confirmed (t 8), while the connection release time is defined as the time from sending connection release (t 6) to connection release confirmed (t 7). In the case of connection release, the standardization group distinguishes release scenarios initiated by different call controllers [7]. Path release and Resv release messages are represented by RSVP-TE messages.

Fig. 1. The ASON/GMPLS network architecture.
Fig. 3. The release scenario for ASON/GMPLS architecture.
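These timing definitions reduce to differences over the recorded event timestamps t 1 to t 8. A minimal sketch with hypothetical values (illustrative numbers in seconds, not measurements from the paper):

```python
# Hypothetical recorded timestamps of one call (seconds, invented values):
events = {"t1": 0.000, "t2": 0.004, "t3": 0.021, "t4": 0.025,  # set-up phase
          "t5": 1.000, "t6": 1.003, "t7": 1.010, "t8": 1.012}  # release phase

call_setup_time       = events["t4"] - events["t1"]  # call request -> call confirmed
connection_setup_time = events["t3"] - events["t2"]  # conn. request -> conn. confirmed
call_release_time     = events["t8"] - events["t5"]  # call release -> release confirmed
conn_release_time     = events["t7"] - events["t6"]  # conn. release -> conn. rel. conf.

print(round(call_setup_time, 3), round(connection_setup_time, 3))  # 0.025 0.017
```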

III. SIMULATION MODEL
The ASON/GMPLS simulation model is created in the OMNeT++ environment [8]. It consists of six main functional blocks: control plane, transport plane, call generation, topology and resource information, initial configuration, and measurements. The control plane block consists of functional elements such as: Connection Controller (CC), Routing Controller (RC), Link Resource Manager (LRM), Calling/Called Party Controller (CCC), and Network Call Controller (NCC). The transport plane block, consisting of Optical Cross-Connects (OXCs), is emulated. For each OXC a blocking probability is assumed. Signaling is performed on a separate wavelength. Resource allocation takes into consideration the Routing and Wavelength Assignment (RWA) problem [9]. Routing functions are implemented in accordance with [1,10]. Control plane functions in the simulation model are divided into call control functions and connection control functions. Call control functions are concerned with call processing, while connection control functions are responsible for the set-up and release of connections in the transport plane. The call control plane is not aware of the transport plane topology. The structure of the control plane model is presented in Fig. 4. The structure of a node, consisting of control plane elements and an OXC, is presented in Fig. 5. The model of ASON control plane functions is based on the following assumptions: call control functions are represented by the elements: two Calling/Called Party Controllers (CCC 1, CCC 2), Network Call Controller (NCC), and ids (introduced in the model for call identifier assignment),

Fig. 4. The structure of simulated network (Poland).
Fig. 5. The structure of node Białystok.
Fig. 6. The call control in ASON/GMPLS control plane.
connection control functions are performed by Control Elements (CE); each Control Element is represented by a compound module; a Control Element consists of a Connection Controller (CC), Routing Controller (RC), Link Resource Manager (LRM), and Termination and Adaptation Performer (TAP) (see Fig. 2); the number of CEs is equal to the number of nodes; the mapping of CEs to the transport plane (represented as emulated OXCs) is one-to-one. As depicted in Fig. 4, the generation block is represented by the components: generator high 1, generator low 1, generator high 2 and generator low 2. Generator high 1 and generator low 1 generate requests high (requests with high priority) with a defined distribution and send them towards CCC 1, while generator high 2 and generator low 2 send requests low (requests with low priority) towards CCC 2. CCC 1 and CCC 2 send the received requests to ids, which assigns a unique call identifier (Call ID) to each generated call request. Afterwards the Call ID is placed in the call request and sent to the NCC. The process of call processing is presented in Fig. 6. The topology and resource information block is in charge of storing the control plane topology, the transport plane topology, and the domain allocation (links, distances between nodes). All parameters, including the network topology, are configurable during initial configuration. To make the network conditions realistic, the topology is based on [11]. Due to the initial configuration block we are able to set initial values for: the call generator (distributions of call requests for low/high priority, distribution of connection release for low/high priority), the traffic matrix (coefficient matrix), the measurements and the simulation run (i.a. simulation time limit, warm-up period, event log module recording, seed, call time distribution), the control plane (assignment of source addresses to nodes, assignment of a unique id number to each node), and the transport plane (blocking probability of OXC).

Fig. 7. Mean call set-up time [s].

The measurements are performed in the NCC component and in ids. The ids is responsible for call set-up measurement and call release measurement; the NCC component is responsible for call release measurement and connection release measurement. The times depicted in Fig. 2 and Fig. 3 (t 1 to t 8), necessary to calculate call/connection set-up/release times, are stored in vector files. The simulation execution consists of a warm-up period and n measurement periods. Comprehensive statistical analysis is performed offline based on the vector (.vec) files. Taking into consideration the presented model and the call/connection set-up and release scenarios presented in Section II, the simulation model also allows evaluating the connection loss probability. A connection loss can be caused by a lack of optical resources or by the blocking probability of an OXC.

IV. CONTROL PLANE PERFORMANCE RESULTS AND DISCUSSION
The performance evaluation of control plane functions in ASON/GMPLS is presented by performance results. The simulation scenarios include the measurements of call and connection set-up time and connection release time, which are estimated with a confidence level of 95%.
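Resource allocation in the emulated transport plane must respect the RWA problem mentioned in Section III. A minimal first-fit sketch under the wavelength-continuity constraint is shown below; the node names and the simple strategy are illustrative assumptions, not the simulator's actual algorithm (which follows [9] and the configured routing).

```python
# First-fit RWA sketch: assign the lowest wavelength that is free on
# every link of the path (wavelength-continuity constraint).
N_WAVELENGTHS = 40                       # as in the simulation set-up

occupied = {}                            # link -> set of wavelengths in use

def first_fit(path_links):
    """Return the assigned wavelength, or None when the request is blocked."""
    for w in range(N_WAVELENGTHS):
        if all(w not in occupied.setdefault(l, set()) for l in path_links):
            for l in path_links:         # reserve w on the whole path
                occupied[l].add(w)
            return w
    return None                          # no continuous wavelength: loss

# Hypothetical three-hop path (invented node names):
path = [("Gdansk", "Warszawa"), ("Warszawa", "Lodz"), ("Lodz", "Katowice")]
print(first_fit(path))                   # 0: first request takes wavelength 0
print(first_fit(path))                   # 1: continuity forces the next one
```

A blocked request (return value `None`) corresponds to the "lack of optical resources" component of the connection loss probability discussed below.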
The investigations were performed using the following assumptions:
- total simulation time: 3600 s,
- warm-up period: 200 s,
- 15 measurement intervals,
- exponential distribution of call requests,
- exponential distribution of connection release requests,
- 20% of all generated requests are high priority,
- mean connection duration time:
- blocking probability of OXC: 0.001,
- signaling link capacity: 10 Mb/s,
- wavelength capacity: 1 Gb/s,
- capacity of a single connection request: 2.5 Mb/s, 5 Mb/s, 7.5 Mb/s,
- number of wavelengths per fiber: 40.

In the simulation the Poland topology was assumed, with the shortest path algorithm (Dijkstra) used for routing. This section presents exemplary results based on OMNeT++ simulations.

A. Call/connection set-up results
The results presented in Fig. 7 and Fig. 8 indicate that the mean values of call set-up time and connection set-up time significantly depend on the request intensity, taken as the sum of call/connection set-up requests and connection release requests. In Fig. 7 and Fig. 8, mean values of call/connection set-up time are presented for: all generated call/connection set-up requests, call/connection set-up requests ended successfully, and call/connection set-up requests ended unsuccessfully. An unsuccessful connection set-up request results from a lack of free resources in the emulated transport plane or from the blocking probability of an OXC. Surprisingly, both the call set-up time and the connection set-up time tend to decrease in the range from 300 requests per second to 9700 requests per second, together with a rapid growth of the connection loss probability. The decrease of the call set-up time and the connection set-up time is caused by a reduced number of long-distance connections. The path computation algorithm takes into consideration distance and RWA requirements; as a consequence, for greater intensities the shortest connections are established more often. In the simulation the Poland topology was assumed.
Taking into consideration the topology and routing assumptions, we noticed that the majority of connections were established over shorter distances; there are more short connections than long ones in the Poland topology. The greater the intensity, the greater the number of connections to near nodes. Fig. 10, Fig. 11 and Fig. 12 present the number of established connections in Poland from node 0 (Gdansk) for intensities equal to 65, 714 and 9671 requests per second, respectively. Fig. 13, Fig. 14 and Fig. 15 present the number of established connections in Poland from node 3 (Katowice) for the same intensities. The greater the intensity, the greater the probability of a shortage of free resources, caused by the lack of resources satisfying the RWA requirements. For intensities in the range from 300 requests per second to 9700 requests per second the majority of connections are established in the first hop. For greater intensities the mean values of call set-up time grow significantly. Detailed analysis of the OMNeT++ event log verified that for intensities greater than 1000 requests per second successful connections were established to the nearest

nodes. This results in smaller call/connection set-up times. For intensities greater than 9700 requests per second the bandwidth of the signaling link was too small to service the call traffic. The greater the intensity, the longer the waiting times for RSVP messages sent between control plane components. The results presented in Fig. 9 indicate that the connection loss probability for an intensity of 1098 requests per second equals 0.15 for high priority connection requests and 0.5 for low priority connection requests. For high priority connection requests and intensities of 290, 392 and 700 requests per second, the loss probabilities equal 0.04, 0.1 and 0.3, respectively. For low priority connection requests, loss probabilities of 0.04, 0.1 and 0.3 are observed for intensities of 714, 1000 and 1690 requests per second, respectively.

Fig. 8. Mean connection set-up time [s].
Fig. 9. Connection loss probability.
Fig. 10. The number of connections from Gdansk node for intensity 65 requests per second.
Fig. 11. The number of connections from Gdansk node for intensity 714 requests per second.
Fig. 12. The number of connections from Gdansk node for intensity 9671 requests per second.
Fig. 13. The number of connections from Katowice node for intensity 65 requests per second.

B. Connection release results
Connection release time results are presented in Fig. 16 and Fig. 17. This time significantly depends on the request intensity. Similarly to the mean values of call/connection set-up time, the mean values of connection release time tend to decrease. The results presented in Fig. 17 indicate that the connection release time is smaller than the connection set-up time. This is associated with the implemented release scenario: the authors assumed that the control elements only send resource release requests and do not wait for confirmation from the emulated optical resources. The call connection release times presented in Fig. 16 decrease in the range from 300 requests per second to 7500

requests per second. For intensities greater than 7500 requests per second the call service time increases. The log analysis shows that the assumed value of the signaling link capacity in call control is too small and leads to a rapid growth of the call connection release time.

V. CONCLUSIONS
In the paper a simulation model of a single-domain ASON/GMPLS architecture is presented. The model was implemented in the OMNeT++ simulator, which proved to be well suited to the ASON/GMPLS application. The model allows determining the mean values of call/connection set-up time and connection release time. The results of the performance evaluation of the ASON/GMPLS architecture are demonstrated. The simulation results show that the loss probability increases significantly for low priority connection requests, while the call/connection set-up time is about 2 ms shorter than for high priority call/connection requests. Moreover, the presented results indicate the importance of the routing function implementation and the resource allocation mechanism (RWA). Verification by detailed analysis showed that the most frequently chosen path was the shortest one; due to this, the call/connection set-up time decreases. The model allows simulating the impact of various network topologies on processing time. Using the model, many sets of the input variables presented in Section III can be changed. Due to space limitations, only call/connection time results for the topology of Poland are presented. The structure was investigated as a single domain. The authors agree that further efforts should be made to research the relationship between the time consumption of multi-domain topologies and the call/connection set-up time and connection release time.

Fig. 14. The number of connections from Katowice node for intensity 714 requests per second.
Fig. 15. The number of connections from Katowice node for intensity 9671 requests per second.
REFERENCES
[1] ITU-T Recommendation G.8080/Y.1304, Architecture for the Automatically Switched Optical Network (ASON), June 2006.
[2] E. Mannie, Generalized Multi-Protocol Label Switching (GMPLS) Architecture, IETF RFC 3945, October 2004.
[3] A. Jajszczyk, Automatically switched optical networks: Benefits and requirements, IEEE Optical Communications, February 2005.
[4] S. Kaczmarek, M. Młynarczuk, M. Narloch, M. Sac, Evaluation of ASON/GMPLS Connection Control Servers Performance, in Information Systems Architecture and Technology: Service Oriented Network Systems, Wrocław University of Technology Publishing House, Wrocław, 2011.
[5] S. Kaczmarek, M. Młynarczuk, M. Narloch, M. Sac, The Realization of NGN Architecture for ASON/GMPLS Network, Journal of Telecommunications and Information Technology (JTIT), March 2011.
[6] ITU-T Recommendation Y.2012, Functional Requirements and Architecture for Next Generation Networks, April 2010.
[7] ITU-T Recommendation G.7713.2/Y.1704.2, Distributed Call and Connection Management: Signalling mechanism using GMPLS RSVP-TE, March 2003.
[8] OMNeT++ Network Simulation Framework.
[9] H. Zang, J. Jue, and B. Mukherjee, A Review of Routing and Wavelength Assignment Approaches for Wavelength-Routed Optical WDM Networks, Optical Networks Magazine, January 2000.
[10] ITU-T Recommendation G.7715.1/Y.1706.1, ASON routing architecture and requirements for link state protocols, February 2004.
[11] SNDlib network library, Zuse Institute Berlin.

Fig. 16. Mean call connection release time [s].
Fig. 17. Mean connection release time [s].

Traffic Type Influence on Performance of OSPF QoS Routing
Michał Czarkowski, Sylwester Kaczmarek, Maciej Wolff
Department of Teleinformation Networks, Gdańsk University of Technology, Gdańsk, Poland

Abstract: Feasibility studies with QoS routing have proved that the network traffic type influences routing performance. In this work the influence of self-similar traffic on a network with DiffServ architecture and OSPF QoS routing has been verified. The analysis has been done for three traffic classes. A multiplexed ON-OFF model was used for self-similar traffic generation. Simulation results are compared using both relative and non-relative measures. The basic conclusion is that the performance for the stream and best-effort classes under self-similar traffic is higher than the performance for the same classes under exponential (Poisson) traffic. The other important conclusion is the relation between the performance difference and the amount of offered traffic.

Index Terms: self-similar traffic; exponential traffic; OSPF routing; QoS; DiffServ; packet networks; network performance

I. INTRODUCTION
Modern telecommunication networks use many different technologies; the most important and most common are packet networks. Unfortunately, existing packet networks do not guarantee quality, which is why modern convergent technologies are challenging for telecommunication operators. A Quality of Service (QoS) guarantee is necessary. One basic example of a QoS-ensuring solution is the DiffServ architecture with QoS routing. Studies of packet networks have proved that their traffic has a self-similar character [7]. Unfortunately, until now studies of self-similar traffic were focused on single service devices [8]. There was no research for networks with many routers and the DiffServ architecture. In this paper the performance of networks with QoS routing and DiffServ architecture, for different network structures and with self-similar traffic, was studied.
The network performance with exponential offered traffic and with self-similar offered traffic was compared. The results for exponential offered traffic with OSPF routing were taken from existing work [4]; the results for self-similar traffic were obtained in this study. The simulation model is based on the model from [4]. The paper is organized as follows. Section II describes the routing algorithm and the realization of the DiffServ architecture. Section III describes the two traffic types, exponential and self-similar offered traffic, including the self-similar traffic model used in the simulations. Section IV describes the simulation model, its structure and features. Section V describes the simulations, presents the results and states conclusions. Section VI summarizes the work and outlines the next steps.

II. OSPF QOS ROUTING

The studied networks use the OSPF routing algorithm, i.e. the Dijkstra algorithm [1], to determine the shortest route between the source and destination routers. This algorithm is sometimes called shortest path first (SPF). In the simulation model the route is identical for all traffic classes. The metric used in this SPF implementation is the metric of the classical OSPF implementation: the product of a constant and the inverse of the link capacity. The simulation model implements the DiffServ architecture, in which routers are divided into edge routers and core routers. Core routers only transmit packets according to the defined policies. Edge routers determine the traffic class of a packet and accept or discard traffic streams; they also perform core router functions. In this implementation the decisions about the acceptance or rejection of streams are based on the actual network load; the algorithm is described in detail in [4]. In the DiffServ architecture the method of packet handling depends on the traffic class. If an edge router accepts a packet, the packet class is set and recorded in the packet header.
Subsequent routers then decide on the traffic class and the packet handling method on the basis of the information in the header.

III. TRAFFIC TYPES: EXPONENTIAL AND SELF-SIMILAR

This section describes the exponential offered traffic model, the self-similar offered traffic model and the simulation model with self-similar offered traffic. Exponential offered traffic is short-range dependent (SRD) and easy to simulate. Self-similar offered traffic is long-range dependent (LRD): there are dependencies between events on both short and long time scales. The Hurst coefficient [2] represents the level of this dependency. For network traffic the Hurst coefficient ranges between 0.5 and 1. Traffic with a Hurst coefficient equal to 0.5 is SRD traffic, an example of which is exponential offered traffic; traffic with a Hurst coefficient greater than 0.5 and less than 1 is self-similar [2].
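The Hurst coefficient of a traffic trace can be estimated, for instance, with the aggregated-variance method [2]: the variance of the block means of the series scales as m^(2H-2) with the block size m. The following minimal sketch (illustrative, not the tooling used in this study) recovers H ≈ 0.5 for an SRD series:

```python
import random
import math

def hurst_aggregated_variance(series, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst coefficient via the aggregated-variance method:
    Var of block means ~ m^(2H-2), so H = 1 + slope/2 in log-log scale."""
    xs, ys = [], []
    for m in block_sizes:
        n = len(series) // m
        means = [sum(series[i * m:(i + 1) * m]) / m for i in range(n)]
        mu = sum(means) / n
        var = sum((v - mu) ** 2 for v in means) / (n - 1)
        xs.append(math.log(m))
        ys.append(math.log(var))
    # least-squares slope of log(Var) versus log(m)
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return 1 + slope / 2

random.seed(1)
srd = [random.expovariate(1.0) for _ in range(100000)]  # i.i.d. (SRD) sample
print(round(hurst_aggregated_variance(srd), 2))  # expect a value near 0.5
```

For a genuinely self-similar trace (H between 0.5 and 1) the fitted slope is shallower than -1 and the estimate rises accordingly.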

The multiplexed ON-OFF model was chosen for modeling self-similar offered traffic. This model multiplexes many two-state streams. In the first state, called ON, packets are generated with a constant time interval; in the second state, called OFF, no packets are generated. The ON state duration is drawn from a Pareto distribution and the OFF state duration from an exponential distribution. The model is described in detail in [6].

IV. SIMULATION MODEL

The simulation model is based on the model described in detail in [4]. It allows verifying the performance of networks with different QoS routing algorithms and different offered traffic types. The model was implemented in the discrete event network simulator OMNeT++ [10] and implements the DiffServ architecture. As an extension of the model from [4], a self-similar offered traffic generator based on multiplexed ON-OFF streams has been added. Packets are generated independently in the edge nodes for three traffic classes, and the Hurst coefficient of the self-similar traffic can be set independently for each class. The model uses three traffic classes: stream, elastic and best-effort. The service system is priority queuing (PQ), with the stream class served by the high-priority queue and the elastic and best-effort classes by the low-priority queues; the latter two classes use weighted fair queuing (WFQ). Packets from the high-priority queue are handled first, and packets from the WFQ are processed with the next priority. The AC (admission control) function is realized through the acceptance or rejection of streams in the border router. The first packet of each stream is an initial packet. When a border router receives this packet, it first calculates the path for the stream and then verifies the QoS parameters: delay (IPTD, IP packet Transfer Delay), delay variation (IPDV, IP packet Delay Variation) and loss ratio (IPLR, IP packet Loss Ratio). These parameters are verified end to end.
For capacity, all intermediate links in the path are checked for the required amount of bandwidth. The QoS target values are taken from [9]. If the verified path meets these values, the stream is accepted and the path is saved in the routing table and then in the packet headers; otherwise the stream is rejected.

V. RESULTS OF STUDIES

A. Simulation Parameters and Scenarios

The simulation model was applied to three structures with different network connection densities: SUN, NEWYORK and NORWAY [11]. Connection density is defined as the number of connections between routers divided by the number of routers. The NEWYORK structure has the maximum connection density, equal to 3.06; the SUN structure has the least, equal to 1.5; the density of the NORWAY structure lies between them. Results for SUN, NORWAY and NEWYORK are presented in this paper. For each structure the simulation was performed with forty different proportions of the traffic classes; within each group of ten proportions the level of best-effort traffic is constant, the level of stream traffic increases and the level of elastic traffic decreases. These proportions are presented in Table I.

TABLE I. PROPORTIONS OF TRAFFIC (columns: No. of proportion, Stream traffic, Elastic traffic, Best-effort traffic)

For example, in the first proportion 1% of the offered traffic is stream traffic, 19% is elastic traffic and 80% is best-effort traffic. The packet length for the stream traffic class is 160 B, for the elastic class 500 B and for the best-effort class 1500 B. The buffer length for the stream class equals 5 packets, for the elastic class 10 packets and for the best-effort class 50 packets. The WFQ handling probability of the elastic class is 0.4 and that of the best-effort class is 0.6. The simulation time was set to 3600 s. For each proportion

and each structure the simulations were repeated six times, with the exception of the SUN structure, for which the simulations were performed twelve times. These repetition counts were chosen to ensure sufficiently narrow confidence intervals. The result of a simulation is the network performance for each structure and each traffic proportion; the performance is the number of packets serviced by the network. The Hurst coefficient for the stream, elastic and best-effort classes is 0.9. This value results from a study of technical publications on self-similar traffic: an analysis of the Hurst coefficient of stream traffic (VoIP) is given in [5], of elastic traffic (MPEG) in [3], and of best-effort traffic in [7]; the Hurst coefficient estimated in these works is 0.9.

B. Relative and Non-Relative Measures

The simulation results are presented using two measures: non-relative and relative. The non-relative measure is the number of packets serviced in the network for each traffic class; for one network structure, one traffic class and all proportions it is shown in a single figure. The relative measure is described by:

A_rel = (A_SS - A_exp) / A_exp   (1)

In (1), A_exp is the mean number of packets serviced under exponential offered traffic, averaged over all simulation runs, for one traffic class and one structure, and A_SS is the corresponding mean number of packets serviced under self-similar offered traffic. This measure shows the relative difference between the performance of the network with self-similar offered traffic and with exponential offered traffic. The values of this measure, for one network structure, one traffic class and all traffic class proportions, are presented in a single figure.

C. Results

Figures 1-6 present the results for the SUN structure: the first three figures in the non-relative measure, the next three in the relative measure.
Figures 7-12 present the results for the NORWAY structure and Figures 13-18 for the NEWYORK structure, in the same layout as for the SUN structure. First, the results in the non-relative measure are described. Figures 1, 2 and 3 present the number of packets serviced by the network for the stream, elastic and best-effort traffic classes in the SUN structure. For self-similar offered traffic the performance of the stream traffic class grows by about 30% relative to exponential offered traffic. The performance difference for the elastic traffic class between self-similar and exponential offered traffic is small, and for most traffic class proportions the confidence intervals overlap. For the best-effort traffic class the performance under self-similar offered traffic grows by about 7% compared to exponential offered traffic. The same conclusions hold for the NORWAY and NEWYORK structures. Results for the NORWAY structure for the stream, elastic and best-effort classes are presented in Figures 7, 8 and 9, and for the NEWYORK structure in Figures 13, 14 and 15. The major difference between the results for NEWYORK or NORWAY and for SUN is the percentage performance growth of the stream traffic class: the maximum growth is 22% for the NORWAY structure and 18% for the NEWYORK structure. This analysis leads to the following proposition: the performance difference for the elastic traffic class depends on the network connection density; if the density is high, the performance difference is smaller than for a network with low connection density. This statement is also confirmed by the relative measure. Similar results hold for the best-effort traffic class, but the difference is smaller and is more visible in the relative measure. Figures 4, 5 and 6 present the results in the relative measure for the stream, elastic and best-effort traffic classes in the SUN network structure.
The performance difference between the network with self-similar offered traffic and the network with exponential offered traffic for the stream traffic class grows with the growth of the offered stream traffic; the same holds for the best-effort traffic class. The performance difference for the elastic traffic class requires an additional comment: it grows when the amount of elastic traffic grows and decreases when the amount of elastic traffic decreases. Equivalent conclusions hold for the NORWAY and NEWYORK structures; the results for these structures are presented in Figures 10, 11, 12 and 16, 17, 18.

VI. SUMMARY

The main conclusion is that higher network performance was observed for the stream and best-effort traffic classes under self-similar offered traffic than under exponential offered traffic. The performance difference for the elastic and best-effort traffic classes depends on the network connection density: if the density is high, the performance difference is smaller than for a network with low connection density. The next important conclusion is the relation between the performance difference and the growth of the offered traffic; this relation also depends on the buffer lengths and the connection density. To fully confirm this thesis further research is required.
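For concreteness, a multiplexed ON-OFF source of the kind described in Section III (constant-rate packet generation during Pareto-distributed ON periods, silence during exponentially distributed OFF periods) can be sketched as follows. All parameter values here are illustrative assumptions, not the settings used in the study:

```python
import random

def on_off_stream(t_end, rate, on_shape, on_scale, off_mean):
    """One ON-OFF source: packets at a constant interval 1/rate during
    Pareto-distributed ON periods, none during exponential OFF periods."""
    t, times = 0.0, []
    while t < t_end:
        on = on_scale * random.paretovariate(on_shape)  # heavy-tailed ON time
        step = 1.0 / rate
        k = 0
        while k * step < on and t + k * step < t_end:
            times.append(t + k * step)
            k += 1
        t += on + random.expovariate(1.0 / off_mean)    # exponential OFF gap
    return times

def multiplex(n_sources, **kwargs):
    """Superpose independent ON-OFF sources; the aggregate is asymptotically
    self-similar when the Pareto shape lies between 1 and 2."""
    all_times = []
    for _ in range(n_sources):
        all_times.extend(on_off_stream(**kwargs))
    return sorted(all_times)

random.seed(3)
agg = multiplex(16, t_end=100.0, rate=50.0,
                on_shape=1.4, on_scale=0.1, off_mean=1.0)
print(len(agg))  # aggregate packet timestamps, sorted in time
```

Feeding such an aggregate into a queueing or network model in place of a Poisson stream reproduces the long-range dependence that the comparison above is built on.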

Fig. 1. Streaming class packet services for SUN network structure with exponential and self-similar offered traffic.
Fig. 2. Elastic class packet services for SUN network structure with exponential and self-similar offered traffic.
Fig. 3. Best-effort class packet services for SUN network structure with exponential and self-similar offered traffic.
Fig. 4. Streaming class packet services for SUN network structure with exponential and self-similar offered traffic (relative measure).
Fig. 5. Elastic class packet services for SUN network structure with exponential and self-similar offered traffic (relative measure).
Fig. 6. Best-effort class packet services for SUN network structure with exponential and self-similar offered traffic (relative measure).

REFERENCES
[1] J. T. Moy, OSPF: Anatomy of an Internet Routing Protocol, Harlow, England: Addison-Wesley.
[2] O. I. Sheluhin, M. S. Smolskiy, A. V. Osin, Self-Similar Processes in Telecommunications, John Wiley & Sons, 2007.

Fig. 7. Streaming class packet services for NORWAY network structure with exponential and self-similar offered traffic.
Fig. 8. Elastic class packet services for NORWAY network structure with exponential and self-similar offered traffic.
Fig. 9. Best-effort class packet services for NORWAY network structure with exponential and self-similar offered traffic.
Fig. 10. Streaming class packet services for NORWAY network structure with exponential and self-similar offered traffic (relative measure).
Fig. 11. Elastic class packet services for NORWAY network structure with exponential and self-similar offered traffic (relative measure).
Fig. 12. Best-effort class packet services for NORWAY network structure with exponential and self-similar offered traffic (relative measure).

[3] J. Cano, P. Manzoni, On the use and calculation of the Hurst parameter with MPEG videos data traffic, Proceedings of the 26th Euromicro Conference, vol. 1, 2002.
[4] M. Czarkowski, S. Kaczmarek, Dynamic Unattended Measurement Based Routing Algorithm for DiffServ Architecture, Telecommunications Network Strategy and Planning Symposium (NETWORKS), th

International, Sept. 2010, pp.

Fig. 13. Streaming class packet services for NEWYORK network structure with exponential and self-similar offered traffic.
Fig. 14. Elastic class packet services for NEWYORK network structure with exponential and self-similar offered traffic.
Fig. 15. Best-effort class packet services for NEWYORK network structure with exponential and self-similar offered traffic.
Fig. 16. Streaming class packet services for NEWYORK network structure with exponential and self-similar offered traffic (relative measure).
Fig. 17. Elastic class packet services for NEWYORK network structure with exponential and self-similar offered traffic (relative measure).
Fig. 18. Best-effort class packet services for NEWYORK network structure with exponential and self-similar offered traffic (relative measure).

[5] T. D. Dang, B. Sonkoly, S. Molnar, Fractal analysis and modeling of VoIP traffic, Telecommunications Network Strategy and Planning Symposium (NETWORKS 2004), 11th International, 2004, pp.
[6] N. Likhanov, B. Tsybakov, N. Georganas, Analysis of an ATM buffer with self-similar (fractal) input traffic, INFOCOM '95, Fourteenth Annual

Joint Conference of the IEEE Computer and Communications Societies: Bringing Information to People, Proceedings, vol. 3, 1995, pp.
[7] W. E. Leland, M. S. Taqqu, W. Willinger, D. V. Wilson, On the self-similar nature of Ethernet traffic (extended version), IEEE/ACM Trans. Netw., vol. 2, 1994, pp.
[8] X. Tan, Y. Zhuo, Simulation Based Analysis of the Performance of Self-similar Traffic, Proceedings of the th International Conference on Computer Science & Education.
[9] ITU-T Recommendation Y.1541, Network performance objectives for IP-based services, February
[10] OMNeT++,
[11] sndlib,


Message Inter-Arrival and Inter-Departure Time Distributions in IMS/NGN Architecture

Sylwester Kaczmarek, Maciej Sac
Department of Teleinformation Networks, Gdańsk University of Technology, Gdańsk, Poland

Abstract—It is currently assumed that the requirements of the information society for delivering multimedia services will be satisfied by the Next Generation Network (NGN) architecture, which includes elements of the IP Multimedia Subsystem (IMS) solution. In order to guarantee Quality of Service (QoS), NGN has to be appropriately designed and dimensioned; therefore, proper traffic models should be proposed and applied. This requires determining queuing models adequate to the message inter-arrival and inter-departure time distributions in the network. In this paper the above-mentioned distributions at different points of a single NGN domain are investigated using a simulation model developed according to the latest standards and research. Relations between network parameters and the obtained message inter-arrival and inter-departure time distributions are indicated, which is a step towards identifying proper queuing models and constructing an analytical model suitable for NGN.

Index Terms—IMS; NGN; message inter-arrival time distributions; message inter-departure time distributions; call processing performance; traffic model

I. INTRODUCTION

Next Generation Network (NGN) [1] is a proposition for a standardized, packet-based telecommunication network architecture that delivers multimedia services with guaranteed quality. NGN consists of two stratums: the transport stratum, containing elements specific to a particular transport technology, and the transport-independent service stratum, which includes IP Multimedia Subsystem (IMS) [2] elements and utilizes mainly the SIP [3] and Diameter [4] communication protocols. In order to operate properly, both NGN stratums must be correctly designed.
For this reason, appropriate traffic models that evaluate, among others, call processing performance metrics [5,6] are required. These metrics were formerly known as Grade of Service (GoS) parameters and include the Call Set-up Delay (CSD) and the Call Disengagement Delay (CDD). As a result of the performed review [7,8], we found that standards organizations do not consider traffic engineering in their work and that current research [9-12] does not provide traffic models fully compatible with the IMS-based NGN architecture (abbreviated in the rest of the paper as IMS/NGN). Therefore, we decided to propose our own simulation [13,14] and analytical [15] models and use them to evaluate the mean CSD and mean CDD in a single IMS/NGN domain. During our investigations [15] we observed differences between analytical and simulation results that are noticeable under high load. This led us to the research, performed using the simulation model, on message inter-arrival and inter-departure time distributions in a single IMS/NGN domain. The results should allow us to improve the precision of the analytical model by replacing the simple M/G/1 queuing systems approximating the operation of servers and links with queuing models more adequate to the characteristics of IMS/NGN. The results of the performed investigations are presented in this paper, which is organized as follows. Section II provides a description of the IMS/NGN network model and call scenario, presents the simulation model used to measure message inter-arrival and inter-departure time distributions, and describes the assumed measurement methodology. The results of the performed research are described and commented on in section III. A summary and future work are presented in section IV.

II.
TRAFFIC MODEL OF IMS/NGN

The network model [16,17] and the basic call scenario with two-stage resource reservation [17-20] in a single domain of the ITU-T NGN architecture (the most advanced of all available NGN solutions [7,8,15]) are presented in Fig. 1 and Fig. 2, respectively. The model and the scenario are based on the assumption that standard voice calls are utilized and thus application servers are not used. Moreover, we assume that the codec sets in the User Equipments (UEs) are compatible and no audio announcements are played during the call.

Figure 1. Model of a single domain of IMS/NGN [16,17].

Call set-up and disengagement requests are sent by UEs registered in the domain to the P-CSCF (Proxy Call Session Control Function) server, which forwards them to the S-CSCF (Serving Call Session Control Function) element, the main server handling all calls. The RACF (Resource and Admission Control Functions) unit, representing the transport stratum, allocates resources for a new call and releases the resources associated with a disengaged call. All elements of the network communicate using the SIP protocol [3]; an exception is the communication of the P-CSCF with the RACF, for which the Diameter protocol [4] is utilized. Due to the limited space available in the paper, more detailed information about the network model and the call scenario is not provided; it can be found in [8,15].

Figure 2. Call set-up (messages 1-24) and call disengagement (messages 26-35) scenario in a single domain of IMS/NGN.

Based on the presented network model (Fig. 1) and call scenario (Fig. 2), a simulation model of a single domain of IMS/NGN was proposed [13,14]. The aim of the model is to evaluate the mean Call Set-up Delay E(CSD) and the mean Call Disengagement Delay E(CDD) [5,6]. Additionally, during simulations the times of message arrival and departure at all network elements and links can be gathered, which allows calculation of histograms estimating message inter-arrival and inter-departure time distributions. The logical structure of the simulation model is presented in Fig. 3. Details regarding the implementation of the model in the OMNeT++ [21] simulation environment are out of the scope of this paper and can be found in [13,14].

Figure 3. Structure of the proposed traffic model.

CSCF servers include Central Processing Units (CPUs), which are responsible for handling messages incoming from other elements according to the assumed call scenario (Fig. 2). When the CPUs are busy, incoming messages are stored in CPU queues. Other network elements in the model respond with a particular delay (the RACF, responsible for resource allocation and release, as well as the UEs, representing many user terminals). Each element of the model includes communication queues (one for each outgoing link), which buffer messages before sending them through busy links. For proper operation of the model the following assumptions are taken [13,14]:
- message loss probability is negligible, so there are no message retransmissions and each request is properly handled,
- UE1 generates aggregated call set-up requests (SIP INVITE messages) from many terminals with exponential intervals and a given intensity λ_INV,
- call duration time is determined by an exponential distribution with a definable mean (default: 180 s),
- the times of processing SIP INVITE messages by the P-CSCF and S-CSCF are given by random variables T_INV1 and T_INV2, respectively,
- a_k factors (k = 1, 2, ..., 8) determine the times of processing other SIP and Diameter messages by the CSCF servers:

T_TRi = a_1 · T_INVi,  T_RINGi = a_2 · T_INVi,  T_OKi = a_3 · T_INVi,  T_ACKi = a_4 · T_INVi,
T_BYEi = a_5 · T_INVi,  T_OKBYEi = a_6 · T_INVi,  T_AAAi = a_7 · T_INVi,  T_STAi = a_8 · T_INVi,
i = 1 for P-CSCF, i = 2 for S-CSCF,   (1)

- the time of processing messages by the RACF is described by a random variable T_X,
- UE1 and UE2 represent many user terminals processing messages in nonzero time; the SIP INVITE message processing time in UE1 and UE2 is described by a given distribution (default: uniform distribution with values from 1 to 5 ms), and the times of processing other messages are related to this time as in (1),
- communication times between elements of the network depend on the definable lengths of the optical links, the bandwidth and the lengths of the transmitted messages,
- network elements communicate over dedicated interfaces; UE1 and UE2 represent many terminals connected to the P-CSCF through a switch/router, and the communication times between the switches/routers and particular terminals are included in the message processing times of UE1 and UE2,
- the simulation is finished when the first of the following conditions occurs: the total simulation time is exceeded, the maximum number of generated calls is exceeded, or the confidence intervals for mean CSD and mean CDD are not greater than the assumed maximum values.

The described simulation model was used to obtain histograms estimating message inter-arrival and inter-departure time distributions. For this reason message arrival and departure times were measured at the following points of all network elements (Fig. 4):
- CPU_i: the time of message arrival (the whole message, all bits, is received) at the input of the P-CSCF or S-CSCF CPU queue,
- CPU_o: the time of message departure from the output of the P-CSCF or S-CSCF CPU,
- L_ik: the time of message arrival at the input of the link communication queue of the kth link (if the link is not busy, this is the time of sending the first bit of the message),
- L_ok: the time of sending the last bit of the message through the kth link.
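Departure epochs at a single-server FIFO station such as a CSCF CPU follow the standard recurrence d_i = max(a_i, d_{i-1}) + s_i. The toy sketch below (all parameters are illustrative, not the model's settings) shows that under high load most inter-departure gaps equal one of the finitely many service times, since the queue is rarely empty:

```python
import random

def fifo_departures(arrivals, services):
    """Single-server FIFO queue: departure epochs d_i = max(a_i, d_{i-1}) + s_i."""
    d, out = 0.0, []
    for a, s in zip(arrivals, services):
        d = max(a, d) + s
        out.append(d)
    return out

random.seed(7)
lam = 0.9                       # arrival rate; mean service time is 1 -> load 0.9
t, arrivals = 0.0, []
for _ in range(20000):
    t += random.expovariate(lam)
    arrivals.append(t)
# service times drawn from a finite set, as in (1)
services = [random.choice((0.5, 1.5)) for _ in arrivals]
deps = fifo_departures(arrivals, services)
gaps = [b - a for a, b in zip(deps, deps[1:])]
busy = sum(1 for g in gaps if g in (0.5, 1.5)) / len(gaps)
print(round(busy, 2))  # large fraction of gaps equals a service time
```

Histogramming such gaps gives exactly the multimodal shape discussed in section III: peaks at the service-time values, blurred only by the departures that leave an empty queue.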
The acquired results were further processed in the MATLAB environment [22] to obtain histograms of message inter-arrival and inter-departure times at different points of the network. Additionally, some statistical values regarding the retrieved data were computed, including the mean value and variance of the message inter-arrival and inter-departure times, as well as the c² variation coefficient, which is the ratio of the variance of the message inter-arrival or inter-departure time to the squared mean value of this time.

III. RESULTS

Figure 4. Points of message arrival and departure time measurements, based on the example of the P-CSCF server.

The simulation model described in section II was used to investigate message inter-arrival time distributions at the CPU_i and L_ik points as well as message inter-departure time distributions at the CPU_o and L_ok points of all elements of the IMS/NGN domain (Fig. 1, Fig. 4). During the investigations the data sets presented in Tab. I and the message lengths presented in Tab. II were used. Additionally, the following assumptions were made:
- warm-up period: 1500 s, 5 measurement periods, 0.95 confidence level,
- a_1 = 0.2, a_2 = 0.2, a_3 = 0.6, a_4 = 0.3, a_5 = 0.6, a_6 = 0.3, a_7 = 0.6, a_8 = 0.6,
- the T_INV1, T_INV2 and T_X input parameters are taken as constant values representing the maximum INVITE processing time by the P-CSCF, the maximum INVITE processing time by the S-CSCF, and the maximum message processing time by the RACF, respectively,
- the simulation is finished when the confidence intervals for E(CSD) and E(CDD) do not exceed 5% of the mean Call Set-up Delay and mean Call Disengagement Delay, or when the total simulation time limit is exceeded.
Table I. INPUT DATA SETS (columns: data set; λ_INV [1/s]; T_INV1 [ms]; T_INV2 [ms]; T_X [ms]; link parameters)
1: λ_INV = 100, 190, 220; km
2: λ_INV = 100, 190, 220; km, 10 Mb/s
3: λ_INV = 100, 190, 220; km, 100 Mb/s

Table II. MESSAGE LENGTHS [11]
Message | Length in bytes
SIP INVITE | 930
SIP 100 Trying | 450
SIP 180 Ringing | 450
SIP 200 OK (answer to INVITE) | 990
SIP ACK | 630
SIP BYE | 510
SIP 200 OK (answer to BYE) | 500
Diameter messages | 750
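The c² variation coefficient described in section II (variance of the inter-arrival or inter-departure time divided by its squared mean) can be computed from gathered timestamps as in the sketch below. The intensity λ_INV = 190 1/s is taken from Tab. I; everything else is illustrative. For a Poisson stream c² equals 1, which is the baseline the measured distributions are compared against:

```python
import random
import statistics

def c2(timestamps):
    """Squared coefficient of variation of inter-arrival times:
    variance divided by squared mean (equals 1 for a Poisson stream)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.variance(gaps) / statistics.mean(gaps) ** 2

random.seed(42)
t, stamps = 0.0, []
for _ in range(50000):
    t += random.expovariate(190.0)  # exponential intervals, lambda_INV = 190 1/s
    stamps.append(t)
print(round(c2(stamps), 1))  # close to 1.0 for exponential inter-arrival times
```

Values of c² far from 1, as reported below for the measured streams, indicate that a simple Poisson (and hence M/G/1) assumption no longer fits.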

Although measurements in the simulation environment were performed at all available CPU_i, L_ik, CPU_o and L_ok points and for all data sets (Tab. I), due to limited space only selected results are demonstrated. It is important to mention that under the tested conditions the loads offered to the P-CSCF CPU (ρ_1) and the S-CSCF CPU (ρ_2) were as follows: ρ_1 = 0.41 and ρ_2 = 0.16 for λ_INV = 100 1/s; ρ_1 = 0.78 and ρ_2 = 0.30 for λ_INV = 190 1/s; ρ_1 = 0.90 and ρ_2 = 0.35 for λ_INV = 220 1/s. The performed research demonstrated that message inter-arrival and inter-departure time distributions in a single IMS/NGN domain differ quite significantly from exponential distributions and depend on many parameters (Fig. 5-12). This fact is also confirmed by analyzing the c² coefficient, which for most of the obtained data is far from the value 1. Message inter-arrival and inter-departure time distributions depend, among others, on the load offered to the CSCF servers' CPUs. For low loads (low SIP INVITE message intensities λ_INV, Fig. 5-6) the obtained histograms are to a certain extent similar to exponential distributions, especially at the CPU_i points (Fig. 5). Higher load results in more multimodal histograms with lower c² values, which is visible in the presented results (Fig. 7-10), particularly at the CPU_o points (Fig. 10). Such a character of the message inter-departure time distributions at the outputs of the CSCF servers' CPUs (Fig. 10) can be explained by the fact that under high load the CPU queues are in most cases nonempty and one message is processed just after another. Therefore, message inter-departure times are usually very close to the message processing times (1), which form a finite set of values; this results in multimodal message inter-departure time distributions. Due to the different sets of messages processed by the P-CSCF and S-CSCF, these two servers have, however, slightly different inter-departure time distributions at the CPU_o points (Fig. 10).
Although in both elements these distributions are multimodal, for the S-CSCF server there is in most cases one dominant peak at 0.5 ms and several smaller peaks, while for the P-CSCF unit the position of the dominant peak depends more on the input parameters. The investigated histograms are also influenced by the parameters of the links connecting the network elements, which involve the use of communication queues (Fig. 3). This can be observed by comparing the results presented in Fig. 7, obtained under the assumption that all network elements are in one place and thus communication queues are not utilized, to the results demonstrated in Fig. 8-9, where links with particular lengths and bandwidths are used. To simplify the simulations, it is assumed that all links have the same length and bandwidth. The influence of the communication links on the message inter-arrival and inter-departure time distributions depends on the load offered to the links, which is related to the available bandwidth. For low bandwidth (high offered load, Fig. 8) the communication links may limit the minimal interval between the messages arriving at a CSCF server, which cannot be smaller than the time of sending the shortest message. This is also clearly visible in Fig. 11, where histograms at the input and output of the link between the P-CSCF and S-CSCF are presented. Links with relatively high available bandwidth (low offered load, Fig. 9, Fig. 12) do not have a strong impact on the message inter-arrival and inter-departure time distributions at the CSCF servers. It can be noticed that for such links the message inter-arrival time distributions at the CPU_i points are slightly closer to exponential (less multimodal, Fig. 9), even compared to data set 1 (Fig. 7), where all network elements are connected directly to each other. It is important that the same experiments were performed for link bandwidths of 100 Mb/s and 1 Gb/s; however, no visible differences were noticed.
Therefore, results obtained for the 1 Gb/s link bandwidth are not presented in the paper.

IV. CONCLUSIONS AND FUTURE WORK

The aim of the work presented in the paper was to investigate message inter-arrival and inter-departure time distributions in a single domain of the IMS/NGN architecture. For this reason the developed simulation model, which conforms to the latest standards and research, was utilized to gather data at different points of the network. The obtained results indicate that message inter-arrival and inter-departure time distributions in the system are influenced by many parameters (including the load offered to CSCF servers and links) and are not exponential. This is especially visible for higher loads, for which histograms of time intervals between messages are multimodal, particularly at the outputs of CSCF servers' CPUs. In order to construct an accurate analytical model of IMS/NGN, proper queuing models for CSCF servers' CPUs and optical links will be determined based on the obtained and presented results. We are going to begin our research on this matter with a study of available queuing models with a single service channel, which are also possibly simple enough for practical applications. In the next step, the chosen queuing models will be applied and the obtained analytical results will be verified by simulations. Apart from that, we are planning to develop our traffic model in order to carry out investigations in a multi-domain IMS/NGN architecture, including also the elements specific to the MPLS, Ethernet and FSA transport technologies [23-25].

REFERENCES

[1] ITU-T Rec. Y.2001, General overview of NGN, December
[2] 3GPP TS , IP Multimedia Subsystem (IMS); Stage 2 (Release 11), v11.0.0, March
[3] J. Rosenberg, et al., SIP: Session Initiation Protocol, IETF RFC 3261, June
[4] P. Calhoun, J. Loughney, E. Guttman, G. Zorn, and J. Arkko, Diameter Base Protocol, IETF RFC 3588, September
[5] ITU-T Rec.
Y.1530, Call processing performance for voice service in hybrid IP networks, November
[6] ITU-T Rec. Y.1531, SIP-based call processing performance, November
[7] S. Kaczmarek and M. Sac, Traffic modeling in IMS-based NGN networks, Gdańsk University of Technology Faculty of Electronics, Telecommunications and Informatics Annals, vol. 1, no. 9, pp , 2011.

Figure 5. Histograms of message inter-arrival times at the P-CSCF (left, c² = 2.19) and S-CSCF (right, c² = 1.33) CPU_i (data set 1, λ_INV = 100 [1/s]).
Figure 6. Histograms of message inter-departure times at the P-CSCF (left, c² = 2.16) and S-CSCF (right, c² = 1.36) CPU_o (data set 1, λ_INV = 100 [1/s]).
Figure 7. Histograms of message inter-arrival times at the P-CSCF (left, c² = 1.12) and S-CSCF (right, c² = 0.87) CPU_i (data set 1, λ_INV = 190 [1/s]).
Figure 8. Histograms of message inter-arrival times at the P-CSCF (left, c² = 0.69) and S-CSCF (right, c² = 0.59) CPU_i (data set 2, λ_INV = 190 [1/s]).

Figure 9. Histograms of message inter-arrival times at the P-CSCF (left, c² = 0.90) and S-CSCF (right, c² = 0.81) CPU_i (data set 3, λ_INV = 190 [1/s]).
Figure 10. Histograms of message inter-departure times at the P-CSCF (left, c² = 0.58) and S-CSCF (right, c² = 1.00) CPU_o (data set 3, λ_INV = 190 [1/s]).
Figure 11. Histogram of message inter-arrival times at the input of the P-CSCF - S-CSCF link (left, c² = 0.67) and histogram of message inter-departure times at the output of the P-CSCF - S-CSCF link (right, c² = 0.42) (data set 2, λ_INV = 220 [1/s]).
Figure 12. Histogram of message inter-arrival times at the input of the P-CSCF - S-CSCF link (left, c² = 0.69) and histogram of message inter-departure times at the output of the P-CSCF - S-CSCF link (right, c² = 0.71) (data set 3, λ_INV = 220 [1/s]).

[8] S. Kaczmarek and M. Sac, Traffic engineering aspects in IMS-based NGN networks ( Zagadnienia inżynierii ruchu w sieciach NGN bazujących na IMS ), in Teleinformatics library, vol. 6. Internet 2011 (Biblioteka teleinformatyczna, t. 6. Internet 2011), ISBN , pp , Oficyna Wydawnicza Politechniki Wrocławskiej, Wrocław, 2012 (in Polish).
[9] I. K. Cho and K. Okamura, A centralized resource and admission control scheme for NGN core networks, International Conference on Information Networking, ICOIN 2009, Chiang Mai, January
[10] J. Joung, J. Song, and S. Lee, Flow-based QoS management architectures for the Next Generation Network, ETRI Journal, vol. 30, no. 2, pp , April
[11] V. S. Abhayawardhana and R. Babbage, A traffic model for the IP Multimedia Subsystem (IMS), IEEE 65th Vehicular Technology Conference, VTC2007-Spring, Dublin, April
[12] P. S. Gutkowski and S. Kaczmarek, The model of end-to-end call setup time calculation for Session Initiation Protocol, Bulletin of the Polish Academy of Sciences. Technical Sciences, vol. 60, no. 1, pp ,
[13] S. Kaczmarek, M. Kaszuba, and M. Sac, Simulation model for assessment of IMS-based NGN call processing performance, 2nd Conference for Students and Doctoral Students, ICT Young 2012, Gdańsk, Poland, pp ,
[14] S. Kaczmarek, M. Kaszuba, and M. Sac, Simulation model of IMS/NGN call processing performance, Gdańsk University of Technology Faculty of Electronics, Telecommunications and Informatics Annals, in press,
[15] S. Kaczmarek and M. Sac, Traffic Model for Evaluation of Call Processing Performance Parameters in IMS-based NGN, in Information Systems Architecture and Technology: Networks Design and Analysis, ISBN , pp , Oficyna Wydawnicza Politechniki Wrocławskiej, Wrocław,
[16] ITU-T Rec. Y.2012, Functional requirements and architecture of next generation networks, April
[17] ITU-T Rec. Y.2021, IMS for next generation networks, September
[18] ITU-T Rec.
Y.2111, Resource and admission control functions in next generation networks, November [19] ITU-T Rec. Q , Resource control protocol no. 1, version 2 Protocol at the Rs interface between service control entities and the policy decision physical entity, June [20] M. Pirhadi, S. M. Safavi Hemami, and A. Khademzadeh, Resource and admission control architecture and QoS signaling scenarios in next generation networks, World Applied Sciences Journal 7 (Special Issue of Computer & IT), pp , [21] OMNeT++ Network Simulation Framework, October [22] MATLAB - The Language of Technical Computing, October [23] ITU-T Rec. Y.2175, Centralized RACF architecture for MPLS core networks, November [24] ITU-T Rec. Y.2113, Ethernet QoS control for next generation networks, January [25] ITU-T Rec. Y.2121, Requirements for the support of flow state aware transport technology in an NGN, January


Service and Path Discovery Extensions for Self-forming IEEE 802.11s Wireless Mesh Systems

Krzysztof Gierłowski
Computer Communications Department, Gdansk University of Technology
Narutowicza 11/12, Gdańsk

Abstract With the rapid growth of the quantity and capabilities of end-user electronic devices, both stationary and mobile, they are employed in an increasing number of applications. In this situation, wireless network technologies begin to play a crucial role as network access technologies, as cable-based solutions tend to be of limited utility in the case of easily portable or mobile devices. The resulting development of wireless technologies has reached a stage where each connecting client can extend the network's overall resources by becoming a network node capable of forwarding transit traffic. This ability allows a severe reduction of the necessary operator-provided infrastructure, compared to classic point-to-multipoint systems. It is also a significant step towards ubiquity of network access, as such wireless mesh systems tend to provide thorough coverage. One of the most promising wireless mesh solutions currently being developed is the IEEE 802.11s standard. However, despite its advantages, a number of areas lack sufficient support, which can lead to significant inefficiency and degradation of service quality in real-world IEEE 802.11s network deployments. In the paper we propose two extensions of IEEE 802.11s mechanisms, designed to provide better service quality in real-world deployment scenarios, especially in the case of large systems. Both propositions introduce modifications to mesh path discovery and interworking procedures, while retaining compatibility with the standard solution. Their basic functionality and efficiency have been verified by means of a simulation model in a self-organizing infrastructure and access system mesh deployment scenario. The results clearly indicate their utility, particularly in the case of larger deployments of this type.
The presented solutions are part of ongoing research concerning cross-layer mechanisms for IEEE 802.11s-based systems, currently reaching the testbed implementation stage.

Index Terms wireless, 802.11, mesh.

I. INTRODUCTION

The rapid growth of the quantity and capabilities of end-user electronic devices, both stationary and mobile, has resulted in their use in an increasing number of applications. Their popularity and high functionality have created such concepts as: Smart Cities, emphasizing the ubiquity of ICT-based (Information and Communication Technology) services in as many elements of our daily lives as possible; Internet of Things, proposing an introduction of electronic devices into as many elements of our physical environment as possible; Cloud Computing, bringing the ability to use computing resources (both hardware and software) as services provided by ubiquitous infrastructure; Machine-to-Machine systems, allowing free interaction of different devices to extend their functionality set and overall usefulness. It is evident from the above list of examples that the number of electronic devices will continue to grow and that efficient communication between them is a crucial component of modern ICT systems. Requirements set before both core and access network systems are high and will continue to grow, both in terms of raw throughput and quality of service (QoS) provided. Apart from these requirements, the growing percentage of mobile devices and associated services creates a new requirement of ubiquitous network access - the user should always be offered some form of network connectivity, as its lack would result in a severe reduction of the device's functionality. Modern smartphone operating systems and various aaS (as a Service) approaches are good examples of this trend. Wireless network technologies play a crucial role in the access networks of such systems, as cable-based solutions are of limited utility when network access must be provided to easily portable or mobile devices.
Moreover, their use is often preferable even in the case of stationary devices, due to the lack of necessity of deploying a costly and cumbersome cable infrastructure. Most popular wireless access systems follow a point-to-multipoint architecture, with an operator-maintained infrastructure of access points or base stations (connected with fast cable or point-to-point wireless links), each serving a set of client devices over its coverage area. Such systems must be carefully designed, taking into account both current and future signal propagation characteristics, expected user density, traffic requirements and economic constraints. However, a new emerging type of wireless network can be used to create a system where each connecting client can extend its overall resources by becoming a network node capable of forwarding transit traffic. Such an ability allows a severe reduction of the necessary operator-provided infrastructure, compared to classic point-to-multipoint systems. It is also a significant step towards ubiquity of network access, as mesh systems tend to provide thorough coverage. Mesh networks can be used in a variety of roles, starting from small ad-hoc systems, through a highly robust and redundant infrastructure of an access network, and ending with emergency or military communication networks or self-organizing office/building/campus integrated infrastructure/access systems. In all of these scenarios, the main advantages of mesh networks include autoconfiguration and self-forming abilities. However, as an emerging technology, fully self-forming mesh solutions are still in the process of being standardized and a number of technical problems remain to be solved.

II. IEEE 802.11S STANDARD

One of the most promising mesh solutions currently being developed is the IEEE 802.11s standard [1], aimed to create a broadband, fully autoconfigurable, dynamically extending, and secure mesh solution, based on the widely popular WiFi (IEEE 802.11 [2]) wireless local area network (WLAN) technology. It is designed to serve in a wide variety of environments, starting with small ad-hoc, isolated networks (for example: groups of laptops or smartphones), through industrial/sensor network deployments and office LANs, and ending with large, self-extending, public access systems. The created mesh structure is called a Mesh Basic Service Set (MBSS). The fact that this solution is based on cheap and popular WiFi technology and can be deployed on existing hardware makes it one of very few mesh solutions able to successfully appear and remain on the popular WLAN market. Additionally, a number of design decisions have been made to make an IEEE 802.11s mesh as compatible and as easy as possible to integrate with existing network systems. Examples include: mesh gates - specialized network nodes responsible for integration with external networks; a hybrid routing protocol - ensuring that there are relatively small delays in routing to external destinations; higher layer transparency - making the mesh network appear as a single Ethernet broadcast domain, complete with 802.1D [3] compatibility (bridging and spanning tree protocol). It is evident that the standard's authors aimed to provide a robust building block for modern network systems, both functional and inexpensive to deploy.
Due to its robustness, an IEEE 802.11s-based mesh can be deployed in a variety of the previously described roles, including the most complex - a self-organizing office/building/campus infrastructure and access system. Despite the above advantages, a number of areas in the discussed standard lack sufficient support, which can lead to significant inefficiency and degradation of service quality in real-world IEEE 802.11s network deployments. The first serious limitation is the fact that the standard in its current form does not include support for creating multichannel mesh networks, which leads to severe throughput degradation due to both intra- and inter-path interference. This limitation is especially important in the case of dense mesh networks, where each transit node directly affects a high number of neighboring nodes. The inability to perform concurrent transmissions within a given neighborhood by spreading them across a number of orthogonal frequency channels, or for a single node to concurrently receive and transmit on different channels (by using multiple wireless interfaces), results in highly inefficient RF resource utilization. In this situation, each additional hop on the transmission path consumes a high amount of limited RF resources, not only lowering the QoS parameters of a given transmission (due to intra-path interference), but also affecting all neighboring transmission paths (causing inter-path interference).

Fig. 1. Maximum theoretical single-channel mesh throughput for IEEE 802.11g transmission technology.

Figure 1 illustrates the theoretical maximum throughput as a function of the number of hops in a single-channel WiFi mesh, where only one transmission is conducted at a time (there is no inter-path interference).
Two cases have been considered: an optimistic case, where it is assumed that each transit node has only two neighbors in interference range - its predecessor and the next node on the transmission path; and a pessimistic case, where it is assumed that all nodes are within interference range of each other. As can be seen from the above description of the efficiency problems of RF resource usage in single-channel mesh networks, it is crucial to limit the mesh transmission path length (number of hops) as much as possible - it will result in both resource conservation and a better (as well as more predictable) QoS level for users. This task requires appropriate path discovery protocols, which are adequately addressed in the current standard by employing a hybrid (reactive/proactive) solution. However, we would like to propose an extension of these standard mechanisms, which can provide significant advantages in the case of larger mesh structures.

III. IEEE 802.11S PATH SELECTION MECHANISM

The discussed standard utilizes the Mesh Discovery and Mesh Peering Management protocols to discover neighbor nodes and create peer relationships between them, creating the base topology of a mesh system. Their operation procedures are outside of our current interest, as for the purpose of this research we assume an already existing system topology. To discover transmission paths over a given network topology, the IEEE 802.11s system utilizes the Hybrid Wireless Mesh Protocol (HWMP). It is a hybrid protocol, consisting of both reactive and proactive path discovery mechanisms, able to function concurrently to provide fast response, adaptation to changing transmission conditions, and minimization of management overhead. Moreover, the standard allows for extensibility of path selection protocols, including the ability to use alternate path discovery solutions, as long as the same, single solution is utilized uniformly throughout the mesh network.
Nevertheless, the obligatory HWMP protocol must also be supported by all IEEE 802.11s devices for the sake of compatibility. Peering management mechanisms are responsible for assessing

node capabilities and deciding whether a connecting station is able to participate in given mesh procedures. The same mechanisms are then responsible for configuration of the newly connected node to use the appropriate path selection protocol. The basic path discovery mechanism of HWMP is the reactive Radio Metric Ad hoc On-Demand Distance Vector (RM-AODV) protocol. It is a modification of the well-known AODV [4] protocol, extended to use a radio-aware link routing metric. In the case of unmodified AODV, the metric used for path selection is the number of transmission hops on a given path. Such a solution does not take into account the quality of the traversed links, which makes it poorly suited for the wireless environment, where different links can provide radically different transmission quality. To address these issues, RM-AODV employs the Airtime Metric as a measure of link quality, taking into account both the current maximum data rate and the transmission error rate - see equation (1).

c_a = [O + B_t / r] * 1 / (1 - e_f)    (1)

where c_a is the link Airtime Metric value, O is the technology-dependent transmission overhead, r is the link throughput, B_t is the size of the test frame, and e_f is the frame error rate for a given B_t. Link load is not taken into account directly, due to rapid and unpredictable changes of this parameter in wireless multihop systems, but its impact is reflected by the link error rate parameter (which is significantly higher for highly loaded links). The Airtime Metric can be seen as the amount of link resources necessary to transmit a frame.

A. Reactive routing

As a reactive protocol, RM-AODV is activated by a source station when it has a frame to send to a previously unknown destination, or to a destination for which forwarding information is suspected not to be current. In such a case the node initiates path discovery. Each new path discovery initiated by a given station is assigned a unique (incrementing) HWMP Sequence Number.
This number, along with the intended destination address and a path metric field set to 0, is included in a Path Request (PREQ) message, subsequently sent by the originating station to all of its neighbors as a broadcast HWMP Path Selection frame. Each receiving neighbor checks if it already received a PREQ with: a greater HWMP Sequence Number - the received PREQ is considered to contain stale information and is discarded, as a more recent PREQ has already been received; the same HWMP Sequence Number and the same or greater path metric - the PREQ contains current information, but it is also discarded, because the node already received a PREQ of the same discovery which arrived through a better path (smaller path metric); the same HWMP Sequence Number and a smaller path metric - the PREQ contains current information and arrived by the best path (smallest metric) yet. In the last case the receiving station updates its forwarding information by remembering the station that it received the PREQ from as its best next hop on a path to the PREQ originator. The station then increases the path metric of the received PREQ by the metric of the link it arrived on, and re-broadcasts it to its neighbors. That way all stations in the mesh learn a next hop towards the PREQ originator, thereby creating mesh paths toward it. It should be noted that the described procedure is conducted with use of HWMP Path Selection frames sent to a broadcast address, and such frames are not subject to reception acknowledgement and retransmission procedures. Such a solution has been chosen to lower resource consumption and processing delay, at the cost of possibly missing an optimal path due to a loss of Path Selection frames. When an intended recipient of the communication (destination station) receives a PREQ which would trigger a re-broadcast according to the above rules, it updates its forwarding information and, instead of re-broadcasting the PREQ, sends a unicast Path Reply (PREP) to the PREQ originator along the just discovered (reverse) path.
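The link metric of equation (1) and the PREQ acceptance rules above can be sketched as follows. This is an illustrative model, not the standard's normative pseudocode; the numeric parameters (overhead, frame size, rates, error rates) are assumed values.

```python
def airtime_metric(overhead_s, test_frame_bits, rate_bps, frame_error_rate):
    """Equation (1): c_a = [O + B_t / r] * 1 / (1 - e_f)."""
    return (overhead_s + test_frame_bits / rate_bps) / (1.0 - frame_error_rate)

def accept_preq(state, seq, path_metric):
    """Duplicate-suppression rules for a received PREQ.
    state holds [best_seq, best_metric] seen so far; returns True if the
    PREQ should update forwarding info and be re-broadcast."""
    best_seq, best_metric = state
    if seq < best_seq:                      # stale discovery round: discard
        return False
    if seq == best_seq and path_metric >= best_metric:
        return False                        # same round, worse or equal path
    state[0], state[1] = seq, path_metric   # newer round or better path
    return True

state = [0, float("inf")]
good_link = airtime_metric(0.0002, 8192, 54e6, 0.1)   # fast, clean link
bad_link = airtime_metric(0.0002, 8192, 6e6, 0.5)     # slow, lossy link
assert accept_preq(state, 1, good_link)       # first PREQ of round 1: accept
assert not accept_preq(state, 1, bad_link)    # worse path, same round: drop
assert accept_preq(state, 2, bad_link)        # new discovery round: accept
```

Because the metric is accumulated link by link, the "smallest metric yet" rule makes each station converge on the cheapest known reverse path toward the originator.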
The PREP is forwarded by transit nodes along the path, each of them updating its forwarding information by remembering from which neighbor it received the PREP, thus forming a path to the PREP originator. As the PREP reaches the source station, a bi-directional path between the initiator of the discovery procedure and its subject is formed, through the presence of current next-hop forwarding information in all transit nodes. It should be noted that transit stations may re-broadcast a PREQ from the same discovery procedure multiple times, and the destination node may generate multiple PREP messages, if they receive multiple subsequent PREQs with decreasing path metric. However, due to the fact that a smaller metric most often corresponds to a shorter transmission delay, this is not likely. It is evident that such a procedure can be time consuming, especially in the case of large mesh networks and distant destinations. Moreover, broadcast procedures tend to consume a considerable amount of resources. To optimize the described procedure, it is possible to allow transit stations which already have current next-hop information toward the destination station to respond with a PREP. That allows the source station to learn the forward path to the destination quickly, but its PREQ still must be broadcast all the way towards the destination station, to form the reverse path from destination to source.

B. Proactive routing

Apart from the obligatory RM-AODV protocol, the IEEE 802.11s standard defines an optional proactive path discovery solution, which can be deployed concurrently with RM-AODV. This solution, sometimes called the Tree-Based Routing (TBR) protocol, consists of two independent mechanisms: Proactive Path Request (PPREQ) and Root Announcement (RANN). Both can be used to proactively create and maintain current paths between a selected mesh station (Root Mesh Station) and all other stations in the mesh.
Moreover, they reuse a significant number of mechanisms of the RM-AODV protocol, thereby simplifying their implementation. In the case of the first approach, PPREQ, a selected root station periodically originates PPREQ messages, which can be thought of as PREQ messages addressed to a broadcast destination. They are re-broadcast through the network according

to the same rules as in the case of RM-AODV. That results in a periodic refresh of unidirectional paths leading towards the Root Station in all stations of the mesh. If a mesh station predicts that a bi-directional path will be required, it can respond to the PPREQ with a unicast PPREP, which will be forwarded to the Root Station and create the path in the opposite direction. The Root Station can also request that all stations respond in such a fashion. Due to the relatively high consumption of resources in the case of the PPREQ method, an alternative solution has been included in the standard - the Root Announcement (RANN) mechanism. Instead of periodically sending PPREQ messages in HWMP Path Selection frames, the Root Station can choose to send RANN messages instead. The first difference between those two messages is that a RANN does not need to be sent in a dedicated frame, but can be included in Beacon frames, which are periodically broadcast by each mesh station as a part of the neighbor discovery mechanism, resulting in significant resource conservation. Moreover, while RANN messages are propagated through the network using the same procedure as PPREQ messages, their reception does not result in updating the receiving station's forwarding information. Instead, they can be used to decide whether an active update of this information should be undertaken. For this purpose the information about which neighbor the RANN message was received from is stored, and its path metric is compared with the metric of the current path that the station has to the Root Station. If the station does not have a path to the Root Station, or if its metric is worse than the RANN metric, the station can perform a reactive, unicast, RANN-assisted path discovery. For that purpose, the station sends a unicast PREQ message to the Root, by forwarding it to the station from which it received the RANN announcement. Such a PREQ is then forwarded, retracing the RANN path to the Root Station, updating forwarding information in transit stations and forming a path towards the PREQ originator.
Upon reception of the PREQ, the Root Station responds with a unicast PREP, which forms the path in the opposite direction. Due to the strictly unicast nature of such a discovery, it is both efficient (no broadcast flooding) and reliable, as unicast frames are subject to reception acknowledgement and retransmission.

C. Interworking

As stated before, the IEEE 802.11s standard aims to provide easy integration of the mesh network with other network technologies, in particular wired Ethernet technologies. An IEEE 802.11s MBSS integrates with external networks as another IEEE 802 access domain, complete with support for various IEEE 802.1D mechanisms (such as bridging). However, in contrast to other popular IEEE 802 technologies (for example Ethernet), MBSS operation is not primarily based on broadcast data delivery, as such an approach is not acceptable due to the limited resources of a wireless system. In this situation, delivery of data to addresses unknown within the MBSS cannot be conducted by simply broadcasting it to all stations for the bridging ones to receive and forward to external systems. To emulate this popular method of delivery, mesh stations connected to external networks, named Mesh Gates, support an extended suite of mesh mechanisms. The presence of Mesh Gates is advertised in the MBSS by proactive PPREQ/PPREP messages with a special Gate Announcement field set (if the Mesh Gate is also a Root Station) or by dedicated Gate Announcement (GANN) frames, distributed in a fashion similar to the already described RANN messages. Each Mesh Gate maintains a dedicated database (Proxy Information) of addresses known to be located in external networks accessible through a given gate. Moreover, it forwards its contents to other Mesh Gates through the MBSS with use of Proxy Update (PXU) messages. That makes all Mesh Gates aware which gate should receive frames addressed to a particular external destination. If a Mesh Gate receives a frame proxied by another Mesh Gate, it will forward it to the correct gate through the MBSS.
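The Proxy Information exchange described above can be modeled as a per-gate table that PXU messages keep synchronized. This is a simplified sketch for illustration only; the class and method names are our assumptions, not the IEEE 802.11s message formats.

```python
class MeshGate:
    """Simplified Mesh Gate proxy table (a model, not the 802.11s format)."""

    def __init__(self, name: str):
        self.name = name
        self.proxied = {}       # external address -> name of responsible gate

    def learn_external(self, addr: str) -> None:
        """Record an address seen behind this gate's external port."""
        self.proxied[addr] = self.name

    def receive_pxu(self, entries: dict) -> None:
        """Merge a Proxy Update (PXU) received from another gate."""
        self.proxied.update(entries)

    def gate_for(self, addr: str):
        """Which gate should receive frames for addr (None if unknown)."""
        return self.proxied.get(addr)

g1, g2 = MeshGate("gate1"), MeshGate("gate2")
g1.learn_external("00:11:22:33:44:55")
g2.receive_pxu(g1.proxied)              # PXU propagates the mapping
assert g2.gate_for("00:11:22:33:44:55") == "gate1"
assert g2.gate_for("ff:ee:dd:cc:bb:aa") is None
```

With the table synchronized, a gate receiving a frame for an address proxied elsewhere can forward it to the responsible gate instead of flooding it.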
However, it should be noted that both proxy information exchange and gate-to-gate data forwarding are conducted through the MBSS and thus consume limited RF resources. Due to the fact that a Mesh Gate is a functional equivalent of an IEEE 802.1D bridge, if more than one gate is connected to a given external network, only one of them can be active at a time, as more could lead to the creation of broadcast loops. For this purpose a dedicated protocol must be employed at Mesh Gates - most often the well-known Rapid Spanning Tree Protocol (RSTP) [3]. A mesh station which cannot discover an MBSS path to a given address assumes that it is located in an external network. In such a case it sends the data frame to all known Mesh Gates for further delivery - due to RSTP activity, the frame is delivered to each external network only once, as only one Mesh Gate is active per such network. To optimize further data delivery, the Mesh Gate appropriate for a given destination sends a PXU message to the station, which allows it to use a single Mesh Gate in the continued transmission.

IV. PATH DISCOVERY EXTENSION

The described mechanisms allow for significant flexibility in path discovery operations, taking advantage of both reactive routing (optimized paths created on demand) and fast connection establishment to critical mesh locations, provided by proactive solutions. Moreover, interworking with outside networks is a subject of much attention, allowing seamless integration of the mesh network with other IEEE 802 ISO-OSI layer 2 systems. However, while an IEEE 802.11s mesh network can provide connectivity with destinations in external networks, it is unable to use the resources provided by these networks to support connectivity of intra-mesh destinations.
Furthermore, it is unable to utilize multiple mesh gates connecting the MBSS with the same external network segment (the RSTP protocol disables frame forwarding in all but one), resulting in the formation of longer paths within the MBSS (as Mesh Gates near a given station can be disabled by RSTP). The proposed modifications of the IEEE 802.11s mesh path discovery and forwarding protocols can be summarized as follows: introduction of the ability to form a peering relationship between mesh gates, using the external network as a transmission medium,

introduction of the ability for a mesh station to use any of the mesh gates connected to a single external network segment. The first proposition allows using the fast cable connections frequently present between mesh gates connected to fixed infrastructure to form transmission paths between intra-MBSS destinations. Due to the fact that the Airtime Link Metric for such a transmission link will be very small (throughput higher and error rate lower than in wireless technology by several orders of magnitude), the use of such links will be highly preferred. Also, because it is highly probable in a large mesh deployment that there is a significant number of mesh gates able to communicate with use of the external network and distributed through the mesh in a more or less uniform fashion, there is a high probability of reducing the average path lengths of intra-mesh transmission. In fact, for a standard mesh deployment in a large, multi-company office building, utilizing about one mesh gate per floor, the described mechanism will tend to reduce average mesh path lengths to 2-3 hops - which makes it suitable even for real-time multimedia communication. To provide such functionality we need to allow transmission of frames of the IEEE 802.11s format through the external network. For the sake of simplicity and robustness (the ability to function with different external network technologies) of the proposed solution, and to maintain compatibility with the IEEE 802.11s standard, we propose to employ a simple mesh frame encapsulation in the external network's unicast frames for this purpose. Moreover, it is necessary to provide the ability for mesh gates to detect each other's presence (and addresses for unicast communication) through the external network. In the case of popular broadcast networks (such as Ethernet), an encapsulation of standard mesh peering messages and their transmission to a broadcast address will serve the purpose.
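The proposed encapsulation can be sketched as wrapping the mesh frame bytes in an external-network unicast frame. The field layout below is our illustration (the paper does not define a concrete header); the EtherType used is the IEEE local experimental value, chosen here only as a plausible placeholder.

```python
import struct

MESH_ENCAP_ETHERTYPE = 0x88B5   # IEEE 802 local experimental EtherType (assumed choice)

def encapsulate(mesh_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    """Wrap a mesh frame in an Ethernet II frame for gate-to-gate delivery
    over the external wired segment (illustrative layout)."""
    assert len(dst_mac) == 6 and len(src_mac) == 6
    return dst_mac + src_mac + struct.pack("!H", MESH_ENCAP_ETHERTYPE) + mesh_frame

def decapsulate(eth_frame: bytes) -> bytes:
    """Recover the original mesh frame at the receiving Mesh Gate."""
    (ethertype,) = struct.unpack("!H", eth_frame[12:14])
    assert ethertype == MESH_ENCAP_ETHERTYPE
    return eth_frame[14:]

mesh = b"\x00mesh-frame-payload"
wire = encapsulate(mesh, b"\xaa" * 6, b"\xbb" * 6)
assert decapsulate(wire) == mesh
```

Because the wrapper is a plain unicast frame of the external technology, the scheme works over any network that delivers addressed frames, which matches the robustness goal stated above.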
However, we should remember that in the case of a wireless network, peer relation establishment and maintenance mechanisms use constantly and frequently generated messages, which are not retransmitted by receiving nodes, thereby limiting such traffic to a 1-hop neighborhood. In the case of broadcast transmission in a wired network, such messages would be transmitted across the whole broadcast domain, consuming network resources. While dissemination of mesh gate presence information widely through the network is advantageous to our purpose, the frequency required in the case of the unstable IEEE 802.11 RF environment is highly superfluous in the case of cable technologies. Because of that, the frequency of mesh detection and peering message exchanges through the cable network needs to be reduced. Moreover, a multicast group transmission can be considered instead of broadcast, if the external network technology in use supports it. As an additional advantage of the described modification, mesh gate to mesh gate communication (such as proxy-related management messages and gate-to-gate data forwarding) can be accomplished by forwarding the appropriate messages through the external network instead of the MBSS, thereby conserving network resources in general, and particularly in the important Mesh Gate neighborhoods, where we expect high levels of network traffic. We also propose to allow using all existing mesh gates as points of contact with external networks, in contrast with the unmodified IEEE 802.11s standard, where only one active mesh gate can be connected to the same external network segment due to the already mentioned broadcast loop effect. We propose to limit the use of RSTP to disabling forwarding between a Mesh Gate and the external network, while leaving the remaining functionality of the Mesh Gate active. Frames addressed to external (or unknown) addresses, received by non-forwarding Mesh Gates, will be delivered (by the mesh link through the external network, due to its low Airtime Metric) to the fully active gate for further forwarding.
This modification prevents an unnecessary reduction of the number of Mesh Gates which can be used by a mesh station to reach external networks, thereby limiting path lengths through the MBSS. However, we must remember that a station initially forwards frames with unknown addresses to all known Mesh Gates. With our modification this would result in unnecessary traffic and in the forwarding of multiple copies of such frames to a single external network. To prevent it, we introduce an additional element of the Gate Announcement message which uniquely identifies the network segment the sending mesh gate is connected to (Segment Identifier, SGID), by containing the address of the Mesh Gate currently actively forwarding to this segment. Mesh stations then send data frames to only one Mesh Gate from a group of Mesh Gates sharing the same SGID. To illustrate the results of introducing the proposed extensions, we have performed a simulation experiment utilizing the OMNeT++ simulation engine [5]. As our modifications are intended mainly for larger mesh networks, we performed a series of simulations of systems containing 100 stations deployed randomly over a 2000x2000 m area. A log-distance path loss propagation model has been used [6] to predict the loss a signal encounters in densely populated areas. All nodes were equipped with a single IEEE 802.11g [7] radio interface using the same RF transmission channel. The standard IEEE 802.11s discovery and peering mechanisms were then used to form a mesh structure, but configured not to form peer links with nodes of medium or worse link quality if the node is able to form at least one better quality peer link. This limited the number of links in the mesh structure, but improved their quality. For each scenario 50 simulation runs have been performed, each with 1 test TCP connection between randomly selected mesh stations at least 500 m apart, and 20 such connections generating background traffic.
A chosen number of modified Mesh Gates have been activated at randomly selected stations, starting with 0 (unmodified IEEE 802.11s MBSS). All Mesh Gates are connected to a single Gigabit Ethernet network. The results, in the form of the maximum and average MBSS path length and the average throughput of the test connection, are presented in Figures 2 and 3. It is evident that the presence of Mesh Gates modified in the proposed fashion results in a reduction of both average and maximum path lengths. Moreover, the obtained throughput of the connection is also significantly better, despite the observed tendency to concentrate transmission paths in the vicinity of Mesh Gates. The resulting concentration of traffic does not impact the

Fig. 2. Maximum and average path length for different numbers of modified Mesh Gates

Fig. 3. Average test connection throughput for different numbers of modified Mesh Gates

transmission as severely as intra-path interference does in the case of longer mesh paths.

V. APPLICATION LAYER SERVICE DISCOVERY EXTENSION

On the other hand, the fact that traffic sources, such as various application layer servers, can be located within a mesh network (one of its most attractive usage scenarios is a self-organizing LAN/campus infrastructure) makes it important to utilize efficient application level service discovery solutions. Such mechanisms allow clients to connect to the most appropriate server in terms of quality of service and resource consumption. Unfortunately, the fact that an IEEE 802.11s mesh is presented to higher ISO-OSI layers as a single layer 2 broadcast domain does not allow any higher layer mechanism to obtain reliable information about its structure. For example, IP-based service discovery procedures can be used to select a server (from a preconfigured client-side list) which provides the client with the best IP connectivity at a given moment. They do not, however, take into account the actual mesh transmission path length: for the IP layer, all destinations within a mesh are only 1 hop away. Such an approach can lead to inefficiently long transmission paths, and the dynamic organization of a mesh network tends to make such paths very unstable. An analysis based on real-world device usage data in a multi-office building environment confirms the above problem and indicates high QoS parameter variation for long mesh paths. To mitigate this problem we propose a cross-layer integration solution, allowing application level services and servers to advertise themselves using layer 2 mesh management messages. Such an approach allows mesh structure aware dissemination of information about available services and facilitates the selection of the most appropriate server for a given client.
Additionally, application servers can include in advertisement messages elements such as: information about supported access methods, authentication schemes and other client requirements, server load information, and initial configuration information for connecting clients, resulting in a significant reduction of the initial high level service access time. For the servers to advertise their presence, we propose to create a new type of IEEE 802.11s management message: the Higher Layer Service Advertisement (HLSA). This message is to be generated by active application servers in a form based on the RANN message. All the information present in the RANN message should be included in the HLSA, functionally making the generating server a Root Station using the Root Announcement proactive mechanism. Furthermore, the server should include in the message additional informational elements describing the provided application layer service. The specific set of information parameters will vary depending on the particular service type, but the information should be prepared in a way that minimizes the higher layer message exchanges necessary to access the service. It is also advisable for the servers to set the Time To Live (TTL) field in HLSA messages in accordance with the requirements of the service they provide; that way the dissemination of HLSA messages is limited to clients which are able to access the service with satisfying Quality of Experience, and network resources are conserved. Stations receiving an HLSA should process it in the same way as RANN messages, by dispersing it through the network up to the TTL limit.
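A minimal sketch of how an HLSA message could be represented and re-disseminated under a TTL limit is given below. The field and function names are illustrative, not standard IEEE 802.11s element names.

```python
from dataclasses import dataclass, field

@dataclass
class HLSA:
    # RANN-derived fields (illustrative names, not standard field names)
    root_addr: str        # address of the advertising application server
    seq_num: int          # sequence number, as in RANN processing
    metric: int           # cumulative Airtime Metric from the server
    ttl: int              # remaining hops for dissemination
    service_info: dict = field(default_factory=dict)  # service-specific elements

def process_hlsa(msg: HLSA, link_metric: int):
    """Account for the incoming link's metric, decrement the TTL, and decide
    whether the station should re-disseminate the message further."""
    updated = HLSA(msg.root_addr, msg.seq_num, msg.metric + link_metric,
                   msg.ttl - 1, dict(msg.service_info))
    forward = updated.ttl > 0  # dissemination stops once the TTL is exhausted
    return updated, forward
```

Each relaying station thus accumulates the path metric exactly as for RANN, while the server-chosen TTL bounds how far the advertisement spreads.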
That way, all receiving clients will obtain information about:
- the services available in the network,
- the Airtime Metric and number of hops to the most appropriate server for a given client,
- the expected server performance (for example a measure of the current server load) and a set of application level, service specific requirements for the client, allowing it to decide if it should use a given server for service access,
- an optional set of network layer parameters (for example an IP-MAC address mapping making broadcast ARP resolution unnecessary) and service specific configuration parameters.

As a result we can expect a number of valuable advantages: a significant reduction of the initial service access delay for connecting clients; easy selection of the nearest available servers, resulting in shorter transmission paths providing both good and stable transmission parameters and a significant conservation of network resources; and a reduction of the network traffic generated by higher layer mechanisms, such as (in the case of an overlying IP network) DNS request-response, ARP broadcast request-response, etc.

To verify the proposed solution we have utilized the already described simulation environment, but instead of Mesh Gates (which are not present in this scenario), a number of IP application servers have been activated randomly through the MBSS. The results, presented in Figures 4 and 5, compare the maximum and average path lengths between an application client and its chosen server for the IP-based server discovery method [8] and the proposed, cross-layer HLSA procedure.

Fig. 4. Maximum client-server path length for IP and HLSA-based server discovery

Fig. 5. Average client-server path length for IP and HLSA-based server discovery

It can be seen that IP-based discovery results in the selection of more distant application servers, compared to the proposed solution, which provides clients with the information necessary to select the server with the best Airtime Metric. The cause of this effect is the large number of factors (originating from both ISO-OSI layer 2 and layer 3 mechanisms) which impact the relatively high-level IP-based communication quality assessment. Also, the limited time which can be devoted to such an on-demand assessment makes the result prone to errors caused by short-time fluctuations of path quality. It can be observed that the longer the distance between client and server, the more IP discovery is prone to errors and less predictable. On the other hand, Airtime Metric values for wireless links are calculated by mesh stations proactively and over longer periods of time, producing more reliable results. The use of the proposed mechanism allows that information to be delivered to the client almost instantly. The proposed HLSA mechanism is clearly able to provide shorter paths, thus conserving network resources and providing better service to clients.

Additionally, as an example of the advantages in initial service connection time provided by the solution, Table I contains a comparison of the popular DNS-based service discovery [9] and server selection method, deployed in the previously described, simulated IEEE 802.11s environment, with the proposed HLSA-based mechanism.

TABLE I
EXAMPLE IP-BASED AND CROSS-LAYER SERVER DISCOVERY PROCEDURE TIMINGS

Step 1, Service discovery:
  IP: Broadcast ARP Req-Resp (for the predefined DNS server), <1 s; DNS Query-Response (for SRV records), <200 ms
  HLSA: Gathering / comparing HLSA announcements, <800 ms
Step 2, Server selection:
  IP: Broadcast ARP Req-Resp (for the obtained set of 10 application servers), <1.5 s; ICMP Test Req-Resp (for the obtained set of 10 application servers), <5 s
  HLSA: HLSA-assisted mesh path discovery, <300 ms
Step 3, Server connection:
  IP: Connection to server and initial, service-specific handshake, <500 ms
  HLSA: Connection to server and initial, service-specific handshake initiated by HLSA, <300 ms
Total time: IP <8.2 s; HLSA <1.4 s
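The client-side selection logic implied by the gathered HLSA information, i.e. pick an admissible server with the best Airtime Metric and use the reported load as a tie-breaker, can be sketched as follows. The data layout is our own assumption for illustration.

```python
def choose_server(candidates, client_caps):
    """Select a server from gathered HLSA data.

    candidates: list of dicts with 'addr', 'airtime_metric', 'load' and
    'requirements' (a set of capabilities the client must support).
    Returns the address of the admissible server with the lowest Airtime
    Metric, breaking ties by the advertised load, or None if no server's
    requirements are satisfied by the client.
    """
    admissible = [c for c in candidates if c["requirements"] <= client_caps]
    if not admissible:
        return None
    best = min(admissible, key=lambda c: (c["airtime_metric"], c["load"]))
    return best["addr"]
```

Because the metric and load arrive in the advertisements themselves, this decision needs no extra probing round-trips, which is where the time savings of Table I come from.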

It should be noted, however, that the presented time values are highly dependent upon the mesh structure, various network and service configuration parameters, and the network load, and as such should be treated only as an example. Also, the highly inefficient IP ARP resolution can be eliminated altogether by including the appropriate layer 3/layer 2 mappings in the HLSA messages. An optional inclusion of initial, service-specific configuration parameters can allow a further time gain, but will most often require a modification of server and client software.

VI. CONCLUSIONS

All proposed extensions retain full compatibility with the IEEE 802.11s standard, taking advantage of its inbuilt customization functions and concentrating on extending their scope of usage rather than introducing completely new solutions in place of already standardized ones. The proposed solutions can generally be classified as integration mechanisms, operating across network and ISO-OSI layer boundaries. Their basic functionality and efficiency have been verified by means of a simulation model in a sizable (100 nodes), self-organizing mesh deployment scenario. The results clearly indicate their expected utility in real-world IEEE 802.11s mesh deployments, and further studies concerning their impact on the performance of specific applications are currently being conducted. The presented solutions are part of ongoing research concerning cross-layer integration, interworking and mobility support for IEEE 802.11s-based systems, which is reaching the implementation and testbed experiment stage.

ACKNOWLEDGMENT

Work supported in part by the Ministry of Science and Higher Education, Poland, under Grant N.

REFERENCES

[1] IEEE Standard for Information Technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Specific requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications - Amendment 10: Mesh Networking, IEEE Std.
[2] IEEE Standard for Information Technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Specific requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std.
[3] IEEE Standard for Local and Metropolitan Area Networks: Media Access Control (MAC) Bridges, IEEE Std.
[4] C. Perkins, E. Belding-Royer, and S. Das, "Ad hoc On-Demand Distance Vector (AODV) Routing," RFC 3561 (Experimental), Internet Engineering Task Force, Jul. 2003.
[5] A. Varga and R. Hornig, "An overview of the OMNeT++ simulation environment," in Simutools '08: Proceedings of the 1st International Conference on Simulation Tools and Techniques for Communications, Networks and Systems & Workshops. Brussels, Belgium: ICST, 2008.
[6] M. Hidayab, A. Ali, and K. Azmi, "WiFi signal propagation at 2.4 GHz," in Microwave Conference, 2009. APMC 2009. Asia Pacific, Dec. 2009.
[7] IEEE Standard for Information Technology - Telecommunications and Information Exchange Between Systems - Local and Metropolitan Area Networks - Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std.
[8] A. Delphinanto, T. Koonen, and F. den Hartog, "Real-time probing of available bandwidth in home networks," IEEE Communications Magazine, vol. 49, no. 6, Jun. 2011.
[9] M. Cotton, L. Eggert, J. Touch, M. Westerlund, and S. Cheshire, "Internet Assigned Numbers Authority (IANA) Procedures for the Management of the Service Name and Transport Protocol Port Number Registry," RFC 6335 (Best Current Practice), Internet Engineering Task Force, Aug. 2011.

Single Hysteresis Model for Multi-Service Networks with BPP Traffic

Maciej Sobieraj, Maciej Stasiak, Joanna Weissenberg and Piotr Zwierzykowski
Poznań University of Technology, Chair of Communication and Computer Networks, Poznań, Poland

Abstract - This paper presents a single hysteresis model for multi-service network systems that are offered Erlang, Engset and Pascal traffic streams. The occupancy distribution in the system is approximated by a weighted sum of occupancy distributions in multi-threshold systems. The distribution weights are obtained on the basis of a specially constructed Markovian switching process. The results of calculations for radio interfaces in which the single hysteresis mechanism has been implemented are compared with the results of simulation experiments. The study demonstrates the high accuracy of the proposed model.

Index Terms - threshold models; hysteresis mechanism; multi-service BPP traffic; WCDMA radio interface

I. INTRODUCTION

Many network systems make use of traffic management mechanisms that aim at an increase in the traffic capacity of the network. Such mechanisms are primarily to be found in access networks, which are characterized by a low capacity of resources. A good example of the above is provided by 2G, 3G, and 4G radio access networks, in which radio interface capacities are very limited. Traffic management in these systems usually employs such mechanisms as [1]: resource reservation, partial limitation of resources, priorities, traffic overflow, non-threshold compression and threshold compression. The operation of the reservation mechanism is based on making the capability of reserving resources for pre-selected call classes dependent on the load level of the system [2], [3]. Many operators take advantage of the mechanism of partial limitation of resources, which limits the number of serviced calls of appropriate traffic classes to a predefined value [4].
Priorities are, in turn, based on the prioritization of defined call classes. Prioritized calls can, in the case of a lack of free resources, effect a termination of service for calls with lower priority [1]. The overflow mechanism is one of the oldest mechanisms used in telecommunications [5], [6]. When the mechanism applies, calls that cannot be serviced in a given system due to its current occupancy level are redirected to other systems that still have free resources. Non-threshold compression is, in turn, based on the possibility of decreasing the throughput of serviced calls of selected classes in order to obtain free resources for servicing new calls [1]. This mechanism forms the basis of the High Speed Packet Access (HSPA) technology in Universal Mobile Telecommunications System (UMTS) networks [7]. In the threshold compression mechanism, the bit rate allocated to a new call depends on the load of the system. The mechanism is used to service elastic and adaptive traffic [8]. The first model of a threshold system, the so-called Single Threshold Model (STM), was devised in [9] and concerned a system called the Single Threshold System (STS). Works [8], [10] consider systems with a number of independent thresholds, the so-called Multi Threshold Systems (MTS), and the corresponding analytical models, the so-called Multi Threshold Models (MTM). Paper [11] describes a variant of the single-threshold system, the Single Hysteresis System (SHS), and the corresponding analytical model, the Single Hysteresis Model (SHM). In SHS, two thresholds, in place of one, are introduced. The operation of each of the thresholds depends on the direction of changes in the load of the system. The introduction of hysteresis is followed by a more stable operation of the system, which can be proved by a decreased number of transitions between areas with high and low load. The present paper proposes a generalization of SHM for traffic streams of the BPP type.
The very name BPP [4], [10] stems from the names of the types of call streams (Bernoulli, Poisson and Pascal) that comprise Engset, Erlang and Pascal traffic, respectively. The paper is structured as follows. Section II presents analytical models of BPP traffic. Section III discusses the STM, whereas Section IV describes the SHS with BPP traffic. In Section V, the results of the analytical calculations are compared with the results of simulation experiments of the Wideband Code Division Multiple Access (WCDMA) radio interface that is used in the UMTS network. Section VI presents the conclusions resulting from the study.

II. MULTI-SERVICE BPP TRAFFIC

Multi-service traffic is a mixture of different traffic streams that are differentiated from one another by the number of allocation units, the so-called Basic Bandwidth Units (BBU) [3], that are necessary to set up a connection in the system. Traffic streams can be generated by an infinite (Erlang traffic) or a finite (Engset and Pascal traffic) number of traffic sources. The intensity of Erlang traffic of class $i$, generated when the system occupies $n$ BBUs, can be expressed by the following formula:
$$A_i(n) = A_i = \lambda_i / \mu_i = \mathrm{const}, \qquad (1)$$
where $\lambda_i$ is the average arrival intensity of calls of class $i$ and $\mu_i$ is the average service intensity of calls of class $i$.

The intensities of Engset traffic of class $j$ and Pascal traffic of class $k$ depend on the state of the system and are defined in the following way [10]:
$$A_j(n) = [N_j - y_j(n)]\,\alpha_j = [N_j - y_j(n)]\,\frac{\gamma_j}{\mu_j}, \qquad (2)$$
$$A_k(n) = [N_k + y_k(n)]\,\alpha_k = [N_k + y_k(n)]\,\frac{\gamma_k}{\mu_k}, \qquad (3)$$
where $N_c$ is the number of traffic sources of class $c$, $y_c(n)$ is the number of calls of class $c$ serviced in state $n$, and $\alpha_c$ is the traffic intensity from one free source of class $c$:
$$\alpha_c = \gamma_c / \mu_c, \qquad (4)$$
with $\gamma_c$ the arrival intensity of calls from one free source of class $c$. In the adopted notation, the indexes $i$, $j$, and $k$ denote classes of Erlang, Engset and Pascal traffic, respectively. The index $c$ is used to refer to any traffic class.

III. SINGLE THRESHOLD SYSTEM

A. Single Threshold System - working idea

Assume that for pre-defined call classes one threshold $Q$ has been introduced. The operation of the system with a single threshold is presented, for calls of one class $c$, in Fig. 1. If the load of the system is very low ($0 \le n \le Q$), the Call Admission Control (CAC) function allows a new call of class $c$ to be serviced with the maximum number of BBUs, equal to $t_{c,1}$. Following an increase in the load of the system, after the threshold $Q$ has been exceeded, i.e. within the area of maximum load ($Q < n \le V$), the CAC function admits for service a call of class $c$ with the minimum number of BBUs, equal to $t_{c,2}$.

B. Adaptive and elastic traffic in STS

In systems servicing the so-called adaptive traffic [8], the change applies only to the number of BBUs necessary to set up a connection of a given class. The assumption is that traffic of this type requires all data to be sent, while a decrease in the number of allocated BBUs is followed by a deterioration of the Quality of Service (QoS) parameters. A good example of the above is the "full-rate" and "half-rate" voice service in the GSM network.
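A minimal computational sketch of the state-dependent intensities (1)-(4) is given below. The function name and argument layout are our own, not from the paper; for Erlang streams only the arrival and service intensities are needed.

```python
def offered_traffic(kind, lam=None, gamma=None, mu=1.0, n_sources=None, y_n=0.0):
    """State-dependent offered traffic A_c(n) for the three BPP stream types
    (Eqs. (1)-(3)); y_n is the mean number of class-c calls in service in
    state n, and gamma/mu give the per-free-source intensity alpha (Eq. (4))."""
    if kind == "erlang":                  # infinite source population, Eq. (1)
        return lam / mu                   # A_i(n) = lambda_i / mu_i = const
    alpha = gamma / mu                    # alpha_c = gamma_c / mu_c, Eq. (4)
    if kind == "engset":                  # finite sources, Eq. (2)
        return (n_sources - y_n) * alpha  # A_j(n) = [N_j - y_j(n)] * alpha_j
    if kind == "pascal":                  # finite sources, Eq. (3)
        return (n_sources + y_n) * alpha  # A_k(n) = [N_k + y_k(n)] * alpha_k
    raise ValueError("unknown stream type: " + kind)
```

Note the opposite signs: Engset traffic decreases and Pascal traffic increases with the number of calls already in service, which is the only difference between the two finite-source cases here.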
Elastic traffic [8] also requires all data to be transferred, thus a decrease in the allocated number of BBUs is followed by an increase in the service time, i.e. in the parameter $1/\mu_c$. HSDPA traffic in the UMTS network is an example of the above.

Fig. 1. Single Threshold System - working idea.

Thus, the service of elastic traffic causes the value of the offered traffic in particular load areas to change. Therefore, in the case of the service of elastic traffic, Formulas (1), (2) and (3) can be rewritten in the following way:
$$A_{i,s}(n) = A_i = \lambda_i / \mu_{i,s} = \mathrm{const}, \qquad (5)$$
$$A_{j,s}(n) = [N_j - y_{j,s}(n)]\,\alpha_{j,s} = [N_j - y_{j,s}(n)]\,\frac{\gamma_j}{\mu_{j,s}}, \qquad (6)$$
$$A_{k,s}(n) = [N_k + y_{k,s}(n)]\,\alpha_{k,s} = [N_k + y_{k,s}(n)]\,\frac{\gamma_k}{\mu_{k,s}}, \qquad (7)$$
where $s$ indicates the load area. We can distinguish two load areas in the STM: $s = 1$ for $n \in \langle 0; Q\rangle$ and $s = 2$ for $n \in (Q; V\rangle$.

C. Occupancy distribution in STS

The occupancy distribution in STS with BPP traffic can be determined on the basis of the model worked out in [9] for the STM with Erlang traffic and in [10] for the MTM with BPP traffic. According to this model, the occupancy distribution in STS can be written as follows:
$$n\,[P_n]_Q^{(V)} = \sum_{s=1}^{2}\Big\{\sum_{i\in M_1} A_{i,s}(n-t_{i,s})\,t_{i,s}\,\sigma_{i,s}(n-t_{i,s})\,[P_{n-t_{i,s}}]_Q^{(V)} + \sum_{j\in M_2} A_{j,s}(n-t_{j,s})\,t_{j,s}\,\sigma_{j,s}(n-t_{j,s})\,[P_{n-t_{j,s}}]_Q^{(V)} + \sum_{k\in M_3} A_{k,s}(n-t_{k,s})\,t_{k,s}\,\sigma_{k,s}(n-t_{k,s})\,[P_{n-t_{k,s}}]_Q^{(V)}\Big\}, \qquad (8)$$
where: $[P_n]_Q^{(V)}$ is the probability of $n$ BBUs being busy in an STS with the capacity of $V$ BBUs; $M_x$ is the set of call classes of Erlang ($x = 1$), Engset ($x = 2$) and Pascal ($x = 3$) type, respectively; $t_{c,s}$ is the number of BBUs required to set up a connection of class $c$ in load area $s$; $A_{c,s}(n)$ is the average traffic intensity of class $c$ offered to the system in occupancy state $n$ belonging to load area $s$ (for adaptive traffic this parameter is determined by (1)-(3); for elastic traffic, by (5)-(7)); and $\sigma_{c,s}(n)$ is the conditional transition probability, which in (8) acts as a switching coefficient between the appropriate load areas.
For traffic classes that do not undergo the threshold mechanism, this parameter always takes on the value of one:
$$\sigma_{c,s}(n) = 1. \qquad (9)$$
For all traffic classes that undergo the threshold mechanism, the value of the parameter $\sigma_{c,s}(n)$ is defined in the following way:
$$\sigma_{c,1}(n) = \begin{cases} 1 & \text{for } n \le Q, \\ 0 & \text{for } n > Q, \end{cases} \qquad \sigma_{c,2}(n) = \begin{cases} 0 & \text{for } n \le Q, \\ 1 & \text{for } n > Q. \end{cases} \qquad (10)$$
To determine the occupancy distribution in the STM according to (8), it is necessary to determine the intensities of the offered Engset $A_{j,s}(n)$ and Pascal $A_{k,s}(n)$ traffic streams in the individual states of the service process. These values can be determined on the basis of the parameter $y_{c,s}(n)$, i.e. the number of calls of a given class serviced in state $n$ belonging to load area $s$. This parameter can be approximated by the

Fig. 2. Thresholds for the first (a) and the second (b) scenarios.

average number of calls of a given class that are serviced in occupancy state $n$ [1], [10]:
$$y_{c,1}(n) = A_{c,1}(n-t_{c,1})\,\sigma_{c,1}(n-t_{c,1})\,[P_{n-t_{c,1}}]_Q^{(V)} / [P_n]_Q^{(V)}, \quad \text{for } n \le Q + t_{c,1}, \qquad (11)$$
$$y_{c,2}(n) = A_{c,2}(n-t_{c,2})\,\sigma_{c,2}(n-t_{c,2})\,[P_{n-t_{c,2}}]_Q^{(V)} / [P_n]_Q^{(V)}, \quad \text{for } n > Q + t_{c,2}. \qquad (12)$$
Notice that in order to determine the occupancy distribution (8) in the STM it is necessary to determine the values $y_{c,s}(n)$. These parameters can be determined on the basis of (11) and (12), which in turn require the knowledge of the distribution (8). Therefore, the determination of the occupancy distribution in the STM requires the construction of a special iterative program, which is discussed in detail in [10]. After the determination of the occupancy distribution in the STM it is possible to determine the blocking probabilities for the individual call classes:
$$E_c = \sum_{n=V-t_{c,2}+1}^{V} [P_n]_Q^{(V)}. \qquad (13)$$
Formula (13) expresses the sum of the blocking states for calls of class $c$ in the highest ($s = 2$) load area.

D. Modified threshold values

Consider an STS in which the threshold $Q$ for class $c$ calls has been introduced (Fig. 1). In the model, the occupancy states of the system are divided into two load areas. Assume that the mode of operation of the STS depends on the direction of the load change. To analyze the system we can consider two scenarios of its operation. The first scenario assumes that the load of the system increases. This situation is shown for class $c$ in Fig. 2a. It is assumed in Fig. 2a that, after the threshold $Q$ has been exceeded, the number of demanded BBUs changes from $t_{c,1}$ BBUs in the area ($n \le Q$) to $t_{c,2}$ BBUs in the area ($Q < n \le V$). In such a system, the occupancy distribution can be approximated by (8), determined for the STM described in Section III-C.
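At this point the full single-threshold machinery (8), (10) and (13) is in place; a simplified numerical sketch for purely Erlang (state-independent) streams is given below. Engset and Pascal streams would additionally require the iterative determination of $y_{c,s}(n)$ discussed above. All identifiers are our own; for a class not subject to the threshold, simply pass the same $A$ and $t$ for both areas.

```python
def stm_distribution(V, Q, classes):
    """Occupancy distribution of a Single Threshold System, Eq. (8), for
    Erlang-only (state-independent) streams.  Each class is a dict
    {'A': (A1, A2), 't': (t1, t2)}: offered traffic and BBU demand in the
    load areas below (s=1) and above (s=2) the threshold Q."""
    p = [0.0] * (V + 1)
    p[0] = 1.0
    for n in range(1, V + 1):
        acc = 0.0
        for c in classes:
            for s, (A, t) in enumerate(zip(c["A"], c["t"])):
                m = n - t
                if m < 0:
                    continue
                # switching coefficients, Eq. (10):
                # sigma_{c,1}(m) = 1 iff m <= Q, sigma_{c,2}(m) = 1 iff m > Q
                if (m <= Q) if s == 0 else (m > Q):
                    acc += A * t * p[m]
        p[n] = acc / n
    norm = sum(p)
    return [x / norm for x in p]

def blocking(p, V, t2):
    """Blocking probability for a class demanding t2 BBUs above Q, Eq. (13)."""
    return sum(p[V - t2 + 1:])
```

With a single class demanding the same number of BBUs in both areas, the recursion reduces to the classical Kaufman-Roberts form, which gives a quick sanity check against the Erlang loss formula.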
The second scenario assumes that the load of the system decreases. According to the definition of the threshold, it is the last state in which a call that demands the reduced number of BBUs ($t_{c,2}$) can appear. This transition is marked in Fig. 2b with a bold line. In order to determine the occupancy distribution, the so-called residual traffic, marked in Fig. 2b with a dotted line, has to be additionally considered. Residual traffic results from calls admitted in the lower load area ($n \le Q$) that have not yet been terminated before the system has been transferred from the lower load area to the higher load area ($Q < n \le V$). The relation between the threshold values for the first and the second scenarios can be determined on the basis of the relation [11]:
$$Q' = Q - t_{c,2} - 1, \qquad (14)$$
where $Q'$ defines the threshold for the second scenario that corresponds to the threshold $Q$ of the first scenario.

IV. SINGLE HYSTERESIS SYSTEM

A. Single Hysteresis System - working idea

The operation of the single hysteresis system for calls of one class $c$ is shown in Fig. 3a. In the STS one threshold $Q$ (Fig. 1) has been introduced, while in the SHS a pair of thresholds $Q_1$, $Q_2$ is introduced. With a change in the load from low to high, threshold $Q_1$ operates. With a change from high to low load, threshold $Q_2$ is used. Between $Q_1$ and $Q_2$ transition areas appear that form the hysteresis.

Fig. 3. System with the single hysteresis mechanism (a) and its decomposition into STM components (b)-(c).

In the SHS, two partly overlapping load areas can be distinguished. In the low load area ($s = 1$, $0 \le n \le Q_1$), the Call Admission Control (CAC) function admits for service a new call of class $c$ with the maximum number of BBUs, equal to $t_{c,1}$. In the high load area ($s = 2$, $Q_2 < n \le V$), the CAC function admits a new call of class $c$ with the lowest number of BBUs, equal to $t_{c,2}$.

B. Occupancy distribution in SHS

Figures 3b-3c show a decomposition of the SHM (Fig. 3a) into two STMs, where $s$ indicates the considered load area. The STMs are selected in such a way as to have the corresponding load area as high as possible. The arrows between Figures 3b and 3c indicate possible transitions between neighboring STMs: exceeding the threshold $Q_1$ triggers a change from the STM1 model to STM2, whereas exceeding the threshold $Q'_2$ is followed by a change from the STM2 model to STM1. The parameters $\alpha$ and $\beta$ that correspond to the arrows define the intensities of the transitions between the appropriate STMs. They are determined by the values of the streams that exceed the indicated thresholds; how these parameters are determined is presented in the next subsection. The transition STM2 → STM1 is aligned with the direction of the change in the load from high to low, therefore this transition is described by STM2 with the threshold that corresponds to the second scenario (Section III-D), i.e. threshold $Q'_2$ (Formula (14)). Hence, the thresholds in the considered system can be written in the following way:
$$Q_x = \begin{cases} Q_x & \text{for odd } x, \\ Q'_x = Q_x - t_{c,s} - 1 & \text{for even } x, \end{cases} \qquad (15)$$
where $t_{c,s}$ is the number of BBUs necessary to set up a connection of class $c$ in load area $s$, if threshold $Q_x$ defines the transition STMs → STMs-1. Let us denote the occupancy distributions in the STMs presented in Figures 3b-3c with the symbols $[P_n]_{Q_1}^{(V)}$ and $[P_n]_{Q'_2}^{(V)}$. These distributions can be determined on the basis of (8) for the appropriate pairs of thresholds adopted for a given STMs (Formula (15)).
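The decomposition described in this subsection, i.e. two STM occupancy distributions combined according to the time the system spends in each load area, can be sketched as follows. Here $P(1)$ and $P(2)$ are the switching-process probabilities derived in the next subsection, and the function name is our own.

```python
def hysteresis_distribution(p1, p2, alpha, beta):
    """Weighted sum of the two STM occupancy distributions of the SHM
    decomposition; p1 and p2 are the distributions of STM1 and STM2, and
    alpha/beta are the transition intensities between the load areas."""
    P1 = beta / (alpha + beta)   # fraction of time spent in the low-load area
    P2 = alpha / (alpha + beta)  # fraction of time spent in the high-load area
    return [P1 * a + P2 * b for a, b in zip(p1, p2)]
```

The mixture inherits normalization from its components: if both input distributions sum to one, so does the result, since the weights sum to one.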
The occupancy distribution in the SHM, $[P_n]_{H_1,H_2}^{(V)}$, can be modeled on the basis of the weighted sum of the occupancy distributions in the STMs into which the considered SHM is decomposed [12]:
$$[P_n]_{H_1,H_2}^{(V)} = P(1)\,[P_n]_{Q_1}^{(V)} + P(2)\,[P_n]_{Q'_2}^{(V)}, \qquad (16)$$
where $P(s)$ is the probability that the SHS stays in load area $s$, which corresponds to the average time the system spends in this particular load area.

C. Switching process in SHM

The probabilities $P(s)$ can be determined on the basis of the two-state Markov process [12], whose diagram is presented in Fig. 4. This process is an analytical model of the switches between the appropriate load areas. The states in the diagram correspond to the execution of the service process in a given load area (described by the corresponding STMs), whereas the parameters $\alpha$ and $\beta$ denote the intensities of transitions between the appropriate load areas. On the basis of the process presented in Fig. 4, it is possible to set up and solve the state equations in a convenient way. The solution is expressed by the following formulas:
$$P(1) = \frac{\beta}{\alpha + \beta}, \qquad P(2) = \frac{\alpha}{\alpha + \beta}. \qquad (17)$$

Fig. 4. Markovian switching process in SHM.

The transition intensity $\alpha$ determines the transitions in the direction from lower to higher load and is the sum of all the traffic streams that exceed the appropriate thresholds. Thus:
$$\alpha = \sum_{n=Q_1 - t_{\max}+1}^{Q_1}\;\sum_{c \in M_1 \cup M_2 \cup M_3} A_{c,s}(n)\,t_{c,s}\,\varphi_{c,s}(n). \qquad (18)$$
The parameter $\varphi_{c,s}(n)$ is calculated in the following way:
$$\varphi_{c,s}(n) = \begin{cases} 1 & \text{for } n > Q_x - t_{c,s}, \\ 0 & \text{for } n \le Q_x - t_{c,s}. \end{cases} \qquad (19)$$
The transition intensity $\beta$ determines the transitions in the direction from higher to lower load and is the sum of all the service streams that exceed the appropriate thresholds. Therefore:
$$\beta = \sum_{n=Q'_2}^{Q'_2 + t_{\max} - 1} \Big\{ \sum_{c \in M_1 \cup M_2 \cup M_3} y_{c,s}(n)\,t_{c,s}\,\varphi_{c,s}(n) + \sum_{c \in M_1 \cup M_2 \cup M_3} y_{c,s-1}(n)\,t_{c,s-1}\,\varphi'_{c,s-1}(n)\Big\}. \qquad (20)$$
The parameters $\varphi_{c,s}(n)$ and $\varphi'_{c,s}(n)$ are calculated as follows:
$$\varphi_{c,s}(n) = \begin{cases} 1 & \text{for } n < Q'_x + t_{c,s}, \\ 0 & \text{for } n \ge Q'_x + t_{c,s}, \end{cases} \qquad (21)$$
$$\varphi'_{c,s}(n) = \begin{cases} 1 & \text{for } n < Q'_x + t_{c,s-1} - t_{c,s}, \\ 0 & \text{for } n \ge Q'_x + t_{c,s-1} - t_{c,s}. \end{cases} \qquad (22)$$
The second sum within the braces in (20) includes the residual traffic of class $c$ that is serviced in area $s$ (Section III-D). Fig. 5a shows the traffic streams of class $c$ for the transition STMs → STMs+1, whereas Fig. 5b presents the service streams for the transition STMs+1 → STMs. The accompanying assumption is that $t_{c,s} = 3$ and $t_{c,s+1} = 1$.

V. NUMERICAL STUDY

In order to confirm the proposed method of blocking probability calculation, the results of the analytical calculations were compared with the results of simulation experiments of the WCDMA radio interface (uplink direction). In the simulation, 95% confidence intervals were calculated using the Student's t-distribution; they are almost entirely contained within the marks plotted in the figures. The study was carried out for users demanding a set of four traffic classes. In the examined WCDMA interface [12] it was assumed that the SHS was applied to the second traffic class. In all the presented examples, it was also assumed that: the capacity of the radio interface was equal to $V = 8000$ BBUs;

Fig. 5. Interpretation of the transitions STM_s → STM_{s+1} (a) and STM_{s+1} → STM_s (b).

Fig. 6. Blocking probability in the system without hysteresis (Erlang traffic streams) [12].

the number of BBUs required by calls of particular classes was: t_{1,1} = 53 BBUs, t_{2,1} = 257 BBUs, t_{2,2} = 129 BBUs, t_{2,3} = 64 BBUs, t_{3,1} = 503 BBUs, t_{4,1} = 1118 BBUs; the traffic of particular classes was offered to the system in the following exemplary proportions: A_{1,1}(0) t_{1,1} : A_{2,1}(0) t_{2,1} : A_{3,1}(0) t_{3,1} : A_{4,1}(0) t_{4,1} = 1 : 1 : 1 : 1; the hysteresis thresholds were assumed to be equal to Q2 = 4000 BBUs and Q1 = 6500 BBUs, respectively.

Figure 6 presents the results for the WCDMA radio interface without hysteresis. Figure 7 shows the radio interface to which an SHS was introduced for the second class. All figures express the results as a function of the total traffic offered to a single BBU. It is assumed in the presented experiments that all traffic streams are Erlang streams. Comparing the results in Figs. 6 and 7, it can be observed that the introduction of SHM results in a decrease in the blocking probability for all traffic classes. This phenomenon is related to a decrease in the number of BBUs allocated to calls that undergo hysteresis. Figure 8 presents the results of the study under the assumption that all traffic classes are of the Engset type (generated by 4000 traffic sources). Then, Fig. 9 presents the results for a mixture of Erlang (class 1), Engset (class 2, N_2 = 1000) and Pascal streams (classes 3 and 4, N_3 = N_4 = 1000). The results of the research study confirm the high accuracy of the proposed SHM model for BPP traffic. The error of the method does not exceed the error obtained in studies on systems with Erlang traffic only.

VI.
CONCLUSIONS

This paper proposes a new analytical model of an SHS to which a mixture of different BPP traffic streams is offered. The SHS, introduced into a given system, allows the blocking probability to be decreased for particular traffic classes and leads to a reduction in load fluctuations. The paper also presents a possible application of the SHS for traffic control in the UMTS network. All the presented simulation experiments for the WCDMA radio interface in the UMTS network confirm the good accuracy of the proposed analytical SHM model for traffic streams of the BPP type. Summing up, the single hysteresis mechanism can be successfully used in the call admission control function of communications and cellular networks.

Fig. 7. Blocking probability in SHS for Erlang traffic streams [12].

Fig. 8. Blocking probability in SHS for Engset traffic streams (N_1 = 1000, N_2 = 1000, N_3 = 1000 and N_4 = 1000).

Fig. 9. Blocking probability in SHS for Erlang (class 1), Engset (class 2, N_2 = 1000) and Pascal (classes 3 and 4, N_3 = N_4 = 1000) traffic streams.

REFERENCES
[1] M. Stasiak, M. Głąbowski, A. Wiśniewski, and P. Zwierzykowski, Modeling and Dimensioning of Mobile Networks: From GSM to LTE, Chichester: Wiley.
[2] J. Roberts, "Teletraffic models for the Telcom 1 integrated services network," in Proc. 10th International Teletraffic Congress (ITC 83), Montreal, Canada, 1983.
[3] M. Pióro, J. Lubacz, and U. Körner, "Traffic engineering problems in multiservice circuit switched networks," Comput. Networks ISDN Systems, vol. 20, p. 542.
[4] V. Iversen, Teletraffic Engineering and Network Planning, Lyngby: Technical University of Denmark.
[5] Q. Huang and V. Iversen, "Approximation of loss calculation for hierarchical networks with multiservice overflows," IEEE Trans. Commun., vol. 56, no. 3.
[6] M. Głąbowski, K. Kubasik, and M. Stasiak, "Modeling of systems with overflow multi-rate traffic," Telecommun. Systems, vol. 37, no. 1-3.
[7] H. Holma and A. Toskala, WCDMA for UMTS: HSPA Evolution and LTE, 5th ed., New York, London: John Wiley and Sons.
[8] V. Vassilakis, I. Moscholios, and M. D. Logothetis, "Call-level performance modelling of elastic and adaptive service-classes with finite population," IEICE Trans. Commun., vol. E91-B, no. 1.
[9] J. Kaufman, "Blocking with retrials in a completely shared resource environment," Performance Evaluation, vol. 15, no. 2.
[10] M. Głąbowski, A. Kaliszan, and M. Stasiak, "Modeling product form state-dependent systems with BPP traffic," Performance Evaluation, vol. 67, no. 2.
[11] M. Sobieraj, M. Stasiak, J. Weissenberg, and P. Zwierzykowski, "Analytical model of the single threshold mechanism with hysteresis for multi-service networks," IEICE Trans. Commun., vol. E95-B, no. 1.
[12] M. Sobieraj, M. Stasiak, and P. Zwierzykowski, "Model of the threshold mechanism with double hysteresis for multi-service networks," in Communications in Computer and Information Science, vol. 291, A. Kwiecień, P. Gaj, and P. Stera, Eds. Springer, 2012.
[13] J. Roberts, U. Mocci, and J. Virtamo, Eds., Broadband Network Teletraffic, Final Report of Action COST 242, Springer, 1996.

Packet Dispatching Scheme Employing Request Queues and Load Balancing for MSM Clos-Network Switches

Janusz Kleban and Krzysztof Janecko
Faculty of Electronics and Telecommunication, Poznan University of Technology, Poznań, Poland
NetWorkS!, Poznań, Poland

Abstract: In this paper, we propose the LB-MOMD (Load-Balanced Maximal Oldest-Cell First Dispatching) scheme for MSM (Memory-Space-Memory) Clos-network switches. The scheme is a modification of the MOMD algorithm proposed by R. Rojas-Cessa, E. Oki and H. J. Chao. The LB-MOMD scheme delivers better performance than the MOMD algorithm in terms of throughput, cell delay and input buffer size, under different traffic distribution patterns, for Bernoulli as well as bursty traffic models. The performance of both algorithms was evaluated using computer simulation, and the results are presented in this paper.

I. INTRODUCTION

The Clos switching fabric is well known and is probably the most studied switching fabric. It was proposed by C. Clos in 1953 and has been implemented in many telecommunication systems [1]. Currently, it is being adopted for use in large-capacity switches to meet the exponential traffic growth in the Internet. Clos-network packet switches can be classified based on the buffering strategy used, namely: Space-Space-Space (SSS or S3), Memory-Memory-Memory (MMM), Memory-Space-Memory (MSM), and Space-Memory-Memory (SMM) switches. In this paper we focus on the MSM Clos switching fabric made of nonblocking crossbars as switching modules. This architecture uses bufferless modules in the second stage, but has buffers in the first and third stages. Since the architecture has no buffers in the second-stage modules, the out-of-sequence problem is eliminated, but how to dispatch cells from the first stage to the second stage becomes an important issue. The cells are fixed-size packets obtained by segmentation of the variable-length packets arriving at the ingress line cards of a large-capacity switch.
Packets are reassembled from cells at the egress line cards before they depart. Cells are transmitted through the switching fabric during a time slot. Since buffers are allocated in the first stage, the Virtual Output Queuing (VOQ) mechanism may be implemented very easily to avoid the Head-Of-Line (HOL) blocking phenomenon [2]. HOL blocking causes an output to remain idle even if a cell destined for that (idle) output is waiting at an input, because the cell is held up by the packet at the head of the queue. In VOQ switches, every input provides a single, separate FIFO for each output. Such a FIFO is called a Virtual Output Queue. When a new cell arrives at the input port, it is stored in the queue associated with its destination and waits for transmission through the switching fabric. Internal blocking and output port contention problems in MSM Clos-network switches must be solved by a fast arbitration scheme. The arbitration scheme decides which items of information should be passed from inputs to arbiters and, based on that decision, how each arbiter picks one cell from among all input cells destined for the output. The arbitration scheme may be implemented in a centralized or distributed manner. In the case of centralized arbitration, the decisions are taken by a central arbiter. In the distributed manner, each input and output has its own arbiter operating independently of the others. Algorithms which assign the route between input and output modules are usually called packet dispatching schemes. Different dispatching schemes for three-stage Clos-network switches have been proposed in the literature [3]-[8]. The basic idea of these algorithms is to use the effect of desynchronization of arbitration pointers and a common request-grant-accept handshaking scheme. Most of these schemes can achieve 100% throughput under uniform traffic, but under nonuniform traffic the throughput is usually reduced. In [3] R. Rojas-Cessa, E. Oki, and H. J.
Chao proposed the maximum weight matching dispatching (MWMD) scheme for MSM Clos-network switches. The complexity of the MWMD scheme is too high for implementation in high-performance packet switching nodes. An alternative scheme, called maximal oldest-cell first matching dispatching (MOMD), with lower complexity than the MWMD algorithm, was also proposed by the authors. This algorithm meets the requirements of a maximal dispatching scheme and may be implemented in a real environment due to its low complexity. In this paper, the LB-MOMD scheme is proposed and evaluated. The scheme is a modification of the MOMD scheme. The main idea of the proposed modification is a different approach to the matching process within input modules: instead of a random selection, a simple load-balancing mechanism is employed in the LB-MOMD scheme. The remainder of this paper is organized as follows. Section II introduces background knowledge concerning the architecture of the MSM Clos switching fabric under the LB-MOMD scheme, which we refer to throughout this paper. Section III describes the LB-MOMD packet dispatching scheme, while Section IV presents simulation results obtained under the MOMD and LB-MOMD schemes. We conclude this paper in

Section V.

Fig. 1. The MSM Clos switching fabric architecture under the LB-MOMD scheme.

TABLE I. NOTATION FOR THE MSM CLOS SWITCHING FABRIC UNDER THE LB-MOMD SCHEME
IM: input module at the first stage
CM: central module at the second stage
OM: output module at the third stage
m: number of CMs
n: number of input/output ports in each IM/OM
k: number of IMs/OMs
i: IM number, where 0 <= i <= k-1
j: OM number, where 0 <= j <= k-1
h: input/output port number in an IM/OM, where 0 <= h <= n-1
r: CM number, where 0 <= r <= m-1
IM(i): the (i+1)th input module
CM(r): the (r+1)th central module
OM(j): the (j+1)th output module
IP(i, h): the (h+1)th input port at IM(i)
OP(j, h): the (h+1)th output port at OM(j)
LI(i, r): output link at IM(i) that is connected to CM(r)
LC(r, j): output link at CM(r) that is connected to OM(j)
VOMQ(i, j): virtual output-module queue at IM(i) that stores cells destined for OM(j)
RQ(i, j, r): request queue at IM(i) that stores requests of cells destined for OM(j) that traverse CM(r); the queue also keeps a waiting time W(i, j, r), which is the number of time slots a HoL request waits for dispatching

II. MSM CLOS SWITCHING FABRIC UNDER THE LB-MOMD SCHEME

The MSM Clos switching fabric with all the elements required by the LB-MOMD scheme, namely buffers and arbiters, is shown in Fig. 1. To define the architecture, the same terminology as for the MSM Clos switching fabric under the MOMD scheme proposed in [3] is used (see Table I). Clos networks are well known and widely analyzed in the literature.
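As an illustration only (not the authors' code), the notation of Table I implies the following module, port, queue and link counts for a fabric with m CMs, k IMs/OMs and n ports per IM/OM; the dictionary layout and function name are hypothetical conveniences:

```python
# Bookkeeping sketch of the MSM Clos fabric notation from Table I.
# Illustrative only; counts follow directly from the index ranges.

def msm_structure(m, n, k):
    return {
        "IMs": k,              # input modules, each of dimension n x m
        "CMs": m,              # bufferless central modules, each k x k
        "OMs": k,              # output modules, each m x n
        "input_ports": n * k,  # IP(i, h)
        "output_ports": n * k, # OP(j, h)
        "VOMQs": k * k,        # VOMQ(i, j): one per (IM, OM) pair
        "RQs": k * k * m,      # RQ(i, j, r): m request queues per VOMQ
        "IM_CM_links": k * m,  # LI(i, r)
        "CM_OM_links": m * k,  # LC(r, j)
    }

# The C(8, 8, 8) switch evaluated later in Section IV:
s = msm_structure(8, 8, 8)
# 64 ports per side, 64 VOMQs and 512 request queues in total.
```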
The three-stage Clos-network architecture is denoted C(m, n, k), where the parameters m, n, and k entirely determine the structure of the network. In the MSM Clos switching fabric architecture ready to run the LB-MOMD scheme, the first stage consists of k IMs, each of dimension n × m. At each IM, the buffers are divided into k VOMQs to eliminate Head-Of-Line blocking. Each VOMQ is associated with m request queues RQ(i, j, r). The second stage consists of m bufferless CMs, each of dimension k × k. The third stage consists of k OMs of capacity m × n, where each OP(j, h) has an output buffer. Each output buffer can receive at most m cells from the m CMs, so a memory speedup is required here. The LB-MOMD scheme employs distributed arbiters in IMs and CMs. There are m output-link arbiters in each IM(i) and k arbiters, corresponding to OM(j), in each CM(r).

III. LB-MOMD PACKET-DISPATCHING SCHEME

When a cell arrives at IP(i, h), it is buffered in the relevant queue VOMQ(i, j) and gets a time stamp assigned. The time stamp is entered along with a request into RQ(i, j, r), where r is selected taking into account the number of requests waiting in the request queues. In the LB-MOMD scheme the shortest RQ(i, j, r) queue is selected for buffering the new request, and a round-robin routine is used to select among queues of the same size. The configuration of the connecting paths within IMs and CMs is determined by output-link arbiters in IMs and CMs. The matching between VOMQ(i, j) and the output link LI(i, r) is determined based on time stamps. The LB-MOMD algorithm works as follows:

Phase 1: Matching within IM
Step 1: Each non-empty RQ(i, j, r) sends a request to the unmatched output-link arbiter. The request includes the time stamp of the cell waiting at the head of the VOMQ(i, j) queue.
Step 2: Each output-link arbiter associated with LI(i, r) chooses one request by selecting the oldest time stamp.
If there is more than one request with the same time stamp, the request with the largest index j is chosen.
Step 3: Each output-link arbiter sends a grant to the selected RQ(i, j, r) and VOMQ(i, j).

Phase 2: Matching between IM and CM
Step 1: Each output-link arbiter associated with LI(i, r) sends the request related to the matched VOMQ to CM(r). Each arbiter located in CM(r) and associated with OM(j) chooses the request with the oldest time stamp. If there is more than one request with the same time stamp, the request with the largest index i is chosen. The arbiter sends a grant to LI(i, r) of IM(i).
Step 2: More iterations may be performed between unmatched VOMQs, LI(i, r) arbiters and CM arbiters to match all LC(r, j) output links with VOMQs. Rejected requests are not considered in the new iterations between IM and CM within the time slot.
Step 3: Each IM sends cells from the granted VOMQs in the next time slot. Up to m cells may be dispatched from a granted VOMQ.

The LB-MOMD and MOMD schemes differ in the VOMQ selection process. The former tries to find the shortest RQ(i, j, r) queue for buffering the new request. To do that, it

has to know which of the relevant queues is the shortest. A round-robin routine is then used to choose among queues of equal (shortest) length. The MOMD algorithm selects the RQ(i, j, r) queue by random selection of r.

IV. SIMULATION EXPERIMENTS

Two packet arrival models are considered in the simulation experiments: the Bernoulli arrival model and the bursty traffic model, where the average burst length is set to 16 cells. Several traffic distribution models (the most popular in this research area) have been considered, which determine the probability p_ij that a cell arriving at an input will be directed to a certain output. The considered traffic models are:

Uniform traffic. This is the most commonly used traffic profile. In uniformly distributed traffic, the probability p_ij that a packet from input i will be directed to output j is uniformly distributed over all outputs, that is:
  p_ij = p/N for all i, j.   (1)

Chang's traffic. This model is defined as:
  p_ij = 0 for i = j, and p_ij = p/(N-1) otherwise.   (2)

Transdiagonal traffic. In this traffic model some outputs have a higher probability of being selected, and the probability p_ij is calculated according to the following equation:
  p_ij = p/2 for i = j, and p_ij = p/(2(N-1)) for i ≠ j.   (3)

Bidiagonal traffic. This type of traffic is very similar to transdiagonal traffic, but packets are directed to one of two outputs, and the probability p_ij is calculated according to the following equation:
  p_ij = 2p/3 for i = j, p_ij = p/3 for j = (i+1) mod N, and p_ij = 0 otherwise.   (4)

The experiments have been carried out for an MSM Clos switching fabric of size C(8, 8, 8) and for a wide range of traffic loads per input port, from p = 0.05 to p = 1. The 95% confidence intervals, calculated using the Student's t-distribution for five simulation series, are at least one order of magnitude lower than the mean values of the simulation results, and therefore they are not shown in the figures.
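The four traffic distribution patterns above can be sketched as probability matrices. This is a reconstruction from Eqs. (1)-(4); in particular, the off-diagonal load p/(N-1) for Chang's traffic is inferred from the structure of the other patterns rather than stated explicitly in the text.

```python
# Sketch of the traffic distribution patterns of Eqs. (1)-(4).
# p is the input load; N is the number of ports; function name is ours.

def p_matrix(pattern, n, p):
    """Probability p_ij that a cell from input i goes to output j."""
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if pattern == "uniform":            # Eq. (1)
                m[i][j] = p / n
            elif pattern == "chang":            # Eq. (2)
                m[i][j] = 0.0 if i == j else p / (n - 1)
            elif pattern == "transdiagonal":    # Eq. (3)
                m[i][j] = p / 2 if i == j else p / (2 * (n - 1))
            elif pattern == "bidiagonal":       # Eq. (4)
                if i == j:
                    m[i][j] = 2 * p / 3
                elif j == (i + 1) % n:
                    m[i][j] = p / 3
    return m

# For every pattern each row sums to the input load p, e.g.:
rows = p_matrix("bidiagonal", 8, 1.0)
```

A quick sanity check is that each row sums to the offered load p, so all four patterns offer the same total load and differ only in how it is spread over the outputs.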
The simulation results were collected after a starting phase, which enables the stable state of the switching fabric to be reached. Three main performance measures have been evaluated: average cell delay in time slots, maximum size of the VOMQs, and throughput. It has been assumed that a switch can achieve 100% throughput under uniform or nonuniform traffic if the switch is stable, as defined in [9]. In general, a switch is stable for a particular arrival process if the expected length of the input queues does not grow without limit. The size of the buffers at the input and output sides of the switching fabric is not limited, so cells are not discarded; however, they encounter delay instead. Due to the unlimited size of the buffers, no flow control mechanism between the IMs and OMs (to avoid buffer overflows) was implemented. Selected results of the simulation experiments are shown in the charts (Figs. 2-11). Figs. 2, 3, 4, and 5 show the average cell delay, in time slots, obtained for the uniform, Chang's, bidiagonal, and transdiagonal traffic patterns under the MOMD and LB-MOMD schemes for 4 and 8 iterations and the Bernoulli arrival model. The results obtained under these algorithms for the uniform and nonuniform traffic distribution patterns with the bursty arrival model are shown in Figs. 6, 7, 8, and 9. Figs. 10 and 11 show, respectively, the average cell delay and the maximum VOMQ size for all investigated traffic distribution patterns and arrival models under the LB-MOMD scheme. Fig. 2 shows the average cell delay under the MOMD and LB-MOMD algorithms for uniformly distributed Bernoulli traffic with 4 and 8 iterations. It can be noticed that the LB-MOMD scheme produces better results than the MOMD scheme, especially for input load p > 0.5. For p = 1 the average cell delay is 5 times lower under the LB-MOMD scheme than under the MOMD scheme (with 8 iterations), being equal to 120 and 600 time slots, respectively.
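Returning to the queue-selection difference described in Section III, a minimal sketch of how LB-MOMD and MOMD might pick the request queue RQ(i, j, r) is given below. This is an illustration under our own assumptions (queue lengths as plain integers, an explicit round-robin pointer), not the paper's implementation.

```python
# Sketch of the RQ(i, j, r) selection step: LB-MOMD picks the shortest
# request queue over r, breaking ties round-robin; MOMD picks r at random.
import random

def lb_momd_select(rq_lengths, rr_pointer):
    """Return (chosen CM index r, updated round-robin pointer)."""
    m = len(rq_lengths)
    shortest = min(rq_lengths)
    # scan from the round-robin pointer so ties rotate among the CMs
    for step in range(m):
        r = (rr_pointer + step) % m
        if rq_lengths[r] == shortest:
            return r, (r + 1) % m

def momd_select(rq_lengths):
    """MOMD: uniformly random choice of r."""
    return random.randrange(len(rq_lengths))

# Example: queues towards CMs 0..3 hold 2, 1, 1, 3 requests; pointer at 2.
r, ptr = lb_momd_select([2, 1, 1, 3], rr_pointer=2)
# r = 2 (the shortest queue at or after the pointer), ptr advances to 3
```

The intuition behind the simulation results above is visible here: LB-MOMD spreads new requests over the least-loaded CM routes, while MOMD's random choice can pile requests onto an already long queue.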
Very similar results were obtained for Chang's traffic (Fig. 3). The LB-MOMD scheme can also serve more demanding traffic distribution patterns very effectively. For bidiagonal traffic the average cell delay is relatively high only for p = 1 and is comparable to the results obtained under the MOMD scheme. For p < 1 the LB-MOMD scheme produces better results than the MOMD scheme while using 4 iterations instead of the 8 iterations used by the MOMD scheme. Analogous trends can be observed for the transdiagonal traffic. In this case the average cell delay under the LB-MOMD scheme is lower than under the MOMD algorithm over the whole range of input load; in particular, for p = 1 the delay under the LB-MOMD scheme is 4 times lower than under the MOMD algorithm. The results obtained under the LB-MOMD scheme for uniformly and nonuniformly distributed bursty traffic are also acceptable (see Figs. 6, 7, 8, and 9). Interestingly, the results are worse than for the Bernoulli arrival model, but the throughput of the MSM Clos-network switch is not limited and is equal to 100% for all investigated traffic distribution patterns and the whole range of input traffic. The behavior of the LB-MOMD scheme is not satisfactory for the bidiagonal traffic and p = 1; the delay, equal to 5000 time slots, seems to be too high. Fig. 10 shows a comparison of the average cell delay for the uniform and nonuniform traffic distribution patterns and the two considered packet arrival models under the LB-MOMD scheme. It can be seen that the differences between the presented results are not very large. This means that the proposed packet dispatching scheme can efficiently support the uniform as well as the nonuniform traffic distribution patterns in MSM Clos-network switches. A comparison of the maximum VOMQ size for the investigated traffic distribution patterns and packet arrival models under the LB-MOMD algorithm is presented in Fig. 11. The charts in Fig. 11 are similar in shape to the charts shown in Fig. 10. Only the bidiagonal traffic distribution pattern for p = 1 requires the VOMQ buffers to be bigger than 1000 cells.

Fig. 2. Average cell delay, uniform traffic, Bernoulli model.
Fig. 3. Average cell delay, Chang's traffic, Bernoulli model.
Fig. 4. Average cell delay, bidiagonal traffic, Bernoulli model.
Fig. 5. Average cell delay, transdiagonal traffic, Bernoulli model.
Fig. 6. Average cell delay, uniform traffic, bursty model.
Fig. 7. Average cell delay, Chang's traffic, bursty model.
Fig. 8. Average cell delay, bidiagonal traffic, bursty model.

V. CONCLUSIONS

A new packet-dispatching scheme, called LB-MOMD, for MSM Clos-network switches is presented in this paper. The algorithm is an improved version of the MOMD scheme and provides better performance in terms of cell delay, input buffer size, and throughput under different traffic distribution patterns, for both the Bernoulli and the bursty packet arrival model. Simulation experiments have shown that the proposed scheme can support uniform as well as nonuniform traffic, providing 100% throughput. This is a highly desirable property of a packet dispatching algorithm for the switching fabric of a Future Internet packet node.

Fig. 9. Average cell delay, transdiagonal traffic, bursty model.
Fig. 10. Average cell delay under the LB-MOMD scheme.
Fig. 11. The maximum VOMQ size under the LB-MOMD scheme.

ACKNOWLEDGMENT

The work described in this paper was financed from research funding as research grant UMO /01/B/ST7/.

REFERENCES
[1] C. Clos, "A study of non-blocking switching networks," Bell Sys. Tech. Jour., 1953.
[2] H. J. Chao and B. Liu, High Performance Switches and Routers, Wiley Interscience, New Jersey.
[3] E. Oki, H. J. Chao, and R. Rojas-Cessa, "Maximum and maximal weight matching dispatching schemes for MSM Clos-network packet switches," IEICE Transactions on Communications, vol. E93-B, no. 2, 2010.
[4] E. Oki, Z. Jing, R. Rojas-Cessa, and H. J. Chao, "Concurrent round-robin-based dispatching schemes for Clos-network switches," IEEE/ACM Trans. on Networking, vol. 10, no. 6, 2002.
[5] K. Pun and M. Hamdi, "Dispatching schemes for Clos-network switches," Computer Networks, no. 44.
[6] J. Kleban and A. Wieczorek, "CRRD-OG: a packet dispatching algorithm with open grants for three-stage buffered Clos-network switches," in Proc. High Performance Switching and Routing HPSR 2006.
[7] J. Kleban and H. Santos, "Packet dispatching algorithms with the static connection patterns scheme for three-stage buffered Clos-network switches," in Proc. IEEE International Conference on Communications ICC-2007, Glasgow, Scotland, June 2007.
[8] J. Kleban, M. Sobieraj, and S. Węclewski, "The modified MSM Clos switching fabric with efficient packet dispatching scheme," in Proc. IEEE High Performance Switching and Routing HPSR 2007, New York, 30 May - 1 June 2007.
[9] N. McKeown, A. Mekkittikul, V. Anantharam, and J. Walrand, "Achieving 100% throughput in an input-queued switch," IEEE Trans. Commun., Aug.


Quality Management in 4G Wireless Networking Technology

Małgorzata Langer
Institute of Electronics, Łódź University of Technology, Łódź, Poland

Abstract: Although multimedia communication is possible with 3G systems, scalability and cost have so far been problems preventing wide deployment. 4G networks are all-IP-based heterogeneous networks that allow users to use any system at any time and anywhere, and support a variety of personalized multimedia applications such as multimedia conferencing, video phones, video/movie-on-demand, education-on-demand, streaming media, multimedia messaging, etc. Personalized services should be supported by a personal mobility capability, which concentrates on the movement of users instead of users' terminals. As the technology matures, traffic congestion increases, and competitive pressures mount, QoS and policy management will become more and more important. In preparation, operators must make sure they are working with vendors that have a strong framework to supply end-to-end QoS and are capable of supporting evolving needs. The paper discusses qualitative and quantitative indicators of telecommunication services.

Keywords: 4G, LTE, QoS, SAE, bearer, OSI, logical and transport channels, MAC, physical layer, scheduling, policy management

I. INTRODUCTION

With the introduction of the first generation (1G) mobile communication systems in the 1980s, ordinary people were able to communicate with others by voice while on the move. Then, with the second generation (2G) mobile communication systems (such as CDMA, Code Division Multiple Access, and GSM, Global System for Mobile Communications) in the early 1990s, people were able to communicate not only with voice but also with text messaging.
With the introduction of the third generation (3G) mobile communication systems (e.g., WCDMA, Wide-band Code Division Multiple Access, also known as UMTS) in the late 1990s, people were able to use more data-centric services and applications. 3G is a wireless technology which wins over its predecessors because it offers high-speed transmission, advanced multimedia access and global roaming. This technology allows connecting the phone to the Internet, or other IP networks, to make a voice or video call or to download and upload data. Although multimedia communication is possible with 3G systems, scalability and cost have so far been problems preventing wide deployment. 4G networks support a variety of personalized multimedia applications such as multimedia conferencing, video phones, video/movie-on-demand, education-on-demand, streaming media, multimedia messaging, etc. Personalized services should be supported by a personal mobility capability, which concentrates on the movement of users instead of users' terminals (one will start watching a film on the 3-inch screen of his/her mobile, continue from the exact point on a 15-inch laptop screen, and then move to the home TV set screen, for example a 50-inch one, to see the final scenes), and which involves the provision of personal communications and personalized operating environments. So the evolution of 3G into an evolved radio access, referred to as the Long-Term Evolution (LTE), and an evolved packet core network in the System Architecture Evolution (SAE) has become obvious. After many discussions, in December 2010 the main standardization body in telecommunications, the ITU, established that not only WiMAX-2 and LTE-Advanced (technologies of the future), but also WiMAX and LTE (already implemented) are networking technologies worthy of the 4G name [1].
In the 1980s, five functional areas of network management were identified, namely Fault management, Configuration management, Accounting management, Performance management, and Security management (FCAPS). These five functional areas were sufficient to cover most, if not all, of the issues related to the operations and management of wired networks, including the Internet and enterprise networks. With the introduction of wireless and mobile networks, several additional areas, which could not be easily covered by FCAPS, had to be added: Mobility management, Customer management, and Terminal management. Quality issues are no longer described as best effort only (though low-data-rate Internet traffic is still best effort). Supporting multimedia applications with different quality of service (QoS) requirements in the presence of diversified wireless access technologies (e.g., 3G cellular, IEEE WLAN, Bluetooth) is one of the most challenging issues for fourth-generation (4G) wireless networks. In such a network, depending on the bandwidth, mobility, and application requirements, users should be able to switch among the different access technologies in a seamless manner.

Efficient radio resource management and CAC (call admission control) strategies are key components of such a heterogeneous wireless system supporting multiple types of applications with different QoS requirements. 4G is a packet-based network. Since it should carry voice as well as Internet traffic, it should be able to provide different levels of QoS. Other network-level issues include mobility management, congestion control, and QoS guarantees (4G systems are expected to provide real-time services, so e.g. a pre-computed delay bound is required for the service).

II. MIGRATION OF TECHNOLOGY: DIFFERENCES BETWEEN 3G AND 4G

Up to the 3G technology (UMTS), transmission has been (and is) realized in the time domain. The main difference in 4G technology is that transmission takes place in the frequency domain. Orthogonal Frequency Division Multiplexing (OFDM) is a combination of TDMA (Time Division Multiple Access) and FDMA (Frequency Division Multiple Access) and has been known since the 1950s-60s from military applications. Analog processing made it extremely expensive. In the 1980s its complexity was reduced thanks to digital signal processing. It was already a technology proposal for UMTS, but only recently have some challenges for mobile communication, particularly uplink synchronization, been solved. OFDM is becoming today's dominant communication technology, and WiMAX, LTE and Flash-OFDMA are using it now. The main benefits are:
- high spectral efficiency with a simple receiver design;
- bandwidth divided into many narrow tones, in theory fully orthogonal to each other;
- high-rate data distributed onto many low-rate channels;
- flexible bandwidth allocation.
Fig. 1 shows an example of resource allocation in TDMA, FDMA and two types of OFDMA: distributed OFDMA (as in WiMAX) and localized OFDMA (as in LTE). It helps in better understanding the basics of the technology. A deep technical description of these schemes is not the task of this paper, and such considerations would be beyond its scope.
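As a side illustration of the orthogonality claim above (not taken from the paper), the following sketch checks numerically that complex exponentials at integer subcarrier spacings are orthogonal over one OFDM symbol; the symbol length T is an arbitrary example value:

```python
# Numerical check that OFDM tones at integer subcarrier spacings have
# zero cross-correlation over one symbol, so many narrow low-rate
# channels can share the band without interfering.
import cmath

T = 64  # samples per OFDM symbol (illustrative value)

def tone(k):
    """Subcarrier k: one symbol period of exp(j*2*pi*k*t/T)."""
    return [cmath.exp(2j * cmath.pi * k * t / T) for t in range(T)]

def correlate(a, b):
    """Normalized cross-correlation of two sampled tones."""
    return sum(x * y.conjugate() for x, y in zip(a, b)) / T

same = correlate(tone(3), tone(3))   # unit power: correlation 1
cross = correlate(tone(3), tone(7))  # distinct tones: correlation ~0
```

The near-zero cross term is exactly the property OFDM receivers exploit: each tone can be demodulated independently despite the tones overlapping in frequency.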
III. KEY SERVICES AND QOS

The present and future mobile-communication systems differ, and will differ, with time and with country. They need to be adaptive to the changing service environment. The upper limit of the data-rate demand and the lower limit of the delay requirement are difficult to provide in a cost-efficient manner. The services (at least from the user's standpoint) should be provided with the highest data rate, the lowest delay and the lowest jitter that the system can provide. This is unattainable in practice and contradictory to the operator's goal of an efficient system: the more delay a service can handle, the more efficient the system can be. There are several key services that span the technology space:

- Voice: The end-to-end delay requirement for circuit-switched voice is approximately 400 ms, which does not disturb humans in voice communication. The quality, sufficient today since the end-to-end delay is not noticeable in the older implemented technologies, slows the development of the voice service in 4G. Voice packets (in 4G) require small amounts of data, sent frequently, with no delay jitter. The problem is that IMS (IP Multimedia Subsystem) has not been developed as fast as expected, and voice applications still stay in the circuit-switched domain.

Fig. 1. Multiple access; resource allocation: a) pure TDMA; b) pure FDMA; c) distributed OFDMA; d) localized OFDMA

- Real-time applications (mainly games): Experts say [2] that players look for game servers with a ping time of less than 50 ms. These applications thus require small amounts of data (such as game-update information), but with a low delay, a limited delay jitter and relatively high frequency.
- Interactive file applications (download and upload): They require high data rates and low delays.
- Background file download and upload: The example is e-mail. This service accepts lower bit rates and longer delays.
- Television: This is streaming in the downlink to many users at the same time, requiring moderate data rates (higher with HD) and very low delay jitter, though delays may be tolerated (but approximately the same delay for all users in the neighborhood).

Policy management allows operators to control in a granular way the availability and QoE of different services. First, policies are used to dynamically allocate network resources; for example, a particular bandwidth can be reserved in the radio base station and core network to support a live video conversation. Next, policy rules control the priority, packet delay, and acceptable loss of video packets in order for the network to treat the video call in a particular manner. The quality of service is characterized by the combined aspects of service support performance, service operability performance, serveability performance, service security performance and other factors specific to each service.

TL 9000 [11] is the first unified set of quality-system requirements and metrics designed specifically for the telecommunications industry. TL 9000 encompasses the ISO 9001 standard plus additional industry-specific telecom requirements, and covers industry performance-based measurements including reliability, delivery and service quality. The TL 9000 management system is applied by telecom manufacturers and suppliers engaged in the design, development, production, delivery, installation and maintenance of telecommunications products and services.

Table I presents QoS indicators gathered on the basis of the ETSI Guide on the Quality of Telecom Services [6]. The introduced set of parameters gives a review of the technical, economic and functional issues that have to be considered and the indicators that should be measured and tested objectively. Some technical measurements may not be directly perceptible by customers, or may differ from a particular user's perception, but the indicator value or the feature still affects the quality and assurance of the service. The indicators presented in the ETSI Guide can be divided into two parts. The first is related to technical and functional aspects, and the second to other aspects such as sales, repair, provision, charging, billing, upgrade, complaint management, and commercial and technical support.
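The per-service requirements discussed in Section III can be summarized as a small lookup table, sketched below. The 400 ms voice bound and the 50 ms gaming ping come from the text above; the remaining entries and the helper function are illustrative assumptions, not part of any standard.

```python
# service: (end-to-end delay budget in ms or None, jitter-sensitive?, rate demand)
SERVICE_REQUIREMENTS = {
    "voice":            (400,  True,  "low"),       # ~400 ms tolerable end to end
    "real_time_gaming": (50,   True,  "low"),       # players want ping < 50 ms
    "interactive_file": (None, False, "high"),      # low delay desired, no hard bound
    "background_email": (None, False, "low"),       # tolerates long delays
    "television":       (None, True,  "moderate"),  # jitter matters more than delay
}

def meets_delay_budget(service, measured_delay_ms):
    """Return True if a measured delay satisfies the service's budget, if it has one."""
    budget = SERVICE_REQUIREMENTS[service][0]
    return budget is None or measured_delay_ms <= budget

print(meets_delay_budget("voice", 380))            # within the ~400 ms voice budget
print(meets_delay_budget("real_time_gaming", 80))  # exceeds the 50 ms gaming target
```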
Nowadays, there are several standards describing QoS measurements, for example [7, 8, 11]. Measurements of the parameters can be made using different methods: technical measurements performed by an independent organization or by the supplier, a survey performed among users, or a combination of users' opinions and technical measurements. Some results are obtained in terms of degrees of satisfaction and not in technical terms. For example, the condition of seamless mobility is maintained as long as the user does not notice that a handover has happened.

TABLE I. SELECTED INDICATORS AND ASSESSED PARAMETERS OF TELECOMMUNICATION SERVICES (Service / Indicator / Parameters)

Audio broadcast:
- Availability: Rate of server accessibility, Listening break-up ratio, Listening break-up frequency, Successful one-minute listening ratio.
- Fidelity/accuracy: Audio quality.
- Speed: Access time, Starting delay.
- Capability: Throughput achieved.
- Reliability: Rate of overall technical reliability, Level of customer complaints.
- Flexibility: Provider capacity to adjust to the user connection and equipment features.
- Usability: User friendliness of the interface.
- Security: Protection against user identity theft, Protection against intrusion and breach of the customer's privacy.

Directory enquiry services:
- Availability: Rate of accessibility to the service, Outage frequency, Served call rate.
- Fidelity/accuracy: Rate of correctness in answering the customer questions.
- Speed: Response time for directory enquiry services, Reply time.
- Capability: Adequacy of the number of operators to the number of calls.
- Reliability: Rate of overall technical reliability, Level of customer complaints.
- Flexibility: Range of available means to access the service.
- Usability: User friendliness of the interface, Ability of the operator to cope with the caller's language.
- Security: Compliance with the customer security specifications as given in the contract, in particular protection of the customer's private data or data related to the person concerned by the enquiry.

E-mail:
- Availability: Rate of SMTP failures, Rate of POP3 failures, Outage rate, Outage frequency, Rate of message loss.
- Fidelity/accuracy: Rate of undue deletions of e-mail by the security mechanisms.
- Speed: Average time to check an empty mailbox, Average delivery time.
- Capability: Server throughput, Speed of upload to mail server, Speed of download from mail server.
- Reliability: Rate of overall technical reliability, Level of customer complaints.
- Flexibility: Ease of changing the contractual specifications.
- Usability: User friendliness of the interface.
- Security: Protection against intrusion, spam and any kind of virus.

Fax:
- Availability: Availability of connection.
- Fidelity/accuracy: Transmission fidelity test, Ratio of sent and received faxes.
- Reliability: Rate of overall technical reliability, Level of customer complaints.
- Security: Protection against identity theft and content violation.

Internet access:
- Availability (Internet access): Successful login ratio, Outage rate, Outage frequency, Rate of successful authentication, Rate of successful access to generic name translation.
- Availability (web browsing): Outage rate to a set of designated sites, Frequency of untimely break-up, Rate of accessibility to the input ports, Rate of accessibility to the output ports.
- Availability (web page hosting): Rate of accessibility to the allocated space.
- Fidelity/accuracy (web browsing): Error rate in data transmissions.
- Speed (Internet access): Delay, Radio channel access delay, Authentication time, Generic domain name translation time.
- Speed (web browsing): Web response time.
- Speed (web page hosting): Time to upload a test web page.
- Capability (Internet access): Throughput achieved, Throughput of dial-up access to the Internet.
- Capability (web browsing): Occupation rate of ISP links, Occupation rate of ISP input ports.
- Reliability: Rate of overall technical reliability, Level of customer complaints, Fault report rate per fixed access lines.
- Flexibility: Ease of changing the contractual specifications.
- Usability: User friendliness of the interface, Adaptability to make use easier for people with disabilities.
- Security: Protection against user identity theft, intrusion and breach of the customer's privacy.

Multimedia Message Service (MMS):
- Availability: Successful MMS ratio.
- Fidelity/accuracy: Completion rate for MMS.
- Speed: End-to-end delivery time.
- Reliability: Rate of overall technical reliability, Level of customer complaints.
- Flexibility: Ease of changing the contractual specification, Range of available means to send and receive MMS.
- Usability: User friendliness of the interface.
- Security: Protection against intrusion, spam and any kind of virus.

Operator services:
- Availability: Rate of accessibility to the service, Outage frequency, Served call rate.
- Fidelity/accuracy: Rate of correctness in fulfilling the customer request.
- Speed: Response time for operator services, Call setup time.
- Capability: Adequacy of the number of operators to the number of calls.
- Reliability: Rate of overall technical reliability, Level of customer complaints.
- Flexibility: Range of available means to access the service.
- Usability: User friendliness of the interface, Ability of the operator to cope with the caller's language.
- Security: Protection of the customer's private data.

Short Message Service (SMS):
- Availability: Successful SMS ratio.
- Fidelity/accuracy: Completion rate for SMS.
- Speed: End-to-end delivery time for SMS.
- Reliability: Rate of overall technical reliability, Level of customer complaints.
- Flexibility: Range of available means to send and receive SMS.
- Usability: User friendliness of the interface.
- Security: Protection against intrusion, spam and any kind of virus.

Telephony:
- Availability: Unsuccessful call ratio, Dropped call ratio, Retainability rate, Outage rate.
- Fidelity/accuracy: Predictable speech connection quality, Observed voice quality, Signaling failure rate.
- Speed: Call setup time.
- Capability: Availability of the voice and signaling channels.
- Reliability: Rate of overall technical reliability, Level of customer complaints, Fault report rate per fixed access lines.
- Flexibility: Ease of changing the contractual specifications.
- Usability: User friendliness of the interface, Adaptability to make use easier for people with disabilities.
- Security: Protection against user identity theft and fraud.

Video broadcast:
- Availability: Rate of server accessibility, Display break-up ratio, Display break-up frequency, Successful one-minute watching ratio.
- Fidelity/accuracy: Audio quality, Video quality.
- Speed: Access time, Starting time.
- Capability: Throughput achieved.
- Reliability: Rate of overall technical reliability, Level of customer complaints.
- Flexibility: Ease of changing the contractual specifications.
- Usability: User friendliness of the interface.
- Security: Protection against user identity theft.

Voice mail:
- Availability: Rate of successful access to the recording server, Rate of successful access to the message listening server, Outage rate of the message listening server, Outage frequency of the message recording server, Rate of message loss.
- Fidelity/accuracy: Rate of message spoiling, Failure of the notification to the voice mailbox owner.
- Speed: Response time of the voice guide after the reply time-out, Message recording server response time, Time to receive the notification of a message recorded in the voice mailbox, Message listening server response time.
- Reliability: Rate of overall technical reliability, Level of customer complaints.
- Flexibility: Ease of changing the contractual specifications, Range of available means to record and receive messages.
- Usability: User friendliness of the interface.
- Security: Protection against fraudulent message listening and change of the welcome recorded message.

Table II covers the standardized QCIs (QoS Class Identifiers) for LTE [9], where GBR stands for Guaranteed Bit Rate.

Guaranteed bit rate values are not part of the QCI characteristics, since such traffic-handling attributes cannot be preconfigured for a QoS class; they must instead be dynamically signaled for the service. A QCI is simply a "pointer" to a TFP (Traffic Forwarding Policy) and can be associated with a TFP defined within each user-plane edge node. Within a specific node, multiple QCIs may be associated with the same TFP. In technologies up to 4G, either a best-effort policy prevails or, as in 3G (e.g. UMTS), four traffic classes with defined QoS attributes can be distinguished: Conversational, Streaming, Interactive and Background.

TABLE II. STANDARDIZED QCIS FOR LTE (each QCI additionally defines a priority, a packet delay budget in ms, and a packet error loss rate)

QCI 1: GBR; Conversational voice
QCI 2: GBR; Conversational video (live streaming)
QCI 3: GBR; Non-conversational video (buffered streaming)
QCI 4: GBR; Real-time gaming
QCI 5: Non-GBR; IMS signaling
QCI 6: Non-GBR; Voice, video (live streaming), interactive gaming
QCI 7: Non-GBR; Video (buffered streaming)
QCI 8: Non-GBR; TCP-based services (e.g. WWW, e-mail), chat, FTP, p2p file sharing, progressive video, etc.
QCI 9: Non-GBR

Performance-enhancing features can improve the perceived quality of service (from the end-user's point of view) or the system performance (from the operator's point of view). LTE and its evolution can yield better data rates and shorter delays, so they can greatly improve both the service experience (for an end user) and the system capacity (for an operator).

IV. NETWORK ARCHITECTURE

Technologies up to 3G, i.e. the majority of cellular networks implemented today, are based on circuit switching and consist of base stations, base station controllers, switching centers, gateways, and so on. The base station (BS) plays the role of physical transmission with fast power control and wireless scheduling. The base station controller (BSC) performs the largest part of the radio resource management. Whenever a mobile terminal (MT) moves into another cell, it requires a handoff to another base station. In a 4G network, the base station must function intelligently to perform radio resource management as well as physical transmission, acting more as a smart access router.

Fig. 2 introduces the idea of coexisting technologies. LTE is connected only to the packet core, while the circuit core will no longer be developed. The 3GPP packet core can connect to non-3GPP technologies. IMS is (or should be) an integral part of the evolved packet core.

Fig. 2. Idea of coexisting technologies

Fig. 3 shows the overall LTE architecture, where one can see the clear separation and defined interfaces between different domains. In such a layout, any evolution is independent of access, core, transport and services.

Fig. 3. LTE architecture: separated domains

The main principle of SAE (System Architecture Evolution) is that the control and user planes are separated from each other (Fig. 4). For now this is only a logical architecture, though it assures flexible deployment for various scenarios.

Fig. 4. Separation of control and user planes

The separation of control and user planes gives a highly optimized implementation and scalability (number of users versus data traffic/services). The control plane covers the MME, which is a powerful server placed at a secure location. It
contains the NAS (Non-Access Stratum) protocol and covers such functions as security control, managing the subscriber profile and service connectivity, and packet-core bearer control. The other server, the PCRF, is a part of the operator's switching centre and deals with policy and charging control, i.e. with decisions on how to handle QoS in the network. The bearer uniquely identifies packet flows that receive a common QoS treatment between the terminal and the gateway. Independent of the service type, a bearer is defined through the network to which it connects the UE (referred to as the Access Point Name in 3GPP) and the QoS Class Identifier (QCI). The bearer is the basic enabler for traffic separation; that is, it provides differential treatment for traffic with differing QoS requirements. One UE can thus have several bearers at the same time, e.g. if a call is in progress while files are being transferred. The concept of the bearer and the associated signaling procedures further enable the system to reserve system resources before admitting into the system the packet flows that are mapped to that bearer. Data to be transmitted enters the processing chain in the form of IP packets on one of the SAE bearers. Prior to transmission over the radio interface, incoming packets are passed through multiple protocol entities. With regard to quality and quality management, the most important protocol entities are:

PDCP (Packet Data Convergence Protocol): there is one PDCP entity per SAE bearer; it is responsible for ciphering and integrity protection of the transmitted data.

RLC (Radio Link Control): located in the eNodeB, it offers services to the PDCP in the form of radio bearers (RBs); there is one RLC entity per radio bearer configured for a terminal. It is responsible for segmentation/concatenation, retransmission handling and in-sequence delivery to higher layers.

MAC (Medium Access Control): handles scheduling (both uplink and downlink). The scheduling functionality is located in the eNodeB, which has one MAC entity per cell.
PHY (Physical Layer): offers services to the MAC layer in the form of transport channels. The PHY handles coding/decoding, modulation/demodulation, multi-antenna mapping and other typical physical-layer functions.

A. Logical Channels and Transport Channels

The set of logical channels (the MAC offers services to the RLC in the form of logical channels) comprises control channels, used for transmitting the control and configuration information necessary for LTE, and a traffic channel used for user data (the Dedicated Traffic Channel, DTCH). The MAC uses services in the form of transport channels; a transport channel is defined by how and with what parameters the information is transmitted over the radio interface. Data in a transport channel is organized into transport blocks. Each transport block is related to a Transport Format (TF). The TF includes information about size, modulation scheme, etc.; by varying the transport format, the MAC layer can also control different data rates. The DL-SCH (Downlink Shared Channel) is the main transport channel used to transmit downlink data in LTE. It supports dynamic rate adaptation, channel-dependent scheduling in the time and frequency domains, and hybrid ARQ (automatic repeat request) with soft combining, and it controls mobile-terminal power consumption through DRX (discontinuous reception). Similar features apply to the UL-SCH (Uplink Shared Channel).

B. Scheduling

The main principle of the LTE radio access is shared transmission, meaning that the time-frequency resources are dynamically shared between users. The dynamic scheduler is a part of the MAC layer. It controls the assignment of uplink and downlink resources. The scheduler takes advantage of channel variations and schedules transmissions to a mobile terminal on resources with better channel conditions. The decision is taken per mobile terminal, however, and not per radio bearer.
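The channel-dependent scheduling principle described above can be sketched as a toy model: for each time-frequency resource block, pick the terminal reporting the best channel conditions. Terminal names, resource-block indices and channel-quality values below are invented for illustration; a real eNodeB scheduler is far more elaborate and vendor-specific.

```python
def schedule(resource_blocks, cqi_reports):
    """Assign each resource block to the terminal with the best reported quality.

    cqi_reports maps terminal -> {resource block: channel quality indicator}.
    The decision is per terminal; multiplexing of the terminal's own radio
    bearers onto the grant is left to the terminal, as in LTE uplink.
    """
    assignment = {}
    for rb in resource_blocks:
        assignment[rb] = max(cqi_reports, key=lambda t: cqi_reports[t].get(rb, 0))
    return assignment

# MT1 sees good conditions on block 0, MT2 on block 1 (made-up values).
reports = {"MT1": {0: 9, 1: 3}, "MT2": {0: 4, 1: 8}}
print(schedule([0, 1], reports))  # {0: 'MT1', 1: 'MT2'}
```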
Consequently, the terminal alone handles logical-channel multiplexing and is responsible for choosing which RB data is taken from. Each RB is assigned a priority and a prioritized data rate. Remaining resources, if any, are given to the radio bearers in priority order. For the downlink, the terminal transmits channel-status reports, and for the uplink a sounding reference signal; these form the basis for the scheduling decision at the base station. The scheduler also controls the inter-cell interference. The scheduling strategy is vendor-specific and may vary for different cases.

V. POLICY MANAGEMENT

In terms of global connectivity, LTE solutions can deliver a data rate of at least 100 Mbps between any two points in the world, with smooth handoff across heterogeneous networks, resulting in seamless connectivity and global roaming across multiple networks. They provide a high quality of service for next-generation multimedia, including real-time audio, high-speed data, Internet protocol television (IPTV), video content and mobile TV. On top of that, carriers must provide interoperability with existing wireless standards, and an all-IP, packet-switched network that can bridge the great differences in latency between networks. Quality of service provision in 4G networks presents several challenges, including: (a) the specification of QoS requirements, (b) the translation of QoS parameters among heterogeneous access networks, (c) the renegotiation of QoS, and (d) the management of QoS requirements within roaming agreements and mobile users' profiles. Traditionally, the handover process has been considered among wireless networks using the same access technology (e.g., among cells of a cellular network). This kind of handover process is the horizontal handover (HHO).
The new handover process among networks using various technologies is the vertical handover (VHO). The vertical handover in 4G networks and WiMAX networks has been developed to provide QoS routing, specification and management of QoS contracts, and handover control through the establishment of transparent procedures. Policy management plays a fundamental role in implementing QoS in mobile broadband. It is the process of applying operator-defined rules for resource allocation and network usage. Dynamic policy management sets rules for allocating network resources and includes policy-enforcement processes. Policy enforcement involves service-data-flow detection and applies QoS rules to individual service data flows. Manufacturers, vendors and mobile operators do not have unlimited resources and capital; the radio spectrum is finite. Three areas relate to policy management:

- Limiting network congestion
- Monetizing services
- Enhancing service quality

Providing high service quality by over-provisioning network capacity would leave an operator at a competitive disadvantage to providers that offer the same or better quality of service at a lower cost. Policy management starts with differentiating services (applications) and subscriber types, and controlling the QoE (quality of experience) of each type.

VI. CONCLUSIONS

The world is at the beginning of an era marked by tremendous growth in mobile data subscribers and mobile data traffic. Infonetics Research [12] forecasts that mobile data subscribers will grow from million in 2010 to 1.8 billion in

Today's mobile broadband networks carry multiple services that share access (radio) and core network resources. Each service has different QoS requirements in terms of packet-delay tolerance, acceptable packet-loss rates, and required minimum bit rates. Additionally, one should consider two perspectives on performance: the end user's and the operator's. Given that system resources are limited, there is thus a trade-off between the number of active users and the perceived quality of service in terms of user throughput. The 4G mobile systems focus on seamlessly integrating the existing wireless technologies, including GSM, wireless LAN and Bluetooth. The 4G networks are all-IP-based heterogeneous networks that allow users to use any system at any time and anywhere.
Users carrying an integrated terminal can use a wide range of applications provided by multiple wireless networks. The 4G systems provide not only telecommunications services, but also data and multimedia services. As the technology matures, traffic congestion increases, and competitive pressures mount, QoS and policy management will become more and more important. In preparation, operators must make sure they are working with vendors that have a strong framework to supply end-to-end QoS and are capable of supporting evolving needs. A bearer has two or four QoS parameters, depending on whether it carries a real-time or a best-effort service:

- QoS Class Identifier (QCI)
- Allocation and Retention Priority (ARP)
- Guaranteed Bit Rate (GBR), real-time services only
- Maximum Bit Rate (MBR), real-time services only

There are two major types of bearers: guaranteed bit rate (GBR) and non-guaranteed bit rate (non-GBR). GBR bearers are used for real-time services, such as conversational voice and video. A GBR bearer has a minimum amount of bandwidth that is reserved by the network, and always consumes resources in a radio base station regardless of whether it is used or not. If implemented properly, GBR bearers should not experience packet loss on the radio link or the IP network due to congestion. GBR bearers are also defined with the lower latency and jitter tolerances typically required by real-time services. Non-GBR bearers, by contrast, do not have a specific network bandwidth allocation. They are for best-effort services, such as file downloads, e-mail and Internet browsing. These bearers will experience packet loss when a network is congested. A maximum bit rate for non-GBR bearers is not specified on a per-bearer basis; however, an aggregate maximum bit rate (AMBR) is specified on a per-subscriber basis for all non-GBR bearers.
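The bearer attributes and the GBR/non-GBR distinction described above can be sketched as a minimal admission check: a GBR bearer reserves capacity at the radio base station even when idle, a non-GBR bearer does not. The class layout and the capacity figures are illustrative assumptions, not a real eNodeB interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bearer:
    qci: int                      # QoS Class Identifier (a pointer to a TFP)
    arp: int                      # Allocation and Retention Priority
    gbr: Optional[float] = None   # Guaranteed Bit Rate in Mbps, real-time only
    mbr: Optional[float] = None   # Maximum Bit Rate in Mbps, real-time only

class BaseStation:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def admit(self, bearer):
        """GBR bearers consume reserved capacity even if unused; non-GBR do not."""
        if bearer.gbr is None:                     # best effort: always admitted
            return True
        if self.reserved + bearer.gbr > self.capacity:
            return False                           # no headroom for the guarantee
        self.reserved += bearer.gbr
        return True

bs = BaseStation(capacity_mbps=10.0)
voice = Bearer(qci=1, arp=1, gbr=0.1, mbr=0.1)  # QCI 1: conversational voice
web = Bearer(qci=8, arp=9)                      # QCI 8: TCP-based best effort
print(bs.admit(voice), bs.admit(web))           # True True
```

In a real system the ARP would additionally decide which existing bearers may be pre-empted when the check fails; that step is omitted here.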
Data applications are typically best-effort services, characterized by variable bit rates, and tolerate some loss and latency before the user perceives poor quality. The standards and recommendations provide mechanisms to drop or downgrade lower-priority bearers in situations where the network becomes congested. The eNodeB is the radio base station in LTE and plays a critical role in end-to-end QoS and policy enforcement. The eNodeB performs uplink and downlink rate policing, as well as RF radio resource scheduling. It uses the ARP when allocating bearer resources. The effectiveness of the radio resource scheduling algorithms in eNodeBs has a tremendous impact on service quality and overall network performance. Quality of experience (QoE) is a measure of the overall level of customer satisfaction with a service. Quantitatively measuring QoE requires an understanding of the key performance indicators (KPIs) that impact users' perception of quality. KPIs are unique to each service type: conversational video, voice, Internet browsing and other service types each have unique performance indicators that must be independently measured. The evolved LTE architecture is able to provide QoS per user and per service, implementing the notion of a user profile associated with control-element functions. An integrated approach to service and network management in the case of heterogeneous and mobile network access is a key to quality management. LTE employs intelligent scheduling methods to optimize performance from both the end-user and the operator standpoints.

REFERENCES

[1] Eric M. Zeman: ITU Changes Tune, LTE and WiMax Can Be Called 4G, Phone Scoop ( posited
[2] Erik Dahlman et al.: 3G Evolution. HSPA and LTE for Mobile Broadband, Elsevier
[3] ITU-T X.200 series of recommendations
[4] Gilbert Held: Network Management. Techniques, Tools and Systems, John Wiley & Sons Ltd.
[5] ITU-T Recommendation series M.3000, as in 2011

[6] ETSI EG Quality of Telecom Services; User Group; V1.2.1 ( )
[7] Quality of Service Standards and Guidelines for the Telecommunications Sector, Consultative Doc; Office of Utilities Regulation; October 2010
[8] Neal Seitz: ITU-T QoS Standards for IP-Based Networks, IEEE Communications Magazine, June 2003
[9] S. Sesia et al.: LTE, The UMTS Long Term Evolution: From Theory to Practice, 2009
[10] Reiner Ludwig, Hannes Ekstrom et al.: An Evolved 3GPP QoS Concept, journal conference papers, Ericsson Research & Development, 2009
[11] TL 9000 Release 5.0 (released June 22, 2009)
[12] IXIA Rev A: Quality of Service (QoS) and Policy Management in Mobile Data Networks, August 2010

Effective-availability method for blocking probability calculation in switching networks with overflow links and point-to-point selection

Michał Dominik Stasiak and Mariusz Głąbowski
Chair of Communications and Computer Networks, Poznan University of Technology, ul. Polanka 3, room 110, Poznań, Poland
Chair of Communications and Computer Networks, Poznan University of Technology, ul. Polanka 3, room 230, Poznań, Poland
michal.m.stasiak@doctorate.put.poznan.pl, mariusz.glabowski@put.poznan.pl

Abstract: The present paper proposes an analytical method for blocking probability calculation in multi-service switching networks with overflow links. The method belongs to the group of effective-availability methods. The paper gives special attention to the determination of the effective-availability parameter in switching networks with overflow links. The results of the analytical calculations of blocking probability are then compared with the results of the simulations for a selected multi-service switching network with overflow links and point-to-point selection.

Index Terms: switching networks, inter-stage overflow links

I. INTRODUCTION

Switching networks can be divided into blocking and non-blocking networks [1]-[3]. In the case of blocking switching networks, the internal blocking phenomenon can be observed and, consequently, some of the offered traffic is lost. One of the methods of reducing this phenomenon, aside from repacking and rearrangement [2], [3], is the application of inter-stage overflow links [4]. This solution is based on the introduction of additional links between switches of a given stage of the switching network. Overflow links were introduced for the first time in the Pentaconta dial telephone switching systems [3]. In switching networks of the Pentaconta system [4], overflow links were used in the first stage. This allowed design engineers to decrease the internal blocking probability by several percent [5].
The possibility of introducing overflow links was also considered in digital switching networks [6]-[8]. Subsequently, the application of additional overflow links was considered in the context of multi-service switching networks. The first simulation studies [9], [10] confirmed that the application of overflow links in multi-service switching networks can lead to a significant decrease in the internal blocking probability for calls of all traffic classes offered to the networks. The simulation research [9], [10] was followed by analytical studies of multi-service switching networks with overflow links. In [11], a method for blocking probability calculation in multi-service networks with overflow links, point-to-group selection and Erlang traffic streams was proposed. This method was then extended in order to model switching networks with point-to-group selection and Engset traffic streams [12]. The first attempt to calculate the point-to-point blocking probability in multi-service switching networks with overflow links was made in [13]. The idea behind that method is based on a direct application of the value of the effective-availability parameter as the measure of the internal blocking probability. However, such an approach leads to significant inaccuracy in the determination of the internal blocking probability in switching networks with overflow links. In this paper, in order to increase the accuracy of the analytical model of multi-service switching networks with overflow links, a different approach to estimating the internal blocking probability is presented. The further part of the paper is organized as follows. Section II describes the structure and operation of a three-stage Clos network with overflow links. Section III discusses the PPBMT method (Point-to-Point Blocking with Multi-rate Traffic [14]) that forms the basis for the adopted method for modeling switching networks.
The method for determining the effective availability in networks with overflow links is also presented. Section IV presents a comparison of the calculations and the simulations for a selected switching network. Section V sums up the paper.

II. SWITCHING NETWORKS WITH OVERFLOW LINKS

Let us consider a multi-service switching network composed of k symmetrical k x k switches in each stage. Let us assume that each of the inter-stage links has a capacity equal to f BBUs (a BBU, basic bandwidth unit, is the greatest common divisor of the resources required by the call streams offered to the system [15]-[17]) and that the outgoing transmission links form link groups called directions. One of the typical methods of realizing outgoing directions is presented in Fig. 1. The network is offered multi-rate (multi-service) traffic that

consists of a number of classes of call streams. A class i call requires t_i BBUs to set up a connection. The call stream of each class is a Poisson stream.

Fig. 1. Clos switching network with a system of overflow links in the first stage.

The considered switching network operates with point-to-point selection. According to this type of selection, the process of setting up a new connection in a switching network without overflow links is as follows. First, the control device of the switching network determines the first-stage switch on whose incoming link a class i call appears. Next, the control system finds a last-stage switch that has a free outgoing link, i.e., at least t_i free BBUs in the required direction. Then, the controlling algorithm attempts to set up a connection between the selected switches of the first and the last stages. If such a connection path exists, the connection is realized. Otherwise, if the attempt is unsuccessful, the algorithm rejects the call due to internal blocking. In order to lower the internal blocking probability, overflow links have been introduced into the considered switching networks. In this paper it is assumed that the overflow links are introduced in the first stage in such a way that they connect an additional output of a given switch with an additional input of the neighboring switch of the same stage (Fig. 1); the output of the last switch is connected with the additional input of the first switch. Such a structure of the switching network leads to an increase in the number of possible connections within the network and, consequently, to a significant decrease in internal blocking [9], [10]. If no last-stage switch has t_i free BBUs in the required direction, i.e., if all output links in this direction are busy, the call is lost because of external blocking.
III. PPBMT METHOD

The PPBMT method can be used to determine the blocking probability in a multi-service switching network with point-to-point selection [14], [18]. The basic idea behind the method is provided by the observation [19], [20] that a z-stage network with point-to-point selection can be reduced to a (z-1)-stage network with point-to-group selection. In this system the output group of the network is composed of all inter-stage links that lead to the selected switch of the third stage (in Fig. 1 marked with bold lines leading to the first switch of the last stage). The total blocking probability E_tot,i for class i calls in a z-stage multi-service switching network in the PPBMT method can be determined on the basis of the following equation:

E_tot,i = E_ex,i + E_in,i [1 - E_ex,i],   (1)

where E_ex,i is the external blocking probability for class i calls and E_in,i is the internal blocking probability for class i calls (C(n, m) denotes the binomial coefficient):

E_ex,i = P(i, 0),   (2)

E_in,i = [ Σ_{s=1}^{k-d(i)} P(i, s) C(k-s, d(i)) ] / { [1 - P(i, 0)] C(k, d(i)) }.   (3)

In Formulas (2) and (3), P(i, s) is the distribution of free BBUs for calls of class i in the limited-availability group (modelling the outgoing directions), described later in the paper. The effective availability d(i) defines the number of second-stage switches available to one switch of the first stage in the three-stage switching network. According to the assumptions of the PPBMT method, in order to determine the effective availability we construct the so-called equivalent network, i.e., a single-service network with a topological structure identical to that of the multi-service network and with the capacity of the inter-stage links equal to a single BBU. The method for determining the effective availability will be presented further on in the text.
A. Modeling links in switching networks

In the paper it is assumed that the inter-stage links can be modeled by the full-availability group and the outgoing directions by the limited-availability group.

1) Model of outgoing directions: Let us consider a model of the limited-availability group (LAG) with multi-service traffic and the capacity V_L [21]. The limited-availability group consists of k identical separated resources (e.g., transmission links), each with a capacity equal to f BBUs. The total capacity of the system is thus V_L = k f BBUs. In the considered model, M traffic classes that belong to the set M = {1, 2, ..., M} are defined. A given class i requires t_i BBUs to set up a new connection. The service time for class i calls has an exponential distribution with the service rate μ_i. The limited-availability group services a class i call only when this call can be carried entirely by a single resource. Thus, the admission of a new call depends not only on the number of available resources in the whole limited-availability group but also on the distribution of available BBUs over the particular resources. In order to take into account the influence of this structure of the limited-availability group on the determination of the occupancy distribution by the Kaufman-Roberts recursion, the use of conditional coefficients of passing ξ_i(n) was proposed in [21]. The value of the parameter ξ_i(n) does not depend on the incoming call process and can be determined as follows [21]:

ξ_i(n) = [F(V_L - n, k, f, 0) - F(V_L - n, k, t_i - 1, 0)] / F(V_L - n, k, f, 0),   (4)

where F(x, k, f, t) is the number of arrangements of x free BBUs in k links, calculated under the assumption that the capacity of each link is equal to f BBUs and each link has at least t free BBUs:

F(x, k, f, t) = Σ_{r=0}^{⌊(x - kt)/(f - t + 1)⌋} (-1)^r C(k, r) C(x - k(t - 1) - 1 - r(f - t + 1), k - 1).   (5)

Having the conditional coefficients of passing, we can calculate the occupancy distribution, i.e., the probability [P_n]_{V_L} of n BBUs being busy, using the following modified Kaufman-Roberts formula [22], [23]:

n [P_n]_{V_L} = Σ_{i=1}^{M} A_i t_i [P_{n-t_i}]_{V_L} ξ_i(n - t_i),   (6)

where A_i is the intensity of class i traffic offered to the output group. On the basis of the occupancy distribution [P_n]_{V_L}, the blocking probability for calls of class i can be determined:

E_i = Σ_{n=0}^{V_L - t_i} [P_n]_{V_L} [1 - ξ_i(n)] + Σ_{n=V_L - t_i + 1}^{V_L} [P_n]_{V_L}.   (7)

The distribution [P_n]_{V_L} forms the basis for determining the distribution of available links in the LAG, i.e., the distribution P(i, s) [21]. This distribution determines the probability of the event that s output links in a given direction (out of all k links in this direction) can service a call of class i:

P(i, s) = Σ_{n=0}^{V_L} [P_n]_{V_L} P(i, s | V_L - n),   (8)

where P(i, s | x) is the conditional distribution of free links in the LAG, determined under the assumption that x BBUs in the group are free:

P(i, s | x) = C(k, s) Σ_{w=s t_i}^{Ψ} F(w, s, f, t_i) F(x - w, k - s, t_i - 1, 0) / F(x, k, f, 0),   (9)

where Ψ = sf if x ≥ sf, and Ψ = x if x < sf.

2) Model of inter-stage links: The inter-stage links of the switching network can be modeled by the full-availability group, i.e., by the model of a single resource (e.g., a single transmission link) with the complete sharing policy. Let us consider the full-availability group with multi-service traffic and with a capacity equal to f BBUs [24]. The full-availability group is offered the same traffic structure as in the case of the limited-availability group considered in Section III-A1.
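The LAG quantities above are straightforward to compute numerically. The following is a minimal Python sketch of Eqs. (4)-(7) for Erlang traffic only; the function and parameter names are illustrative, not taken from the cited papers.

```python
from math import comb

def arrangements(x, k, f, t):
    """F(x, k, f, t), Eq. (5): ways to place x free BBUs in k links,
    each link holding at least t and at most f free BBUs."""
    if x < k * t:
        return 0
    return sum((-1) ** r * comb(k, r)
               * comb(x - k * (t - 1) - 1 - r * (f - t + 1), k - 1)
               for r in range((x - k * t) // (f - t + 1) + 1))

def xi(n, V, k, f, t_i):
    """Conditional coefficient of passing, Eq. (4)."""
    free = V - n
    total = arrangements(free, k, f, 0)
    # arrangements in which every link has at most t_i - 1 free BBUs
    blocked = arrangements(free, k, t_i - 1, 0)
    return (total - blocked) / total

def occupancy(V, k, f, classes):
    """Modified Kaufman-Roberts recursion, Eq. (6).
    classes: list of (A_i, t_i) pairs of Erlang traffic."""
    p = [1.0] + [0.0] * V
    for n in range(1, V + 1):
        p[n] = sum(A * t * p[n - t] * xi(n - t, V, k, f, t)
                   for A, t in classes if n >= t) / n
    norm = sum(p)
    return [q / norm for q in p]

def blocking(V, k, f, classes, t_i):
    """Blocking probability E_i, Eq. (7)."""
    p = occupancy(V, k, f, classes)
    return (sum(p[n] * (1 - xi(n, V, k, f, t_i)) for n in range(V - t_i + 1))
            + sum(p[n] for n in range(V - t_i + 1, V + 1)))
```

For k = 1 the LAG degenerates to a single full-availability link, ξ_i(n) reduces to an indicator of t_i free BBUs, and the recursion becomes the classical Kaufman-Roberts formula, which makes a convenient sanity check (for a single-BBU class it reproduces the Erlang B value).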
In order to calculate the occupancy distribution and the blocking probability for individual traffic classes in a full-availability group that models the inter-stage links, Equations (6) and (7) can be used with the following assumption:

∀ 0 ≤ n ≤ f, ∀ 1 ≤ i ≤ M: ξ_i(n) = 1,   (10)

where f is the capacity of each inter-stage link.

B. Effective availability of switching network

In line with the PPBMT method, the analysis of a z-stage switching network can be reduced to the analysis of a (z-1)-stage switching network. In the case of a three-stage switching network, the effective availability for calls of class i is determined by the following formula [14], [18]:

d(i) = [1 - π(i)] k + π(i) Y_1(i).   (11)

The parameter π(i) is the probability of direct non-availability, i.e., the probability that a given switch of the second stage is not available to a given switch of the first stage. This parameter can be determined by Lee's method [25] on the basis of the equivalent graph of the two-stage network. The graph for a two-stage network with and without overflow links [11] is presented in Fig. 2. Figure 2a) shows all connection paths between one switch of the first stage and one switch of the second stage in the switching network without inter-stage overflow links. The introduction of the inter-stage overflow links between two switches of the first stage is followed by an increase in the number of connection paths in the graph. Assuming that the overflow links are lossless², the probability graph for the two-stage equivalent switching network with overflow links can be transformed into the form shown in Fig. 2b).

Fig. 2. Graph of a 2-stage network: a) without overflow links, b) with overflow links.

The parameter Y_1(i) in (11) is the average fictitious traffic of class i serviced by one switch of the first stage. This traffic can be determined on the basis of the following formula:

Y_1(i) = k e(i).   (12)

The parameter e(i) in Formula (12) and in Fig.
2 is the load of an inter-stage link of the equivalent switching network for traffic of class i. It was assumed in [14] that this load is equal to the blocking probability for calls of class i in the real inter-stage link with the capacity of f BBUs. The occupancy distribution and the blocking probability in a link with the capacity f can thus be determined on the basis of the Kaufman-Roberts distribution [22], [23]:

n [P_n]_f = Σ_{i=1}^{M} a_i t_i [P_{n-t_i}]_f,   (13)

e(i) = Σ_{n=f-t_i+1}^{f} [P_n]_f,   (14)

² Such an assumption results from the simulation studies presented in [9], in which it was observed that a two-fold increase in the capacity of the overflow links as compared to the capacity of the inter-stage links led to a virtual elimination of the phenomenon of blocking in the overflow links.
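The whole effective-availability computation for the equivalent network can be sketched in a few lines of Python. This is an illustrative sketch, not a reference implementation: it assumes Eq. (11) in the form d(i) = [1 - π(i)]k + π(i)Y_1(i), π(i) from (16) or (17), and per-link offered traffic a_i as input; all names are the author's own.

```python
def link_occupancy(f, classes):
    """Kaufman-Roberts recursion for one inter-stage link, Eq. (13).
    classes: list of (a_i, t_i) pairs of per-link offered traffic."""
    p = [1.0] + [0.0] * f
    for n in range(1, f + 1):
        p[n] = sum(a * t * p[n - t] for a, t in classes if n >= t) / n
    norm = sum(p)
    return [q / norm for q in p]

def link_load(f, classes, t_i):
    """e(i), Eq. (14): blocking probability of class i on one real link."""
    p = link_occupancy(f, classes)
    return sum(p[n] for n in range(f - t_i + 1, f + 1))

def effective_availability(e_i, k, overflow):
    """d(i), assuming Eq. (11) reads d(i) = [1 - pi(i)]k + pi(i)Y1(i)."""
    pi = e_i ** 2 if overflow else e_i   # Eq. (17) / Eq. (16)
    y1 = k * e_i                         # Eq. (12): fictitious traffic
    return (1 - pi) * k + pi * y1        # Eq. (11)
```

Under this reading, since π(i) = [e(i)]² < e(i) for 0 < e(i) < 1, the overflow variant always yields a higher effective availability: the difference works out to k·e(i)·[1 - e(i)]², which is what drives the lower internal blocking in Eq. (3).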

where a_i is the traffic offered to a single link of the switching network:

a_i = A_i / k.   (15)

After the determination of the load e(i) of the links of the equivalent network, it is possible to determine the value of the parameter π(i). On the basis of the graphs in Fig. 2 we have, for the switching network without overflow links:

π(i) = e(i),   (16)

and for the switching network with overflow links:

π(i) = [e(i)]².   (17)

IV. COMPARISON OF RESULTS OF ANALYTICAL AND SIMULATION MODELING

The simulation study as well as the analytical calculations were carried out for a 3-stage Clos network. The network was composed of 4 symmetrical 4 x 4 switches in each of the stages. The capacity of the inter-stage links and the outgoing links was 30 BBUs. The capacity of the overflow links was twice as high as that of the inter-stage links³. The network was offered multi-rate traffic that consisted of 3 traffic classes requiring 10 BBUs, 5 BBUs and 1 BBU, respectively. Traffic of all classes was Erlang traffic and was offered in the following proportions: A_1 t_1 : A_2 t_2 : A_3 t_3 = 1 : 1 : 1. To set up connections, point-to-point selection was used in the network. The simulation study was carried out using a dedicated simulator based on the event-scheduling method [9]. In the simulation experiments a 95 percent confidence interval was used, determined on the basis of Student's t-distribution for 10 series, each of 100,000 calls of the class that demanded the maximum number of resources. Figure 3 shows the results of the calculations and the simulation experiments for the internal blocking probability in the network without overflow links and in the network with overflow links. The results are presented in relation to the average traffic a offered to one BBU of the switching network. The results indicate a substantial decrease in the internal blocking probability following the introduction of the overflow link system. The results of the analytical calculations (marked with lines in Fig.
3) of the blocking probability are higher than the results of the simulation experiments (marked with dots, with confidence intervals taken into consideration), but the error of the calculations can be acceptable from the engineering point of view.

V. CONCLUSIONS

In the paper an effective-availability method for point-to-point blocking probability calculation in switching networks with overflow links has been proposed. The method is based on the PPBMT method, originally designed for modeling switching networks with multi-rate traffic, point-to-point selection, and without overflow links. The proposed modification of the PPBMT method consists in an appropriate modification of the structure of the probability graph of the switching network. The presented simulation and calculation results confirm the assumptions of the proposed method.

³ According to [10], such a capacity of overflow links stabilizes the internal blocking at the level of insignificant values as compared to the external blocking (at least one order of magnitude lower).

REFERENCES

[1] C. Clos, A study of non-blocking switching networks, Bell System Technical Journal.
[2] W. Kabaciński, Nonblocking electronic and photonic switching fabrics. Berlin: Springer.
[3] A. Jajszczyk, Wstęp do telekomutacji. Warszawa: Wydawnictwa Naukowo-Techniczne.
[4] R. Fortet, Systeme Pentaconta Calcul d'orange. Paris: LMT.
[5] M. Stasiak, Computation of the probability of losses in switching systems with mutual aid selectors, Rozprawy Elektrotechniczne, Journal of the Polish Academy of Science, vol. XXXII, no. 3, 1986, in Polish.
[6] H. Inose, T. Saito, and M. Kato, Three-stage time-division switching junctor as alternate route, Electronics Letters, vol. 2, no. 5.
[7] L. Katzschner, W. Lorcher, and H. Weisschuh, On an experimental local PCM switching network, in International Seminar on Integrated System for Speech, Video and Data Communication, Zurich, 1972.
[8] E. Ershova and V.
Ershov, Digital Systems for Information Distribution. Moscow: Radio and Communications, 1983, in Russian.
[9] M. Stasiak and P. Zwierzykowski, Multi-service switching networks with overflow links, Image Processing and Communications, vol. 15, no. 2.
[10] M. Stasiak and P. Zwierzykowski, Performance study in multi-rate switching networks with additional inter-stage links, in Proceedings of the Seventh Advanced International Conference on Telecommunications (AICT 2011), M. Głąbowski and D. K. Mynbaev, Eds. St. Maarten, The Netherlands Antilles: IARIA, Mar. 2011.
[11] M. Głąbowski and M. Stasiak, Internal blocking probability calculation in switching networks with additional inter-stage links, in Information Systems Architecture and Technology, A. Grzech, L. Borzemski, J. Świątek, and Z. Wilimowska, Eds. Oficyna Wydawnicza Politechniki Wrocławskiej, 2011, vol. Service Oriented Networked Systems.
[12] M. Głąbowski and M. D. Stasiak, Internal blocking probability calculation in switching networks with additional inter-stage links and Engset traffic, in Proceedings of the 8th IEEE, IET International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP 2012), Poznań, Poland, Jul.
[13] M. Głąbowski and M. D. Stasiak, Switching networks with overflow links and point-to-point selection, in Information Systems Architecture and Technology, A. Grzech, L. Borzemski, J. Świątek, and Z. Wilimowska, Eds. Oficyna Wydawnicza Politechniki Wrocławskiej, 2012, vol. Networks Design and Analysis.
[14] M. Stasiak, Combinatorial considerations for switching systems carrying multi-channel traffic streams, Annales des Télécommunications, vol. 51.
[15] V. G. Vassilakis, I. D. Moscholios, and M. D. Logothetis, Call-level performance modelling of elastic and adaptive service-classes with finite population, IEICE Transactions on Communications, vol. E91-B, no. 1.
[16] K. Ross, Multiservice Loss Models for Broadband Telecommunication Network. London: Springer.
[17] M. Pióro, J. Lubacz, and U.
Korner, Traffic engineering problems in multiservice circuit switched networks, Computer Networks and ISDN Systems, vol. 20.
[18] M. Głąbowski, Modelowanie systemów multi-rate ze strumieniami zgłoszeń BPP, ser. Rozprawy Nr 433. Poznań: Wydawnictwo Politechniki Poznańskiej, 2009, 189 pp.
[19] A. Lotze, A. Roder, and G. Thierer, PPL - a reliable method for the calculation of point-to-point loss in link systems, in Proceedings of the 8th International Teletraffic Congress, Melbourne, 1976, pp. 547/1-44.
[20] S. Kaczmarek, Metody analizy własności ruchowych cyfrowych pól komutacyjnych obsługujących ruch zintegrowany, ser. Elektronika. Wydawnictwo Politechniki Gdańskiej, 1993, vol. Zeszyty Naukowe Politechniki Gdańskiej.

Fig. 3. Internal point-to-point blocking probability in a 3-stage Clos network without and with overflow links (simulation points and calculated curves for classes 1-3, plotted against the traffic offered per BBU).

[21] M. Stasiak, Blocking probability in a limited-availability group carrying mixture of different multichannel traffic streams, Annales des Télécommunications, vol. 48, no. 1-2.
[22] J. Kaufman, Blocking in a shared resource environment, IEEE Transactions on Communications, vol. 29, no. 10.
[23] J. Roberts, A service system with heterogeneous user requirements - application to multi-service telecommunications systems, in Proceedings of Performance of Data Communications Systems and their Applications, G. Pujolle, Ed. Amsterdam: North Holland, 1981.
[24] M. Głąbowski, M. Sobieraj, and M. Stasiak, Analytical modeling of multi-service systems with multi-service sources, in Proceedings of 16th Asia-Pacific Conference on Communications (APCC). Auckland, New Zealand: IEEE, Oct. 2010.
[25] C. Lee, Analysis of switching networks, Bell System Technical Journal, vol. 34, no. 6, 1955.


Erlang's Ideal Grading with Multiplication of Links and BPP Traffic

Slawomir Hanczewski
Chair of Communication and Computer Networks, Poznan University of Technology

Abstract—The present article proposes a model of the Erlang's Ideal Grading with multiplication of links and BPP traffic streams. The model can be used for modelling the radio interface of the UMTS (WCDMA) system, as well as for modelling switching networks carrying a mixture of different multi-rate traffic streams. The results of the analytical calculations are compared with simulation data for selected non-full-availability groups, which confirms the high accuracy of the proposed model.

I. INTRODUCTION

In a non-full-availability group [1] (also called a grading), traffic sources have no access to all links of the group but only to a certain number of links d, called the availability. Traffic sources that have access to the same links form a grading (component) group. The number of grading groups (load groups) g and the number of available links d in a given grading group depend on the structure of the grading. In gradings, traffic sources that belong to different grading groups can have access to the same links. This phenomenon is called partial multiplication of outputs. The capacity of the non-full-availability group satisfies d ≤ V ≤ gd. A particular instance of the non-full-availability group is the Erlang's Ideal Grading (EIG), the structure and a precise model of which were proposed in [2]. In this group the number of grading groups is equal to the number of all possible choices of d links from the total pool of V. This means that any two grading groups differ in at least one link.
Furthermore, the group is characterized by the following properties: each grading group has access to the same number d of links of the group (constant availability); the traffic offered to the group by each grading group is the same, A_g = A/g (where A is the total traffic offered to the group and A_g is the traffic offered to the links available to a single grading group); for new calls, links are chosen randomly. Figure 1 shows an example of the EIG with the capacity V = 3 and the availability d = 2. A detailed description of the model proposed by Erlang is given in the vast literature of the subject, e.g., in [2] or [1]. For our purposes it is only worth stressing that, in order to derive the EIF formula (Erlang's Interconnection Formula) for determining the blocking probability in the group, Erlang made use of an analysis of the Markov process that takes place in the group. Since the number of grading groups in the ideal non-full-availability group is usually considerably higher than in real systems, initially the model was only of theoretical significance.

Fig. 1. Erlang's Ideal Grading (V = 3, d = 2, g = 3); each of the g = 3 grading groups is offered traffic λ/3.

It soon turned out, however, that, for g ≥ 5, the properties of real non-full-availability groups are similar to those of the ideal non-full-availability group. Therefore, the EIF formula was increasingly applied for determining blocking in real groups. Due to the complex and complicated computational process, the application of the EIF formula was initially limited to groups with low capacities only. Hence, Erlang proposed an approximate method that made it possible to carry out calculations for groups with higher capacities [2]. The relevant literature abounds in numerous other approximate solutions worked out for engineering applications. One of them is O'Dell's method [3].
In this method, the traffic serviced in the considered non-full-availability group with the capacity V is defined as the sum of the traffic carried in the full-availability group with the capacity d and the traffic carried in the non-full-availability group with the capacity V - d. Following this assumption, O'Dell derived a formula enabling, for a given group structure (called the O'Dell group), a determination of the blocking probability. Another method is based on formulas that make it possible to determine the occupancy probability of given links in the full-availability group, e.g., the Palm-Jacobaeus (PJ) formula [4], [5]. It was proved that the PJ formula could be used for determining characteristics of non-full-availability groups under relatively small loads [6]. For high group loads, however, the obtained loss probabilities are far too overestimated. All methods that were known in the late 1960s were successively gathered and presented in Lotze's work [1]. Outgoing groups in switching networks of the first telephone exchanges were formed as non-full-availability groups. What followed was that selectors had no access to all outgoing

links but only to part of them. This solution was mainly imposed by economic reasons. With the introduction of the first electronic switching networks, non-full-availability groups were almost exclusively replaced by full-availability groups. Still, the analytical models worked out for non-full-availability groups continued to be used for modelling electronic switching networks [7], [8], [9]. The model of the Erlang's Ideal Grading in particular (the EIF formula) was used in the Divided Losses Method (DLM) [10], [11] for the approximation of outgoing groups in multi-stage switching networks with single-channel traffic. The EIF formula is also used in methods for determining the blocking probability in switching networks with multicast connections [12]. In [13], a model of a non-full-availability system, the so-called limited-availability group, is presented. This is a model of several separated links servicing a mixture of multi-rate traffic. The model was used, for example, in modelling outgoing groups in multi-stage switching networks carrying a mixture of different multi-rate traffic streams. Since all the above models of the non-full-availability group relate to single-service systems, they cannot be used for modelling systems with multi-rate traffic. The first model of the Erlang's Ideal Grading with multi-rate traffic appeared in 1993 [14]. That model formed the basis for the studies, currently carried out by the authors of this article, on applications of the Erlang's Ideal Grading model with multi-rate traffic to the modelling of modern telecommunications systems. The authors have worked out a model of a group in which each class of calls can have a different availability (the availability can take on non-integer values). The study conducted by the authors has also proved the applicability of the model of the non-full-availability group to modelling full-availability groups with reservation [15].
The present article proposes a model of the Erlang's Ideal Grading with multiplication of links and BPP (Binomial-Poisson-Pascal) traffic. The structure of the grading was proposed in [16]. The model of the proposed non-full-availability group worked out by the authors can be used for modelling the radio interface of the UMTS system, in which the interference imposed by other cells can be taken into account by changing the value of the availability parameter. The investigations carried out by the authors are very much in line with the current trends in modelling wireless systems by non-full-availability systems [17], [18], [19].

II. MODEL OF ERLANG'S IDEAL GRADING WITH A MULTIPLICATION OF LINKS AND BPP TRAFFIC

A. Structure of the group

The structure and an appropriate model (PCT1 traffic) of the Erlang's Ideal Grading with a multiplication of links is presented in [16]. Let us now consider the structure of this group. Figure 2 presents an example of a group with a multiplication of links with the capacity of 12 BBUs (to define the smallest unit of capacity of the system, the notion of the basic bandwidth unit (BBU) of a group is used [20]). The structure of the group is described by the following parameters:
- total capacity V, with V = vf,
- number of links v,
- capacity of a link f,
- total availability D, where D = df,
- availability to links d (this parameter directly corresponds to the availability parameter in the Erlang model).

The group under scrutiny is composed of 3 links, each with the capacity f = 4. The group services two classes of calls that demand t_1 = 1 and t_2 = 4 BBUs, respectively. Since the availability to links is fixed (d = 2), the traffic sources related to the serviced classes of calls are divided into three grading groups (g = C(v, d) = C(3, 2) = 3).

Fig. 2. Structure of the Erlang's Ideal Grading (V = 12, v = 3, f = 4).
B. BPP traffic

In any telecommunications system, BPP traffic occurs when the call stream appearing at the input of the system is composed of a mixture of Erlang, Engset and Pascal traffic streams. In the case of an Erlang stream, the traffic offered by calls is constant and is equal to:

A_i = λ_i / μ_i,   (1)

where λ_i is the arrival rate of calls of class i and μ_i is the average service rate of calls of class i. In the case of Engset and Pascal streams, the traffic offered by these streams depends on the number of traffic sources and is equal to:

A_j(n) = [N_j - n_j(n)] Λ_j / μ_j,   (2)

A_k(n) = [S_k - n_k(n)] Υ_k / μ_k,   (3)

where: N_j is the number of Engset traffic sources of class j, Λ_j is the arrival rate of calls generated by a single free source of class j,

S_k is the number of Pascal traffic sources of class k, Υ_k is the arrival rate of calls generated by a single free source of class k, n_j(n) is the average number of class j Engset sources serviced in the occupancy state n, and n_k(n) is the average number of class k Pascal sources serviced in the occupancy state n.

C. Model

Consider now a model of the Erlang's Ideal Grading with a multiplication of links and BPP traffic. Our assumption is that the group services m_I Erlang traffic classes, m_J Engset call classes and m_K Pascal classes; thus the total number of serviced classes is equal to:

m = m_I + m_J + m_K.   (4)

If we adopt the symbol c to denote any class of calls, then the number of bandwidth units demanded by this class for service can be denoted as t_c. A new call is admitted for service only when it can be serviced by BBUs that belong to one of the available links. Additionally, the group satisfies all the assumptions made for the Erlang's Ideal Grading, i.e.: a free link for a new call is chosen randomly (free BBUs within the selected free link are also chosen randomly), and the offered traffic is distributed uniformly over all grading groups. Based on the model of [21], the blocking probability in the considered group with BPP traffic can be determined in the following way. Let us first determine the blocking probability β_c for calls of class c in a single grading group. A blocking state for calls of class c occurs in a randomly chosen grading group when none of the available links has at least t_c free BBUs. This event always occurs when the total number of free BBUs in the available links is lower than the number of BBUs t_c demanded by calls of class c. Following similar reasoning as in the case of the non-full-availability group without multiplication, the blocking state will always occur if the service process in the grading group under consideration is in one of the states belonging to the set: Ψ_c = {D - t_c + 1, D - t_c + 2, ..., D}.
In the group with the multiplication of links, the set Ψ_c does not, however, include all blocking states. Let us then consider an unfavourable distribution of busy BBUs over the available links using the example of the group presented in Figure 2. Our considerations will be carried out for a call of class 2, which demands 4 BBUs. The second grading group has two links: 2 and 1. Assume that the second grading group is in the occupancy state of 2 BBUs. This means that the group services two calls of the first class (t_1 = 1). The feasible arrangements of busy BBUs in the available links of the second grading group are presented in Figure 3 (which link is busy is of no importance here, only the number of busy BBUs in particular links). Two kinds of arrangement of busy BBUs are clearly visible. In the first case (Figure 3a), all busy BBUs are in one available link (there are two such arrangements). Thus, all BBUs in the other link are free and the blocking state does not occur. In the other case (Figure 3b), there is one busy BBU in each

Fig. 3. Arrangement of busy (free) BBUs in the second grading group.

available link of the second grading group (there is only one such arrangement). Such an arrangement of busy BBUs causes the group to be in the blocking state for calls that demand 4 BBUs. In the considered case, the blocking state thus occurs in one of the three possible arrangements of busy BBUs in the available links. Let us generalize the above considerations. The blocking state in a randomly selected grading group composed of d links can occur for calls of class c demanding t_c BBUs if the number of busy BBUs in each available link is equal to or higher than f - t_c + 1. This means that the blocking state can occur if in all d available links the total number of busy BBUs is equal to or higher than:

x = (f - t_c + 1) d.
(5)

On the basis of these considerations, we can now complement the set of states Ψ_c with the states in which the number of busy BBUs in the links of the grading group satisfies (f - t_c + 1)d ≤ x ≤ D - t_c. Therefore, the set Ψ_c for the non-full-availability group with the multiplication of links can be rewritten as follows: Ψ_c = {(f - t_c + 1)d, (f - t_c + 1)d + 1, ..., D - t_c, D - t_c + 1, D - t_c + 2, ..., D}.

Assume now that the number of all busy BBUs in the group is equal to n. The grading group under consideration will be blocked if the x busy BBUs in this group (x ≤ n) satisfy the condition x ∈ Ψ_c. The probability of such an event can be written as follows:

β_c(n, x) = P(x ∈ Ψ_c | n) P_A(x), (6)

where:
P(x ∈ Ψ_c | n) is the probability that the number of busy BBUs in the grading group satisfies the condition x ∈ Ψ_c, under the assumption that there are n busy BBUs in the whole group,
P_A(x) is the probability of an unfavourable arrangement of the x busy BBUs in the group which leads to a blocking event; this probability is equivalent to the probability of an unfavourable arrangement of the D - x free BBUs in a given grading group.

Due to the properties of the Erlang's Ideal Grading, the probability P(x ∈ Ψ_c | n) can be determined on the basis of the hypergeometric distribution, which in the considered case takes the following form:

P(x ∈ Ψ_c | n) = [C(D, x) C(V - D, n - x)] / C(V, n), (7)

where C(a, b) denotes the binomial coefficient.
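As a small numerical sketch of the blocking-state set Ψ_c and the hypergeometric probability (7) (our own illustration; the parameter values in the test are arbitrary, not taken from the paper):

```python
from math import comb

def blocking_states(f, d, D, t_c):
    # Extended set Psi_c: a grading group of d available links (capacity f
    # each, D = f * d BBUs seen in total) may block a class-c call demanding
    # t_c BBUs once every link holds at least f - t_c + 1 busy BBUs.
    return list(range((f - t_c + 1) * d, D + 1))

def p_x_given_n(x, n, D, V):
    # Eq. (7): probability that exactly x of the n busy BBUs of the whole
    # group (capacity V) fall among the D BBUs seen by this grading group.
    if x > n or x > D or n - x > V - D:
        return 0.0
    return comb(D, x) * comb(V - D, n - x) / comb(V, n)
```

Summing p_x_given_n over all feasible x gives 1, as expected of a hypergeometric distribution.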

The probability P_A(x) in a given grading group can be defined as the ratio of the number of unfavourable arrangements of free BBUs to the number of all possible arrangements of free BBUs in the grading group. The number of arrangements of z elements in v boxes, with the capacity of f elements each, can be defined by the following combinatorial formula [13]:

F(z, v, f) = Σ_{i=0}^{⌊z/(f+1)⌋} (-1)^i C(v, i) C(z - i(f + 1) + v - 1, v - 1). (8)

Using (8), the blocking probability of the grading group in which there are x busy BBUs, P_A(x), can be written in the following form:

P_A(x) = F(D - x, d, t_c - 1) / F(D - x, d, f), (9)

where F(D - x, d, t_c - 1) denotes the number of unfavourable distributions of the D - x free BBUs in d links, under the assumption that each of them has at most t_c - 1 free BBUs (which is the necessary condition for the blocking state to occur), while F(D - x, d, f) denotes the number of all possible distributions of the free BBUs. Substituting (7) and (9) into (6), we get:

β_c(n, x) = [C(D, x) C(V - D, n - x) / C(V, n)] · [F(D - x, d, t_c - 1) / F(D - x, d, f)]. (10)

Taking all possible blocking states in the grading group into consideration, the blocking probability of this group for calls of class c, in the occupancy state n, is equal to:

β_c(n) = Σ_{x=(f-t_c+1)d}^{k} β_c(n, x), (11)

where k = n for (f - t_c + 1)d ≤ n ≤ D, and k = D for n > D. It should be stressed that, due to the properties of the Erlang's ideal grading, (11) determines the conditional blocking probability in the discussed group.

The conditional probability of passing σ_c(n) for calls of class c complements the conditional blocking probability β_c(n) determined by Formula (11). Therefore, we can write:

σ_c(n) = 1 - β_c(n) = 1 - Σ_{x=(f-t_c+1)d}^{k} β_c(n, x).
(12)

Having determined the conditional probabilities for the considered group, we are in a position to determine the occupancy distribution in the group. Using the considerations for systems with BPP traffic and gradings [22], the formula for the occupancy distribution can be rewritten in the following form:

nP^(l)(n) = Σ_{i=1}^{m_I} A_i t_i σ_i(n - t_i) P^(l)(n - t_i) +
+ Σ_{j=1}^{m_J} α_j [N_j - n_j^(l-1)(n - t_j)] σ_j(n - t_j) t_j P^(l)(n - t_j) +
+ Σ_{k=1}^{m_K} β_k [S_k + n_k^(l-1)(n - t_k)] σ_k(n - t_k) t_k P^(l)(n - t_k), (13)

where l is the iteration number. The value of the blocking probability can eventually be determined by means of the following equation:

E_c = Σ_{n=(f-t_c+1)d}^{V} [P_n]_V β_c(n). (14)

III. EXEMPLARY RESULTS

The analytical model presented in the article is an approximate one. To verify the accuracy of the adopted theoretical assumptions it is then necessary to compare the results obtained from the analytical calculations with the results of simulation experiments. The study was carried out for groups with different structures (capacity, number of links, capacity of links and availability) carrying mixtures of different multirate traffic. Figure 4 shows the results for the group with the following structure: V = 210, v = 6, f = 35, d = 3 and D = 105. The group services three classes of calls that demand, respectively, t_1 = 1 (Erlang), t_2 = 3 (Engset) and t_3 = 7 (Pascal) BBUs. Figure 5, in turn, shows the results of the investigations for the group: V = 384, v = 6, f = 64 and d = 4.

Fig. 4. Loss probability in the Erlang's Ideal Grading with the multiplication of links: V = 210, v = 6, f = 35 and d = 3
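The conclusions note that the proposed formulae are easily programmable. As an illustration, here is a self-contained Python sketch of formulae (8)-(14) for the special case of Erlang classes only (m_J = m_K = 0, so recursion (13) needs no iteration over l and reduces to a Kaufman-Roberts-like form). The function interface is ours, not the authors'; the parameters in the test are arbitrary:

```python
from math import comb

def F(z, v, f):
    # Eq. (8): ways to distribute z identical free BBUs into v links of
    # capacity f each (inclusion-exclusion over "overfull" links).
    return sum((-1) ** i * comb(v, i) * comb(z - i * (f + 1) + v - 1, v - 1)
               for i in range(z // (f + 1) + 1))

def beta_c(n, V, D, d, f, t_c):
    # Eqs. (7), (9)-(11): conditional blocking probability for class c
    # when n BBUs of the whole group (capacity V) are busy.
    total, lo = 0.0, (f - t_c + 1) * d
    for x in range(lo, min(n, D) + 1):
        if n - x > V - D:
            continue
        p_hyp = comb(D, x) * comb(V - D, n - x) / comb(V, n)   # eq. (7)
        p_arr = F(D - x, d, t_c - 1) / F(D - x, d, f)          # eq. (9)
        total += p_hyp * p_arr                                 # eqs. (6), (10)
    return total

def loss_probability(V, D, d, f, classes):
    # Erlang-only special case of recursion (13), followed by eq. (14)
    # for each class; classes is a list of (A_i, t_i) pairs.
    sigma = {t: [1.0 - beta_c(n, V, D, d, f, t) for n in range(V + 1)]
             for _, t in classes}                              # eq. (12)
    P = [1.0] + [0.0] * V
    for n in range(1, V + 1):
        P[n] = sum(A * t * sigma[t][n - t] * P[n - t]
                   for A, t in classes if n >= t) / n
    norm = sum(P)                                              # normalization
    return [sum(P[n] / norm * beta_c(n, V, D, d, f, t)
                for n in range((f - t + 1) * d, V + 1)) for _, t in classes]
```

With sigma identically 1 (full availability) the recursion collapses to the classical Kaufman-Roberts formula; with the conditional passing probabilities of (12) it yields the loss probabilities E_c of (14).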
The results presented in Figures 4 and 5 are given in relation to the traffic offered to one BBU of the group; solid lines denote the results of the calculations and points indicate the results of the simulation. The simulation study included 95% confidence intervals, which have not been drawn in the graphs on account of their low values. The comparison of the simulation data with the analytical calculations for the proposed structures of the group clearly indicates a satisfactory accuracy of the proposed model.

Fig. 5. Loss probability in the Erlang's Ideal Grading with the multiplication of links: V = 384, v = 6, f = 64 and d = 3

IV. CONCLUSIONS

The article presents a model of the Erlang's Ideal Grading with the multiplication of links and BPP traffic. To verify the accuracy of the proposed model, the results of the calculations were compared with the results of simulation. The simulation experiments indicate good accuracy of the calculations, which confirms the adopted theoretical assumptions of the proposed model. Furthermore, it should be stressed that the calculations made according to the proposed formulae are not complicated and are easily programmable. The model proposed in the article can also be used for modelling other telecommunication systems (e.g. UMTS).

[14] M. Stasiak. An approximate model of a switching network carrying mixture of different multichannel traffic streams. IEEE Transactions on Communications, 41(6).
[15] Maciej Stasiak and Slawomir Hanczewski. Approximation for multiservice systems with reservation by systems with limited-availability. In Nigel Thomas and Carlos Juiz, editors, EPEW, volume 5261 of Lecture Notes in Computer Science. Springer.
[16] Mariusz Glabowski, Slawomir Hanczewski, Maciej Stasiak, and Joanna Weissenberg. Modeling Erlang's Ideal Grading with multirate BPP traffic. Mathematical Problems in Engineering.
[17] V.B. Iversen. Modelling restricted accessibility for wireless multi-service systems. In Second International Workshop of the EURO-NGI Network of Excellence, volume 3883 of Lecture Notes in Computer Science.
[18] Maciej Stasiak and Slawomir Hanczewski. A model of WCDMA radio interface with reservation.
In Proceedings of the International Symposium on Information Theory and Its Applications (ISITA 2008).
[19] M. Stasiak, P. Zwierzykowski, J. Wiewióra, and D. Parniewicz. An Approximate Model of the WCDMA Interface Servicing a Mixture of Multi-rate Traffic Streams with Priorities. In Computer Performance Engineering, volume 5261 of Lecture Notes in Computer Science. Springer, Berlin; Palma de Mallorca, Spain, September.
[20] J.W. Roberts, editor. Performance Evaluation and Design of Multiservice Networks, Final Report COST 224. Commission of the European Communities, Brussels.
[21] Mariusz Głąbowski, Maciej Stasiak, and Joanna Weissenberg. Properties of recurrent equations for the full-availability group with BPP traffic. Mathematical Problems in Engineering, 2012:17, Article ID.
[22] M. Głąbowski, A. Kaliszan, and M. Stasiak. Modeling product-form state-dependent systems with BPP traffic. Perform. Eval., 67, March.

REFERENCES

[1] A. Lotze. History and development of grading theory. In Proceedings of the 5th International Teletraffic Congress, New York.
[2] E. Brockmeyer. A survey of A.K. Erlang's mathematical works. Danish Acad. of Tech. Sciences, (2).
[3] Dell. An outline of the trunking aspect of automatic telephony. (65).
[4] C. Palm. Några följdsatser ur de Erlangska formlerna [Some conclusions from the Erlang formulae]. Tekn. Medd. Kungl. Telegrafstyrelsen, (1-3).
[5] C. Jacobaeus. A study on congestion in link-systems. Ericsson Technics, (48):1-68.
[6] M.A. Szneps. Sistiemy raspriedielienia informacji. Mietody rascziota [Information distribution systems: calculation methods]. Radio i Swiaz, Moskwa.
[7] A. Lotze. Bericht über verkehrstheoretische Untersuchungen CIRB [Report on traffic-theoretical investigations CIRB]. Technical report, Institut für Nachrichten-Vermittlung und Datenverarbeitung, University of Stuttgart.
[8] A. Lotze, A. Roder, and G. Thierer. PPL: a reliable method for the calculation of point-to-point loss in link systems. In Proceedings of the 8th International Teletraffic Congress, pages 547/1-44, Melbourne.
[9] A. Lotze, A. Roder, and G. Thierer. PCM-charts.
Technical report, Institute of Switching and Data Technics, University of Stuttgart.
[10] E.B. Ershova and V.A. Ershov. Digital Systems for Information Distribution. Radio and Communications, Moscow. In Russian.
[11] V.A. Ershov and M. Igelnik. Grade of service analysis for multi-channel switching in ISDN. In Proceedings of the 13th International Teletraffic Congress, volume 14, Copenhagen.
[12] Slawomir Hanczewski and Maciej Stasiak. Point-to-group blocking in 3-stage switching networks with multicast traffic streams. In Petre Dini, Pascal Lorenz, and José Neuman de Souza, editors, Proceedings of the First International Workshop (SAPIR 2004), volume 3126 of Lecture Notes in Computer Science, Fortaleza, August. Springer.
[13] M. Stasiak. Blocking probability in a limited-availability group carrying mixture of different multichannel traffic streams. Annales des Télécommunications, 48(1-2):71-76, 1993.


Performance evaluation of the MSMPS algorithm under different distribution traffic

Grzegorz Danilewicz, Member IEEE, Marcin Dziuba
Chair of Communication and Computer Networks, Poznan University of Technology, Polanka 3, Poznań

Abstract: In this paper, we present a performance evaluation of the Maximal Size Matching with Permanent Selection (MSMPS) algorithm [1]. The first and full description of the algorithm, together with its performance evaluation under uniform traffic, was presented earlier in [1]. In this article, we present computer simulation results under nonuniformly, diagonally and lin-diagonally distributed traffic models. The simulations were performed for different switch sizes, including 4 x 4 and 8 x 8. We discuss the results for the MSMPS algorithm and for other algorithms well known in the literature. All results are presented for a single switch size, but the simulation results are representative of other switch sizes. It is shown that our algorithm achieves performance similar to that of other algorithms, but it does not need any additional calculations. As a result, the MSMPS algorithm can easily be implemented in hardware.

I. INTRODUCTION

Several well-known scheduling algorithms have been proposed in the literature [2], [3], [4], [5], [6], [7]. These algorithms, which are responsible for the configuration of a switching fabric, are very sophisticated and achieve good efficiency and short time delays. When a new algorithm is designed, a theoretical approach is usually applied, which means that designers do not pay attention to implementation constraints. Most well-known algorithms that achieve good performance results are very difficult to implement in real switching fabric hardware. This is due to the very complicated calculations which must be performed while the algorithms work. This high computational complexity makes these algorithms impractical.
Instead, most new-generation switches and routers use much simpler scheduling algorithms to control and configure the switching fabric. One such algorithm is MSMPS [1], which achieves performance similar to the other algorithms but does not need to perform many complicated calculations. Another important factor that influences switch and router performance is the switching fabric buffer architecture. In our research we study a switching fabric with a VOQ (Virtual Output Queue) system [7], [8]. This buffering system has been proposed to solve the HOL (Head of Line) blocking effect. In a VOQ system, each switching fabric input has a separate queue for packets directed to each particular output of the switching fabric. With this kind of architecture, performance depends only on a good scheduling algorithm. The algorithm should be very fast, achieve good results (high efficiency and short time delay) and be easy to implement in hardware.

In the presented architecture we use a centralized scheduling mechanism. In this mechanism, all decisions concerning the setting up of connections between switching fabric inputs and outputs (connection patterns) are made by an algorithm or driver implemented in a separate control module. The driver can control several connected switching fabrics located in different pieces of equipment (i.e. routers). Such a solution can be used in new-generation networks, for example in SDN (Software Defined Networks): routers are responsible for directing packets along data paths, but high-level decisions (routing) are moved to a separate module or device located outside the routers. Routing decisions are sent to the routers to execute suitable connection patterns in each switching fabric of each router.

This paper is organized as follows. In Section II, the switch architecture is discussed. In the next part of the article, all simulation parameters are explained; in the same section, we present all traffic distribution models used in our research.
We then show computer simulation results under different traffic patterns. The results achieved for the MSMPS algorithm are compared with other algorithms well known in the literature. Finally, some conclusions are given.

II. SWITCH ARCHITECTURE

The general VOQ switch architecture is presented in Figure 1 [9].

Fig. 1. General VOQ switch architecture: N inputs, each holding queues VOQ(i,0), ..., VOQ(i,N-1), connected through the switching fabric to N outputs under the control of the scheduling system

In our research we use a switching fabric with an input queuing system (Input Queued switches), where buffers are placed at the inputs. Each input has a separate queue which is divided into N independent VOQs. The total number of virtual queues depends on the number of inputs and outputs. We assume that in our switch the number of inputs and outputs is equal and in

general case is N. Based on this assumption, the total number of VOQs in a switching fabric with N inputs/outputs is equal to N². Additionally, each virtual queue is denoted by VOQ(i, j), where i is the input port number and j is the output port number, with 0 ≤ i ≤ N - 1 and 0 ≤ j ≤ N - 1. Between the input and output modules, the switching fabric is placed; in the switching fabric, all connections between inputs and outputs are established. The implemented algorithm is responsible for the proper configuration of this switching fabric. The most important module in the presented symmetrical switch is the scheduling system module. This is the module where the algorithms are implemented. The scheduling module stores all information about queue states, which means that the scheduling system knows the numbers of packets waiting in all queues to be sent through the switch. This information is necessary for the MSMPS algorithm to make the right decision about the connection pattern in the switching fabric.

III. SIMULATION CONDITIONS

In this paper, we present performance results for several scheduling algorithms well known in the literature and for the MSMPS algorithm. All graphs are plotted from the results of computer simulations. Packets arrive at the inputs according to a Bernoulli arrival model [10], [11]. Under this model, at most one packet can arrive at an input in each time slot (the basic time unit), and we assume that one packet occupies exactly one time slot. In the Bernoulli model, the probability that a packet arrives at an input is equal to p, where 0 < p ≤ 1. We present simulation results as the mean value of ten independent simulation runs. The number of iterations in one simulation run is fixed, with the first steps reserved for obtaining convergence in the simulation environment. We also assume that our switching fabric is strict-sense nonblocking.
This means that it is always possible to establish a connection between any suitable idle input and any suitable idle output of the switching fabric. The performance results consider the following parameters.

1) Efficiency: Efficiency is calculated according to equation (1); the numerator is the number of packets passed through the switching fabric in the n-th time slot and the denominator is the number of packets which have arrived at the switch buffers in the n-th time slot [1]:

q_n = a_n / b_n, (1)

where: n is the time slot number, a_n is the number of packets passed through the switching fabric in time slot n, and b_n is the number of packets which arrive at the switching fabric in time slot n.

2) Mean Time Delay: Mean Time Delay (MTD) is calculated according to equation (2); the numerator is the sum, over served packets, of the difference between the time when a packet is transferred by the switch and the time when the packet arrived at the buffer system, and the denominator is the total number of packets served by the switching fabric:

MTD = Σ (t_out - t_in) / k, (2)

where: MTD is the Mean Time Delay, t_in is the time when a packet arrived at the VOQ, t_out is the time when the same packet is transferred by the switching fabric, and k is the number of packets.

Three distributed traffic models were taken into account in this paper. Each model determines the probability that a packet which appears at an input will be directed to a certain output. The considered traffic models are:

A. Non-uniformly distributed traffic

The probability that a packet arriving at input i is directed to output j is presented in Table I. For readability, the table shows the traffic distribution in a 4 x 4 switch; analogous traffic distributions are used for the other switch sizes. It can be observed from Table I that in this type of traffic model, some outputs have a higher probability of being selected [12]. This probability p_ij can be calculated according to equation (3):

p_ij = 1/2 for i = j, and p_ij = 1/(2(N - 1)) for i ≠ j, (3)
where N is the number of switch inputs/outputs.

TABLE I. NON-UNIFORMLY DISTRIBUTED TRAFFIC IN A 4 x 4 SWITCH WITH VOQ (entry (i, j) equals 1/2 for i = j and 1/6 otherwise)

B. Diagonally distributed traffic

In this type of distribution model, the traffic is concentrated on two diagonals of the traffic matrix. The probability that a packet appearing at input i is directed to output j is equal to p_ij = 1/2 on these diagonals, and p_ij = 0 for all other input-output pairs [13], [14], [15], [16]. From Table II it can be observed that input i has packets only for output i and for output (i + (N - 1)) mod N.
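The two metrics above (equations (1) and (2)), the non-uniform and diagonal traffic matrices, and the lin-diagonal model defined in the next subsection can all be generated in a few lines. This is our own sketch of the distributions, not the authors' simulator; for a 4 x 4 switch the generated values match Tables I-III:

```python
def efficiency(a_n, b_n):
    # Eq. (1): packets passed through the fabric over packets arrived
    # in the same time slot (b_n must be nonzero).
    return a_n / b_n

def mean_time_delay(pairs):
    # Eq. (2): pairs is a list of (t_in, t_out) per served packet.
    return sum(t_out - t_in for t_in, t_out in pairs) / len(pairs)

def nonuniform(N):
    # Eq. (3): p_ij = 1/2 on the diagonal, 1/(2(N - 1)) elsewhere.
    return [[0.5 if i == j else 0.5 / (N - 1) for j in range(N)]
            for i in range(N)]

def diagonal(N):
    # Input i sends only to outputs i and (i + N - 1) mod N, 1/2 each.
    p = [[0.0] * N for _ in range(N)]
    for i in range(N):
        p[i][i] += 0.5
        p[i][(i + N - 1) % N] += 0.5
    return p

def lin_diagonal(N, p=1.0):
    # Eq. (4): p_d = p(N - d) / (N(N + 1)/2) at j = (i + d) mod N, so the
    # load decreases linearly with the diagonal index d.
    m = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for d in range(N):
            m[i][(i + d) % N] = p * (N - d) / (N * (N + 1) / 2)
    return m
```

Each row of nonuniform and diagonal sums to 1 (the destination distribution of an arriving packet), while each row of lin_diagonal sums to the Bernoulli arrival probability p.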

TABLE II. DIAGONALLY DISTRIBUTED TRAFFIC IN A 4 x 4 SWITCH WITH VOQ (entry (i, j) equals 1/2 for j = i and for j = (i + 3) mod 4, and 0 otherwise)

C. Lin-diagonally distributed traffic

The lin-diagonally distributed model is a modification of the diagonally distributed model. The considered lin-diagonal model and its probabilities are presented in Table III. It can be seen from this table that the load decreases linearly from one diagonal to the other. In the general case, the probability can be calculated according to the following formula [17]:

p_d = p (N - d) / (N(N + 1)/2) with d = 0, ..., N - 1, then p_ij = p_d if j = (i + d) mod N, (4)

where: p_d is the probability of a packet arriving in lin-diagonally distributed traffic, p is the probability of a packet arriving in the Bernoulli process, N is the number of switch inputs/outputs, and d is the diagonal (offset) number.

TABLE III. LIN-DIAGONALLY DISTRIBUTED TRAFFIC IN A 4 x 4 SWITCH WITH VOQ (row i contains 4p/10, 3p/10, 2p/10, p/10 at outputs (i + d) mod 4 for d = 0, 1, 2, 3)

Above 20% load, the efficiency of the MSMPS algorithm increases and reaches a mean value of about 0.95, with a growing tendency. Different behaviour can be observed for the other algorithms: all of them maintain efficiency at a high level of about 1, but above 60% load PIM and iSLIP decrease rapidly with nonuniformly and lin-diagonally distributed traffic. Only MMRRS maintains an efficiency of about 1 for both mentioned traffic distributions. The situation is different with diagonally distributed traffic: the efficiency of the MSMPS algorithm systematically decreases and, above 40% load, falls below 0.9. This type of distribution causes packets to be concentrated on two diagonals of the traffic matrix (Table II), and for this traffic model our algorithm achieves the worst results.

Fig. 2. The efficiency for Bernoulli arrivals with nonuniformly distributed traffic

IV.
SIMULATION RESULTS ANALYSIS

In this section the performance of the MSMPS algorithm is compared with other algorithms for VOQ switches. To date, several scheduling algorithms have been presented in the literature [2], [3], [4], [5], [6], [7]. We compare and analyze results for: iSLIP, presented in [2]; Maximal Matching with Round-Robin Selection (MMRRS) [3], [4], [5]; Hierarchical Round-Robin Matching (HRRM) [6]; and Parallel Iterative Matching (PIM) [7].

The efficiency is plotted in Figures 2, 3 and 4. This parameter was calculated according to equation (1). As with the MTD results, we present results for a single switch size only. From Figures 2 and 3 it can be observed that for low traffic load (between 10% and 20%) our algorithm achieves the worst results compared to the other algorithms. The conducted simulations confirm that the MSMPS algorithm cannot cope with low traffic load under the different traffic models. The reason is that our algorithm concentrates on access alignment for all outputs instead of avoiding empty connections (connections where no packets are to be sent through the switch [1]).

Fig. 3. The efficiency for Bernoulli arrivals with lin-diagonally distributed traffic

The MTD is a function of the traffic load and is plotted in Figures 5, 6 and 7. MTD is measured in time slots, where one slot is the basic time unit of the presented system. Computer simulations were performed for different switch sizes; only the results for one switch size are shown in the figures. We assume that the input buffers are infinitely long. We present results for Bernoulli arrivals with different traffic distributions. From Figure 5 it can be seen that for nonuniformly distributed traffic the MSMPS algorithm achieves the best results (the lowest MTD) compared to the other algorithms. Up to 75% load, only the HRRM algorithm achieves similar results. The highest MTD for this type of distribution is reached by the MMRRS algorithm.

Fig. 4. The efficiency for Bernoulli arrivals with diagonally distributed traffic

For 10% load, the MMRRS algorithm has already reached a delay of 4 cells, while the rest of the algorithms reach results close to 0. Very similar results are achieved by all algorithms with lin-diagonally distributed traffic (Figure 6). The MSMPS algorithm achieves almost the same results as for the nonuniform distribution, and the same can be observed for the MMRRS algorithm. An interesting situation occurs above 60% load, when the MTD for the PIM and iSLIP algorithms increases rapidly; this may be caused by an arbiter synchronization problem.

Fig. 5. The MTD for Bernoulli arrivals with nonuniformly distributed traffic

From Figure 7, with results for diagonally distributed traffic, it can be seen that the MTD for our algorithm increases rapidly. This is because our algorithm is based on permanent connection patterns, and at high load some outputs are blocked; as a result, too many empty connections are established. This effect can be eliminated by setting up connections (between inputs and outputs) for more than one time slot. Acceptable results are reached by the MMRRS algorithm, which behaves extremely well for diagonally distributed traffic.

Fig. 6. The MTD for Bernoulli arrivals with lin-diagonally distributed traffic

V. CONCLUSIONS AND FUTURE WORK

In this paper, we have presented performance results for the MSMPS scheduling algorithm for VOQ switches under different traffic patterns. Its performance confirms that the MSMPS algorithm can be used in practice: it achieves high efficiency while at the same time providing low latency. In future studies, we want to implement this algorithm

Fig. 7.
The MTD for Bernoulli arrivals with diagonally distributed traffic

in separate chips or in the switching fabric equipment. Our algorithm works in a simple way and no additional calculations are needed. The MSMPS algorithm can also be modified to support different traffic priorities and switch architectures.

ACKNOWLEDGMENT

The work described in this paper was financed from research funding as research grant UMO-2011/01/B/ST7/.

REFERENCES

[1] G. Danilewicz, M. Dziuba, "The New MSMPS Packet Scheduling Algorithm for VOQ Switches," 8th IEEE/IET International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), Poznań, July.
[2] N. McKeown, "The iSLIP Scheduling Algorithm for Input-Queued Switches," IEEE/ACM Trans. on Networking, vol. 7, April.
[3] A. Baranowska, W. Kabaciński, "The New Packet Scheduling Algorithms for VOQ Switches," ICT 2004, LNCS 3124.
[4] A. Baranowska, W. Kabaciński, "MMRS and MMRRS Packet Scheduling Algorithms for VOQ Switches," MMB PCTS 2004, September.
[5] A. Baranowska, W. Kabaciński, "Evaluation of MMRS and MMRRS Packet Scheduling Algorithms for VOQ Switches under Bursty Packet Arrivals," High Performance Switching and Routing (HPSR), May.
[6] A. Baranowska, W. Kabaciński, "Hierarchical round-robin matching for virtual output queuing switches," Telecommunications, p. 196, July.
[7] T. Anderson et al., "High-speed switch scheduling for local-area networks," ACM Transactions on Computer Systems, vol. 11, no. 4, November 1993.

[8] Y. Tamir and G. Frazier, "High performance multiqueue buffers for VLSI communication switches," Proc. 15th Annu. Symp. Comput. Arch., June.
[9] A. Baranowska, W. Kabaciński, "Hierarchiczny Algorytm Planowania Przesyłania Pakietów Dla Przełącznika z VOQ" [A Hierarchical Packet Scheduling Algorithm for a VOQ Switch], Poznańskie Warsztaty Telekomunikacyjne (PWT), Poznań.
[10] H. Jonathan Chao, B. Liu, High Performance Switches and Routers, John Wiley and Sons, New Jersey.
[11] P. Giaccone, D. Shah, and S. Prabhakar, "An Implementable Parallel Scheduler for Input-Queued Switches," IEEE Micro, vol. 22, no. 1.
[12] K. Yoshigoe and K. J. Christensen, "An evolution to crossbar switches with virtual output queuing and buffered cross points," IEEE Network, vol. 17, no. 5, September.
[13] P. Giaccone, D. Shah, and S. Prabhakar, "An Implementable Parallel Scheduler for Input-Queued Switches," IEEE Micro, vol. 22, no. 1.
[14] D. Shah, P. Giaccone, and B. Prabhakar, "Efficient Randomized Algorithms for Input-Queued Switch Scheduling," Proc. of Hot Interconnects IX, January.
[15] P. Giaccone, B. Prabhakar, and D. Shah, "Randomized Scheduling Algorithms for High-Aggregate Bandwidth Switches," IEEE J. Select. Areas in Commun., vol. 21, no. 4, May.
[16] Y. Jiang and M. Hamdi, "A Fully Desynchronized Round-Robin Matching Scheduler for a VOQ Packet Switch Architecture," IEEE HPSR '01, May.
[17] A. Bianco, P. Giaccone, E. Leonardi, and F. Neri, "A Framework for Differential Frame-Based Matching Algorithms in Input-Queued Switches," IEEE Infocom, 2004.


Performance of mechanisms of resource limiting in networks with multi-layered virtualization

Bartłomiej Dabiński, Damian Petrecki, Paweł Świątek
Wroclaw University of Technology, Institute of Computer Science, Wrocław, Poland
bartlomiej.dabinski, damian.petrecki,

Abstract: We describe the performance of various methods of QoS assurance for individual connections in an environment made of virtual networks and dedicated end-to-end connections, on the basis of research conducted with our own network management system, named Execution Management, which uses the resource virtualization platforms VMware and Mininet for testing purposes. We compare the functionality and performance of our solution to widely implemented mechanisms such as OpenFlow and MPLS. We give the reasons for selecting well-known techniques to isolate networks and limit bandwidth on different levels of virtualization.

Index Terms: Execution Management, parallel networks, network virtualization performance, OpenFlow, MPLS, virtual networks, virtual networks performance

I. INTRODUCTION

The purpose of the project was to build a system that configures a network in order to satisfy the QoS requirements of a large number of applications. The applications run in multiple virtual, isolated networks that share a single physical network. The system has to create connections inside these networks, with specified paths and guaranteed bandwidths, dynamically in response to application requests. This means that the system must be aware of the network content. Only well-proven and commonly implemented algorithms, protocols and techniques were to be used, because the system should operate in a network built with generic equipment. This paper describes the selected resource virtualization method and compares its performance to solutions providing similar capabilities. Many techniques could have been applied to achieve the above-mentioned goals.
The first step of our work was to compare the features of Multiprotocol Label Switching (MPLS), IEEE 802.1ad (QinQ), IPv6-in-IPv6 tunneling and Provider Backbone Bridging. We decided to use VLANs to isolate virtual networks and to run dedicated virtual or physical machines for handling ISO OSI L3 networks inside these L2 networks. We then decided to build a centralized system to manage network resources. The task of the system is to handle application requests and to create dedicated tunnels inside virtual networks in reply to those requests. Each connection between end nodes and each virtual network must have a guaranteed bandwidth. A variety of traffic engineering methods were tested, and the shaper of the Linux tc tool was selected. The traffic of an end-to-end connection is limited on the virtual interfaces (end-to-end tunnel entry points) by the TBF classless queuing discipline. The traffic of virtual networks is limited on the physical interfaces that send traffic into the network: the u32 classifier filters packets belonging to a specific VLAN, and the traffic is enqueued to HTB classes to limit its bandwidth. Furthermore, the system ensures uninterrupted transmission without delay variation in the case of tunnel path or bandwidth modification.

The laboratory network was built using a VMware ESXi server (Debian hosts run in virtual machines), the Mininet service [4], a Juniper EX4200 switch and two MikroTik RB800 devices. Mininet was designed as a network emulator for testing OpenFlow, but we made some modifications to make Mininet meet our needs. We used the MikroTiks as precise bandwidth and latency testing tools. We examined the performance of the system both within the scope of delays in connection handling (creation, removal, modification) and within the scope of QoS assurance, which means guaranteeing the bandwidth requested by users' applications and preserving low packet flow latency during the reconfiguration of existing connections.
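The paper does not list the exact tc invocations, but the described setup (a TBF qdisc on each tunnel entry point, HTB classes on the physical egress interfaces) corresponds to commands of roughly the following shape. This Python helper only assembles illustrative command strings; the device names, handles and rate values are hypothetical, not taken from the authors' deployment:

```python
def tbf_qdisc(dev, rate_mbit, burst_kbit=32, latency_ms=50):
    # TBF classless qdisc limiting one end-to-end tunnel interface;
    # burst and latency defaults are placeholders for illustration.
    return (f"tc qdisc add dev {dev} root tbf rate {rate_mbit}mbit "
            f"burst {burst_kbit}kbit latency {latency_ms}ms")

def htb_class(dev, minor, rate_mbit):
    # HTB class on the physical interface; a separate u32 filter would
    # steer one VLAN's packets into this class (classid 1:<minor>).
    return (f"tc class add dev {dev} parent 1: classid 1:{minor} "
            f"htb rate {rate_mbit}mbit")
```

A management system like the one described could render such strings per connection request and push them to the nodes on the tunnel path.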
In order to point out advantages and disadvantages of our solution, we compared its functionality to widely deployed mechanisms such as OpenFlow and Multiprotocol Label Switching (MPLS). To choose the best solution, we had analyzed the capabilities of MPLS, QinQ and Provider Backbone Bridging (PBB), along with suitable traffic shaping and policing techniques, before we decided to modify the solution described in [3] to meet our requirements. This solution uses VLANs and IPv6-in-IPv6 tunneling. MPLS was dismissed because its implementations in modern network equipment do not fully support IPv6. For a joint deployment of MPLS and IPv6, the protocol stack has to include the following elements: MPLS, IPv4, VPLS and then IPv6. Furthermore, MPLS does not solve the problem of two applications each needing an end-to-end connection between the same pair of hosts with different QoS requirements. On the other hand, MPLS was a strong candidate because it offers the simplest, fully automated configuration of end-to-end connections (it does not require our system to log in to each node on the path separately). QinQ seemed easy to implement due to the widespread support of this standard, but it has several limitations which are crucial in the context of our work. The most important disadvantage is the necessity to block communication between a host and the entire network while that host is connected to another host. This is an implication of the requirement that an end-to-end connection must be isolated, and we cannot require end hosts to implement QinQ. It is therefore impossible to fulfill these conditions and handle separate, concurrent end-to-end connections for multiple applications on a single host. Provider Backbone Bridging might perform well in the core network, but it is a Layer 2 protocol, so it is difficult to distinguish between multiple applications on a single host. Furthermore, PBB is a relatively new, advanced standard, unsupported by most of the devices available for our research.

A. Chosen virtualization method for isolated parallel networks

For creating parallel networks we used VLANs. End nodes are connected to specified VLANs and thus can communicate only within a defined part of the network. We assume that end nodes belong to a single virtual parallel network unless they have multiple network interfaces. End nodes in different networks may have a common physical gateway through which they can communicate. The end nodes can also be connected to external networks not managed by us, without QoS guarantees (such as GSM). The gateway in the above case must both support routed VLANs and run multiple virtual routers. Through VLAN tagging performed by the switches to which end nodes are directly connected, the traffic is forwarded only to the specified virtual router that acts as the end nodes' logical gateway inside the virtual network. The use of a physical router for serving a virtual network is possible when it has routed VLAN interfaces or deals with only one virtual network; it is not recommended except when it is impossible to tag packets that come to the router from end nodes. The virtualization of the first level of the core network looks similar.
The packets that belong to a particular network are sent from a logical gateway and are tagged by the physical node that hosts the logical router. In the next physical node analogous actions are performed: based on tags, received traffic is forwarded to the appropriate virtual router, which serves a particular network. In this way network isolation is assured and delegation of virtual network management becomes possible. We can delegate network management to external entities, such as clients that would buy one virtual network and have full control of the virtual routers in their network. It is also possible to create a section of the logical network (core network, edge routers and end nodes) on a single physical device by intentionally configuring virtual logical connections between virtual machines, with the use of a resource virtualizer. As can be seen in Fig. 1, we use this to create a testing network with only two physical hosts. When configuring the network, it is required to use a dynamic routing protocol (in our case OSPFv3) both between physical nodes and, within virtual networks, for logical nodes. It ensures full reachability of all the hosts and also communication between the management system and the nodes on all virtualization levels. The first-level virtual networks in the testing topology are built with the use of the VMware ESXi virtualization platform, a physical host with Mininet software, a Juniper EX4200 switch and two MikroTik RB800 platforms. ESXi served for virtualizing several access networks, end users' machines and the virtual management server. Mininet was used to create two virtual backbone networks made of routers running the Ubuntu operating system. The MikroTik devices were connected as physical end nodes, simulating users' computers and generating traffic for benchmarks.
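A first-level slice of this kind can be approximated on a stock Linux node. The sketch below is illustrative only (interface names, the VLAN id and addresses are invented); it attaches a tagged subinterface to a network namespace that plays the role of one parallel network's virtual router:

```shell
# Tagged subinterface for parallel network 100 on the trunk port
ip link add link eth0 name eth0.100 type vlan id 100
# A network namespace stands in for the virtual (logical) router
ip netns add vrouter100
ip link set eth0.100 netns vrouter100
ip netns exec vrouter100 ip link set lo up
ip netns exec vrouter100 ip link set eth0.100 up
ip netns exec vrouter100 ip -6 addr add 2001:db8:100::1/64 dev eth0.100
# Enable IPv6 forwarding inside the virtual router
ip netns exec vrouter100 sysctl -w net.ipv6.conf.all.forwarding=1
```

Because tagging happens outside the namespace, the virtual router only ever sees traffic of its own parallel network, which is the isolation property described above.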
The MikroTiks, through the Traffic Generator tool built into RouterOS, collected statistics on the transmitted/received data: packet loss rate and packet latency distribution. VLANs indicate which logical routers from the access networks (ESXi) are allowed to communicate with specific logical routers in the core network (Mininet). In order to control the Mininet host's incoming traffic, we had to modify the Mininet scripts and in this way create a properly defined virtual network with deliberate assignment of interfaces, addresses and connections. The Juniper EX4200 switch connects, physically (Gigabit Ethernet) and logically (VLANs), the ESXi machine and the host with Mininet. The testing topology consists of two parallel networks whose only common point is a management node, which must be able to communicate with both virtual networks. The topology is presented in Fig. 1.

II. NETWORK VIRTUALIZATION

A. Chosen method of end-to-end connection virtualization

End nodes operate in a specific parallel network, so they can communicate with all other devices in this parallel network. When they need to communicate, they do not start directly; instead, one node sends a request to the Execution Management server. The management system analyzes the QoS requirements of the new connection and the available network resources, and then creates a proper IPv6-in-IPv6 tunnel. At the end of this process the application is informed that the dedicated connection has been established and it can start transmission. Virtual connections of the second level are tunnels, which are dynamically created for each application use case. These tunnels are set up on edge routers, the gateway routers for end nodes inside parallel networks. We do not impose any requirements on end nodes (beyond those standard for a generic IPv6 network node), owing to the approach in which traffic is directed into the tunnel by a mechanism implemented entirely in the edge routers.
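One plausible Linux realization of such an edge-router mechanism, not necessarily the exact one used in the system, combines an ip6ip6 tunnel with firewall marking of the application's flow and policy routing. All names, addresses and ports below are examples:

```shell
# IPv6-in-IPv6 tunnel between the two edge routers
ip -6 tunnel add tun23 mode ip6ip6 \
  local 2001:db8:aaaa::1 remote 2001:db8:aaaa::2
ip link set tun23 up
ip -6 addr add 2001:db8:ffff::1/126 dev tun23
# Steer one application flow (source/destination address and port)
# into the tunnel: mark it, then route marked packets via table 23
ip6tables -t mangle -A PREROUTING -s 2001:db8:1::10 -d 2001:db8:2::20 \
  -p tcp --sport 5060 --dport 5060 -j MARK --set-mark 23
ip -6 rule add fwmark 23 table 23
ip -6 route add default dev tun23 table 23
```

The end host needs nothing beyond ordinary IPv6: classification and encapsulation happen entirely on the edge router, as described above.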
This mechanism is based on packet filtering on the following IPv6 and TCP header fields: source address, destination address, source port, destination port. It is important that a computer with multiple active IPv6 addresses uses for this communication only the IPv6 address that was included earlier in the connection-establishment request. In order to preserve the integrity of the network, the tunnel is configured on dedicated virtual interfaces, which are subinterfaces of loopbacks on the edge routers. The addressing scheme within tunnels is calculated from a /48 subnet divided into /126 subnets. The external addresses of the virtual interfaces use a reserved pool of routed addresses. It is worth mentioning that we used the potential of IPv6 addressing, which offers enough addresses to assign separate addresses to each parallel network.

Fig. 1. Testing topology

The external addresses of the tunnels are used to route the packets of each tunnel through its fixed path. Owing to this approach we are able to control the tunnel path easily, by applying static routing based on the destination IPv6 address only. This is a significant advantage, since we can use any generic router in the core network without causing high CPU utilization.

B. Limiting of network resources

The first part of network resource management was to limit the bandwidth of an entire parallel network. Besides assigning resources to particular networks, it was necessary to reserve some bandwidth for management traffic and OSPFv3 packets. We used the mechanisms built into the networking hardware to limit the bandwidth of an interface, and a software tool implementing the HTB algorithm, in which appropriate classes based on VLAN tags were created; we then associated bandwidth limits with these classes. An administrator of an isolated network is advised to use traffic shaping or policing mechanisms to isolate the class of traffic used for signaling. To achieve this, we propose a mechanism that classifies all traffic that is not an IPv6-in-IPv6 tunnel (by checking the corresponding IPv6 header field) and guarantees bandwidth for this class; the remaining traffic belongs to IPv6-in-IPv6 tunnels. Due to the use of dedicated virtual network interfaces in the edge routers, we can limit the bandwidth of a given tunnel simply with one classless tc qdisc discipline (we recommend the Token Bucket Filter, TBF) configured on this virtual tunnel interface. This approach does not require traffic classification. Centralized management of dedicated connections, in networks whose resources we know, guarantees that all connections have sufficient resources and thus QoS is ensured. If a new connection would exceed the capacity of the network, the system rejects the connection and informs the application about the reason.
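The proposed protection of the signaling class can be sketched with tc as follows. The interface name and rates are illustrative; the sketch relies on the fact that IPv6-in-IPv6 packets carry Next Header value 41:

```shell
# Root HTB on an edge-router egress; unclassified traffic (signaling,
# management, OSPFv3) falls into the guaranteed class 1:10 by default
tc qdisc add dev eth1 root handle 1: htb default 10
tc class add dev eth1 parent 1: classid 1:10 htb rate 10mbit  ceil 1000mbit
# Class 1:20 carries the IPv6-in-IPv6 tunnel traffic (the bulk of the link)
tc class add dev eth1 parent 1: classid 1:20 htb rate 990mbit ceil 1000mbit
# Match the IPv6 Next Header field: 41 = IPv6 encapsulation (ip6ip6)
tc filter add dev eth1 parent 1: protocol ipv6 u32 \
  match ip6 protocol 41 0xff flowid 1:20
```

Using the default class for signaling means no filter is needed for it: everything that is not recognized as a tunnel keeps its guaranteed rate.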
The possible variants of tunnel bandwidth limiting are briefly described in Section V.

III. MANAGEMENT SYSTEM

The management system used for testing was built especially for this purpose. It is made up of the Execution Management module, a database, and a supporting QoS module. The database stores, among other things, the state of the network, i.e. data concerning all the nodes and connections on all levels of virtualization; this is used to find an appropriate path for a new connection. The system receives requests via a web application (built for an administrator for network management purposes) or directly from applications via XML messages. For communication with the management system, a dedicated traffic class with guaranteed bandwidth for signaling is used. In order to make an application independent of the network management system, it is possible to use an intermediary layer; an example of such a deployment is described in [1]. After receiving a request, the system verifies it and sends a query for an eligible path to the module that finds a path satisfying the QoS requirements. The system then receives the reply and prepares scripts, which are subsequently sent in parallel to the network devices in order to configure the proper services. The connection path may be modified in the case of increased QoS requirements, or when a given connection has to be released because of a resource rearrangement. Such a situation occurs when a new connection cannot be established unless the resources already occupied by another connection are released (and that connection may be routed along another path that also satisfies its QoS). This process is fully automated and imperceptible for the user. The process has the following steps: creation of a new tunnel with new addresses and a new path, redirection of the traffic from the old tunnel to the new tunnel, and removal of the old tunnel. A connection may also be removed in response to a removal request from the application which used it. More about Execution Management and the cooperating modules can be found in [2].

Fig. 2. Execution Management engine performance 1
Fig. 3. Execution Management engine performance 2
Fig. 4. Execution Management engine performance 3

IV. PERFORMANCE EVALUATION

A. Execution Management performance

The performance of the management system is an important factor influencing QoE because it determines the time of connection establishment. Therefore, it was crucial to comply with the ITU-T recommendations [5]. Tests were performed to determine the behaviour of the system for a large number of queries and under different load (from 1 up to 100 requests). As expected, the system was stable and retained full data integrity, even in the case of multiple simultaneous requests for the creation, modification and removal of connections. Each series of tests was performed using 100 requests for tunnel creation, 100 requests for removal and 50 requests for modification. The test results are based on repeating each series (1, 10 or 100 requests) 50 times. Requests for testing were randomly generated in a network with 40 nodes. All time values are in milliseconds. Approximately 80% of the processing time in the case of creation or modification, and up to 92% of the removal time, is consumed by database operations. Although these numbers are significant, the result is much better compared to the previously used external database implementation, and it can probably be further improved by creating our own library for handling the database queries created by EM. The total execution time depends primarily on the state of the network rather than on the performance of the virtualization tools.
We suppose that in some cases the times were very high due to a temporarily high load of the Mininet host or the VMware server; in such situations the wait for an SSH connection to the nodes was dramatically lengthened. The tests exclude the situation when two requests require an SSH connection to the same node. In that case, the waiting time for a connection can almost double, due to the sequential handling of SSH connections by a device. If multiple connections to the same device are needed within a single request, the commands are combined into one connection. However, there is no such mechanism for multiple requests to the same node, because it would starve scripts from earlier requests by continually appending subsequent commands. It is not necessary to run tunnel removals in parallel, because the latency caused by the removal process is not significant; removals are therefore handled sequentially.

B. Performance of network mechanisms

This section concerns the performance of the implemented virtualized network environment. For testing we used the MikroTik Traffic Generator tool. This is an advanced tool built into RouterOS that allows evaluating the performance of a DUT (Device Under Test) or SUT (System Under Test). The tool can generate and send raw packets over specific ports. It also collects latency and jitter values and tx/rx rates, and counts lost packets. The goal of the first experiment was to examine the throughput of the MikroTik RB800. We built the scenario shown in Fig. 5. We used one packet stream to examine the maximal unidirectional throughput (half duplex) and two parallel packet streams to examine the maximal bidirectional throughput (full duplex). By maximal throughput we mean the maximal transmission rate at which the network works stably. The real maximal transmission rates were higher, but the connection was unstable: there was high packet loss and high latency.
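The dispatch of prepared scripts can be illustrated with a minimal shell sketch. The host names, user and script paths are hypothetical; the real system generates the per-node command files itself. The point is the two rules stated above: all commands for one node go through a single SSH session, and different nodes are configured in parallel:

```shell
#!/bin/sh
# Hypothetical per-request dispatch: one SSH session per node, in parallel
NODES="r1 r2 r5"
for node in $NODES; do
    # All commands destined for this node were concatenated into one file,
    # so only a single (sequentially handled) SSH session is needed
    ssh "admin@$node" "$(cat "scripts/$node.cmds")" &
done
wait    # the request is acknowledged once every node is configured
```

Running a second such request against an overlapping node set would queue behind the first node's SSH session, which is exactly the near-doubling of waiting time described above.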

Fig. 5. Topology for testing MikroTik RB800 and Juniper EX4200 switch

We arbitrarily assumed that the acceptable packet loss is 0.1%; transmissions with higher packet loss were not considered. The results are presented in Table I. The values are rounded down to the nearest 5 Mbps. Each test lasted 90 seconds. The subsequent test concerned the performance of the laboratory virtual networks. We used the network with two MikroTik RB800 platforms connected as shown in Fig. 6; the logical topology is presented in Fig. 7. We used two RB800 devices since a single RB800 could be a bottleneck in some cases. The results are presented in Table II. They show that the performance of the examined virtual network is comparable to the maximal throughput of the physical devices when the transmitted packets are large (1500 bytes, which equals the MTU size). The significant difference in the case of two-way traffic is caused by the fact that traffic flowing in one direction passes the physical link between ESXi and Debian twice; thus the real throughput on this link is doubled. In the case of small packets (100 bytes) the difference is much bigger. It is caused by the fact that this testing scenario involves much higher packets-per-second rates and each packet has to be served by each virtual router (7 times), so the CPU performance of the virtualizers is the bottleneck. The second part of the tests concerns the IPv6-in-IPv6 tunnel and the bandwidth limiting mechanism for this tunnel. The logical topology of this scenario is presented in Fig. 8; the gray, bold lines indicate the tunnel path. The routers R1 and R2 are the tunnel entry/exit points. Table III presents the test results. The maximal packet sizes used in these tests were smaller compared to the previous tests because the MTU of a tunnel is decreased by the additional IPv6 header (40 bytes). Nevertheless, the presented throughput does not include these additional 40 bytes of data. With the default configuration of IPv6-in-IPv6 tunneling in Debian 6, the displayed MTU of a tunnel interface is 1460 bytes but the real MTU is 1452 bytes. The missing 8 bytes were reserved for an encapsulation limit extension header, which was confusing since this extension header was not transmitted. We explicitly disabled encaplimit in order to transmit full 1460-byte packets. The "Transmission rate" column of Table III gives the fixed rate at which the MikroTik was sending data, "Tunnel bandwidth" the value of the tunnel bandwidth limit, and "Throughput" the rate at which packets were coming back to the MikroTik (after passing the tunnel). The results presented in Table III are comparable to those in Table II, which means that the mechanisms we implemented do not introduce significant overhead to packet processing. The drop in throughput is most noticeable for small packets, where it is about 12.5%.

Fig. 6. Physical topology for tests
Fig. 7. Logical topology for virtual environment tests
Fig. 8. Logical topology for tunnel testing

TABLE I. RESULTS OF PERFORMANCE TESTS OF MT RB800 AND JUNIPER EX4200
Direction                          | Packet size [bytes] | Throughput [Mb/s] | Latency
ether1 -> ether2                   |                     |                   | min: 109us avg: 6.5ms max: 11.5ms
ether1 -> ether2, ether2 -> ether1 |                     |                   | min: 32us avg: 255us max: 1.35ms
ether1 -> ether2                   |                     |                   | min: 23us avg: 62us max: 773us
ether1 -> ether2, ether2 -> ether1 |                     |                   | min: 21us avg: 64us max: 883us
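The encaplimit behaviour described above can be reproduced with iproute2 (tunnel name and addresses are examples):

```shell
# Default ip6ip6 tunnel: the encapsulation-limit option costs 8 bytes,
# so although MTU is reported as 1460, only 1452-byte payloads fit
ip -6 tunnel add tunA mode ip6ip6 local 2001:db8::1 remote 2001:db8::2
ip link show dev tunA
ip -6 tunnel del tunA
# Disabling the encapsulation limit restores the full 1460-byte payload
ip -6 tunnel add tunA mode ip6ip6 local 2001:db8::1 remote 2001:db8::2 \
  encaplimit none
```
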

TABLE II. RESULTS OF VIRTUAL NETWORKS PERFORMANCE TESTS
Paths                                                  | Packet size [bytes] | Throughput [Mb/s] | Latency
R1-MR1-MR5-MR12-MR7-MR2-R2                             |                     |                   | min: 471us avg: 1.9ms max: 15.2ms
R1-MR1-MR5-MR12-MR7-MR2-R2 and R2-MR2-MR7-MR12-MR5-MR1-R1 |                  |                   | min: 397us avg: 2.2ms max: 13.5ms
R1-MR1-MR5-MR12-MR7-MR2-R2                             |                     |                   | min: 107us avg: 253us max: 8.14ms
R1-MR1-MR5-MR12-MR7-MR2-R2 and R2-MR2-MR7-MR12-MR5-MR1-R1 |                  |                   | min: 110us avg: 354us max: 10.4ms

V. BANDWIDTH LIMITING METHODS

A. Limiting bandwidth of a virtual network

VLAN interfaces were chosen for the implementation of parallel networks, which determined the methods of bandwidth limiting that might be used. It was important to limit the bandwidth of the virtual networks from outside of these networks. This implies that virtual network administrators do not need to take any action to limit the bandwidth of their networks; because the limits are on a different level of virtualization, they are invisible to the administrators and cannot be exceeded. Ensuring bandwidths for VLANs was implemented by setting bandwidth limits on all the VLAN interfaces of a node. The specific implementation is equipment dependent, but almost every carrier-class switch or router is capable of performing this task and such an implementation is fairly easy.

B. Limiting bandwidth of an end-to-end connection

Recall that all the functions concerning QoS are performed on the edge routers, which are gateways for end users. This is true in our case, but the architecture of our system does not preclude the use of separate hardware traffic shapers, which is the case in many professional deployments. The guarantee of bandwidth for end-to-end connections in the described implementation is achieved by limiting the bandwidth of the IPv6-in-IPv6 tunnel interfaces. It is a mechanism similar to the above-mentioned mechanism for limiting the bandwidth of VLANs: in both cases we limit the bandwidth of a virtual interface.
Nonetheless, the actual implementation may vary significantly, because handling VLANs is usually performed by switches and IPv6 tunneling by routers, since most switches (even with L3 support) are not capable of IPv6-in-IPv6 tunneling. We focused on implementing the end-to-end connection bandwidth limit in the Debian 6 OS. We chose the tc tool for performing traffic control because it is very efficient and common; almost every traffic control application in Linux is based on tc. Indeed, tc is very powerful and has functions that meet our needs.

TABLE III. RESULTS OF TUNNEL PERFORMANCE TESTS
(rows correspond to the path R1-MR1-MR5-MR12-MR7-MR2-R2, one-way or together with the reverse path R2-MR2-MR7-MR12-MR5-MR1-R1)
Packet size [bytes] | Transmission rate [Mb/s] | Tunnel bandwidth [Mb/s] | Throughput [Mb/s] | Latency
                    |                          |                         |                   | min: 1s avg: 1s max: 1.06s
                    |                          |                         |                   | min: 494us avg: 1.2ms max: 9.4ms
                    |                          |                         |                   | min: 1s avg: 1s max: 1.03s
                    |                          |                         |                   | min: 353us avg: 4.5ms max: 19.4ms
                    |                          |                         |                   | min: 1.01s avg: 1.01s max: 1.01s
                    |                          |                         |                   | min: 137us avg: 514us max: 7.6ms
                    |                          |                         |                   | min: 1s avg: 1s max: 1.02s
                    |                          |                         |                   | min: 149us avg: 624us max: 36.2ms

We used the Token Bucket Filter (TBF) classless queuing discipline. It suits our requirements best because we do not need the hierarchical structure offered by classful disciplines (we have only one class per interface, without priorities). TBF is a processor-friendly algorithm, which is an important advantage for us because the tests showed that in some cases the performance of the virtualizer CPU is the bottleneck. These advantages make TBF the recommended qdisc for limiting the bandwidth of an entire interface. Here is an example configuration of the tunnel bandwidth limit in Debian 6 for a tunnel interface named tunnel; the rate parameter is our bandwidth limit.
tc qdisc add dev tunnel root tbf rate 2Mbit latency 1000ms burst 15000

Please note how tc interprets units: 1 Mbit = 1000 kbit = 1,000,000 bits per second. It is a common mistake to assume that a specified rate of 1 Mbit equals 1024*1024 bytes. The latency parameter limits the buffer of the algorithm: packets that would wait longer than 1000 ms are simply dropped. Instead of latency, it is possible to use the limit parameter, which gives the size of the buffer in bytes; these two parameters are mutually exclusive. The burst parameter is the size of the bucket, in bytes. We increased this value from the default 5000 to 15000 (ten times the MTU size) in order to let the algorithm handle high traffic rates (above 100 Mb/s). On the other hand, if the bucket is too large, the algorithm's precision degrades considerably; we chose the value by experiment. The only traffic control problem that we were not able to solve is the lack of precision at very high rates (especially above 500 Mb/s). It results from the fact that traffic control is performed by a CPU operating with non-zero time slots, which leads to a finite resolution of the traffic control mechanisms. The lack of precision is up to 5% of the declared rate at such values. In real applications this does not seem to have serious implications, because such large bandwidth rates are rarely reserved for a single end-to-end connection. Even in this situation, QoS is still ensured if the 5% inaccuracy of the mechanism is accounted for in the bandwidth allocation plan. If perfect precision at high transmission rates is needed, it is advisable to use a high-class hardware traffic shaper.

TABLE IV. PERFORMANCE OF OPENFLOW CONTROLLERS BASED ON [8]
Controller   | Average throughput [Mb/s] | Average latency [ms]
NOX fast     |                           |
NOX          |                           |
Maestro      |                           |
Mirage UNIX  |                           |
Mirage XEN   |                           |

VI. PERFORMANCE COMPARISON AND CONCLUSIONS

A. Comparison of the performance of the system with the performance of alternative solutions

We did not conduct our own experiments here; instead we used the results reported in [8] for OpenFlow and in [6] for MPLS. Regarding OpenFlow benchmarking, two factors are to be considered.
The former is the performance of the controller and the latter is the bandwidth of the switches. Table IV presents the performance of different controllers running in single-threaded mode on a highly efficient machine (a 16-core AMD processor, 40 GB of RAM, Debian Wheezy), which simultaneously hosted 16 switches with 100 unique MAC addresses each. The controllers were limited to single-threaded operation because the Mirage controller does not support operation on many cores simultaneously. As shown in the table, the performance of different controllers may vary widely, by up to 10 times. It means that in the worst case the system may not be able to handle a new flow that is directed to the controller, and hence will not ensure QoS. Execution Management does not have this issue, because it informs an application whether the connection may be established; the application may start to transmit only when a dedicated tunnel with guaranteed bandwidth is active. Because of this procedure, an end user waits a little longer to start, for example, a VoIP conversation, but once the connection is established, sufficient bandwidth for a VoIP transmission of the required quality is ensured. The second important dimension of performance is the throughput of the devices. When OpenFlow is used, instructions for dealing with a particular connection have to be conveyed to the controller, which requires middleware that communicates with the applications and the controller. In that case the throughput is not constrained by the switches: due to the hardware handling of traffic at a low implementation level, the performance of OpenFlow switches does not diverge from the performance of a generic hardware switch. Another issue is resource limiting. The newest OpenFlow version enables the use of shapers built into switches, which means that the mechanism is capable of limiting bandwidth according to requirements even for a large number of separate traffic flows.
Execution Management uses classless queuing disciplines of the TBF type to limit the bandwidth of tunnels. This works well for low bandwidth rates. When a large bandwidth is configured (more than 500 Mbps), it is possible that the mechanism accepts transmission at a rate higher than configured. The difference is on the order of several percent and depends on the configured bandwidth limit. The general rule is that the higher the limit, the bigger the inaccuracy, although slight anomalies may occur when experimenting with a narrow range of bandwidth values. We increased the burst parameter of the TBF discipline (using the tc qdisc tool) to 10 times the MTU size, which causes the real maximal bandwidth to be always equal to or higher than the configured value. It means that an application always has its requested bandwidth, and sometimes it may have slightly more bandwidth than requested. This difference (5% of the configured value) is included in the QoS arrangement of all the available bandwidth of the network. In the case of OpenFlow, the accuracy of bandwidth limiting depends on the accuracy of the traffic control mechanisms implemented in the particular device chosen by the controller, provided that there is a device on the path that implements traffic control mechanisms. The second mechanism chosen for comparison is MPLS. Here, a module that allows an application to communicate with a network management system in order to establish requested connections automatically is also lacking. Furthermore, MPLS has no means to distinguish between different applications running on a single host. On the other hand, the performance tests show the advantage of MPLS over other solutions: due to the hardware implementation of packet switching in the network nodes, the performance of an MPLS-based network is the same as the performance of a routed network (with hardware routers). For traffic management, the LDP and RSVP protocols may be used. By applying both protocols for path and bandwidth management it is possible to achieve our goals (limiting the bandwidth of end-to-end connections and determining the connection path), but it may cause significant packet loss. According to the research in [6], the use of the above-mentioned protocols causes a packet loss of 0.03% to 0.17%. For comparison, our solution does not influence the packet loss rate. Only in the extreme situation of switching large traffic (more than 100 Mb/s) from one tunnel to another may a loss of up to 16 packets occur. Such packet loss does not occur in each iteration (in most iterations no packet is lost). Even in this extreme situation, the packet loss during the second in which the tunnel path is changed is no more than 0.02%.

B. Conclusion

For the performance tests of virtual networks with resource guarantees, for dedicated connections or for entire isolated virtual networks, we used the Execution Management system for end-to-end connection establishment. We focused on existing solutions and techniques implemented in common network equipment. Owing to this fact, our solution can be deployed in almost any network, for example as a point of reference in benchmarking. Beyond testing scenarios, Execution Management is also suitable for business purposes: it allows selling multiple particular, limited network resources (access to virtual networks) of a single physical infrastructure. The performance of our system is slightly lower compared to low-level hardware-based mechanisms like MPLS and OpenFlow. The advantages of our system are its simple architecture and its capability of traffic engineering (in terms of path and bandwidth) in a heterogeneous environment made of generic equipment. The time needed to establish new connections is not excessively high and rises only slightly when a large number of requests is handled simultaneously.
Moreover, the larger the served network, the higher the probability that multiple new-connection requests will be handled faster, given a constant number of parallel requests. It is worth mentioning that Execution Management has several significant functions that are unavailable in competitive solutions, such as distinguishing multiple applications on a single host, creating many independent end-to-end connections between a pair of hosts, and full support for IPv6. Furthermore, the system comes with a convenient graphical web client, which may be used by the administrator or shared with the virtual network administrators. The presented system does not provide excellent performance in the case of a large number of connections with high QoS requirements. Nevertheless, despite its disadvantages, the system meets the assumed requirements and offers functions that other systems lack. On this ground, the system is suitable for reference testing and for the creation of laboratory content-aware networks.

REFERENCES
[1] Gawor Dawid, Klukowski Piotr, Petrecki Damian, Świątek Paweł: Prototyp systemu zdalnego monitoringu parametrów przyżyciowych człowieka w sieci IPv6 z gwarancją QoS. In: ICT Young 2012: [II Konferencja Studentów i Doktorantów Elektroniki, Telekomunikacji i Robotyki]: materiały konferencyjne, [Gdańsk, maja 2012]. Gdańsk: [Wydział Elektroniki, Telekomunikacji i Informatyki Politechniki Gdańskiej], pp. (in Polish)
[2] Petrecki Damian, Dabiński Bartłomiej, Świątek Paweł: Prototype of self-managing content aware network system focused on QoS assurance. In: Information systems architecture and technology [electronic document]: networks design and analysis / eds. Adam Grzech et al. Wrocław: Oficyna Wydawnicza Politechniki Wrocławskiej, pp.
[3] Chudzik Krzysztof, Kwiatkowski Jan, Nowak Kamil: Virtual networks with the IPv6 addressing in the Xen virtualization environment. In: Computer Networks / Andrzej Kwiecień, Piotr Gaj, Piotr Stera (eds.). Berlin; Heidelberg: Springer, pp.
[4] Bob Lantz, Brandon Heller, and Nick McKeown: A network in a laptop: rapid prototyping for software-defined networks. In: Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks (Hotnets-IX). ACM, New York, NY, USA, Article 19.
[5] ITU-T Recommendation E.271 (10/1976).
[6] Guangyi Liu (Dept. EE, Tsinghua Univ., Beijing), Xiaokang Lin: MPLS performance evaluation in backbone network. In: Communications, ICC, IEEE International Conference, vol. 2.
[7] C. Guo, G. Lu, H. J. Wang, S. Yang, C. Kong, P. Sun, W. Wu, and Y. Zhang: SecondNet: a data center network virtualization architecture with bandwidth guarantees. In: Proc. ACM CoNEXT.
[8] Charalampos Rotsos, Richard Mortier, Anil Madhavapeddy, Balraj Singh, Andrew W. Moore: Cost, Performance & Flexibility in OpenFlow: Pick Three. url: rmm/papers/pdf/iccsdn12-mirageof.pdf, retrieved
[9] Tarasiuk H. et al.: Architecture and mechanisms of Parallel Internet IPv6 QoS. Przegląd Telekomunikacyjny, Wiadomości Telekomunikacyjne, 2011, vol. 84, no. 8/9. pp. (in Polish)

A Combined Algorithm for Maximum Flow Trees Optimization in Overlay Multicast

Michał Kucharzak and Krzysztof Walkowiak
Department of Systems and Computer Networks, Wroclaw University of Technology, Poland
michal.kucharzak@pwr.wroc.pl, krzysztof.walkowiak@pwr.wroc.pl

Abstract: Recently, overlay networks have become a more and more popular approach to content delivery via the Internet. Moreover, many multicast applications based on such overlays have been proposed as efficient techniques for simultaneous stream distribution to a group of users. The Maximum Flow Trees (MFT) problem in overlay multicast networks is to find a streaming topology that embraces a given number of multicast spanning trees such that the total multicast stream is maximized. This paper focuses on solving the MFT problem in overlay multicast networks. We present an original Combined Algorithm that utilizes greedy search for constructing multicast spanning trees and linear programming techniques with the Fractional Spanning Trees packing problem (FST) to assign optimal flows to the trees. The algorithm is evaluated in relation to results yielded by the CPLEX solver, a greedy algorithm and random benchmarks.

Keywords: Optimization, Overlay Multicast, Heuristics

I. INTRODUCTION

Multicast defines a relation in which one source communicates with an arbitrary number of destinations in a single transmission, which is demanded in many streaming and multimedia applications. The architecture of multicast communications employs one set of packets directed to all the destinations and dedicated intermediate network elements which are responsible for data packet replication in order to achieve higher transmission performance. Multicast was initially designed and developed over two decades ago for Internet routing, and its first well-known implementation is referred to as IP Multicast [4].
This network-aware architecture implements multicast functionality in the IP layer; it not only requires a dedicated addressing scheme but also specialized routers. An alternative approach, which slightly revolutionizes multicast-based communication and tackles some problems of network-aware IP Multicast, is multicast defined for overlays (see Fig. 1). It comprises systems which realize multicast routing by forming virtual topologies on top of the network and transport layers. This virtual-oriented approach expands so-called end-system multicast [2], [3], [13], [16] and provides potential scalability and ease of deployment of new, network-layer-independent protocols at relatively low cost. In contrast to IP-based multicast, in overlay multicast data packets are replicated and forwarded by participating nodes instead of IP routers, and logical multicast flows are realized as multiple unicast flows at the network layer.

Fig. 1: Implementations of multicast communications: network layer multicast (left), overlay multicast (right).

Many differentiated concepts for the implementation of overlay multicast have already been proposed [1], [10], [11], [14], [15], [17]. Since overlay multicast may utilize the same network link multiple times, it is less efficient than IP multicast in terms of network redundancy. In this paper we focus on an overlay multicast system aimed at throughput maximization and the problem of Maximum Flow Trees (MFT) defined in our previous work [6]. To solve the problem we propose a technique that combines a greedy algorithm for building a topology of multicast trees with Fractional Spanning Trees packing (FST), a relaxation of the problem, in order to assign optimal flows to the constructed routing topology. We evaluate the time and result efficiency of the algorithm in comparison to linear-programming-based techniques provided by CPLEX, a greedy algorithm and random benchmarks.
Our new algorithm is at least as good as greedy search, and it may also improve optimization results in approximately the same computational time. The paper is organized as follows. The next section describes the problem of maximum flow assignment in overlay multicast systems with a single source of the content. It encompasses a formal mixed-integer definition of the problem with a polynomial number of variables and constraints. Section III presents heuristic algorithms that solve the problem: a greedy algorithm, random search and a new combined algorithm. The combined algorithm combines greedy search and linear techniques in order to improve the pure greedy algorithm defined in our prior work. Next, in Section IV, the effectiveness of the heuristics and the CPLEX solver is presented and discussed in terms of result quality and time efficiency. Finally, the paper is concluded in Section V.

II. MAXIMUM FLOW TREES PROBLEM DESCRIPTION

We consider an overlay multicast system with a single server and multiple receivers, where the achievable streaming rate needs to be sustained for all receivers in the session. In addition, we address the problem for multicast systems that are relatively static, i.e. all participants are known in advance and no unexpected joins or disconnections of nodes are possible. Even though one could imagine overlay-based multicast systems as rather dynamic in their behavior, some static variants can easily be distinguished: VoD or IPTV systems based on set-top boxes, where users who paid for such services are intended to remain in the multicast group; systems applied for the dissemination of critical information, which may comprise a well-known set of nodes, for instance a weather forecasting system with weather stations; software updates for a static set of servers; stock exchange data with defined market players; a traffic information system with a fixed topology and a defined number of receiving agents; e-lecture or teleconference systems, etc. Without loss of generality, such an assumption is also valid for quasi-static systems with relatively slow changes, where the time required for routing convergence is shorter than the time between topology changes; in case a new peer appears in or disappears from the system, a new multicast structure is assigned. Additionally, multicast routing is assigned in a centralized way, i.e. the multicast server is responsible for calculating multicast flows and propagating them to the other participants. We assume the multicast stream can be split into several separate substreams, and the substreams might be illustrated as separate spanning trees rooted in the same vertex (source). Such a concept ensures that the load in the network is balanced and user resources are utilized in a more effective way.
Moreover, multiple trees might satisfy some coding requirements or even provide reliability of the system. Eventually, the maximum flow trees problem defined for overlay systems describes multicast routing with the maximum achievable streaming rate among all overlay nodes and a single source, where overlay nodes have limited access link capacities.

A. Problem Formulation

The MFT problem is defined as a mixed-integer program and its complete description is widely discussed in [6]. The main goal of the problem is to create spanning trees rooted in the same source s for routing multicast substreams and, simultaneously, assign maximum non-negative flows to them. Let t be an index of a tree; then the set of variables r_t represents the flows routed on every tree t.

Maximum Flow Trees Problem (MFT)

indices
  i, j = 1, 2, ..., V: end hosts (peers, overlay system nodes)
  t = 1, 2, ..., T: trees, multicast substreams

constants
  s: source, root node, server
  u_i: available upload capacity of node i
  d_j: available download capacity of node j
  d_min: minimum download capacity among all receivers

variables
  r_t: flow assigned to tree t
  x_ijt: 1 if tree t contains arc (i, j); 0 otherwise (binary)
  f_ijt: conceptual flow on arc (i, j) in tree t (integral)
  y_ijt: flow assigned to overlay arc (i, j) in tree t

objective
  max_{r,x,f,y} Σ_t r_t                                        (1)

constraints
  Σ_{i: i≠j} x_ijt = 1                     ∀ j ≠ s, ∀ t        (2)
  Σ_{i: i≠j} f_ijt − Σ_{i: i∉{j,s}} f_jit = 1   ∀ j ≠ s, ∀ t   (3)
  f_ijt ≤ (V − 1) x_ijt                    ∀ i ≠ j, j ≠ s, ∀ t (4)
  y_ijt ≤ u_i x_ijt                        ∀ i ≠ j, j ≠ s, ∀ t (5)
  Σ_{j: j≠s} Σ_t y_ijt ≤ u_i               ∀ i                 (6)
  Σ_t r_t ≤ min{d_min, u_s}                                    (7)
  Σ_{i: i≠j} y_ijt = r_t                   ∀ j ≠ s, ∀ t        (8)

The overlay multicast flow (stream) is allowed to be split into at most T substreams that are realized on separate spanning trees. The main scope of the MFT formulation is twofold: it provides both a set of overlay multicast trees x_ijt and a streaming rate r_t assigned to each tree. The maximized throughput in the system is expressed by the objective (1). Equation (2) refers to the completion constraint and assures that each node except the root has exactly one predecessor in each tree t.
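To make the tree-structure constraints concrete, the sketch below (an illustration added here, not part of the paper; node labels are invented) checks that the arcs selected for one tree t, given as a parent map, form a directed spanning tree rooted at s: exactly one predecessor per non-root node, as in (2), and every node reachable from the root, which is the property the conceptual flows f_ijt enforce via (3).

```python
def is_spanning_tree(parent, s):
    """Check that `parent` encodes a directed spanning tree rooted at s.

    `parent` maps every non-root node to its single predecessor (the
    analogue of constraint (2)); the walk to the root checks the
    reachability that flow conservation (3) guarantees.
    """
    nodes = set(parent) | {s}
    if s in parent:                # the root must have no predecessor
        return False
    for v in parent:               # walk from each node up to the root
        seen = set()
        while v != s:
            if v in seen or v not in parent:
                return False       # cycle or dangling node
            seen.add(v)
            v = parent[v]
    return all(p in nodes for p in parent.values())

# A valid tree: s -> 1, 1 -> 2, 1 -> 3 (hypothetical instance)
print(is_spanning_tree({1: "s", 2: 1, 3: 1}, "s"))   # True
# A cycle 1 <-> 2 satisfies "one predecessor each" but is not a tree
print(is_spanning_tree({1: 2, 2: 1, 3: 1}, "s"))     # False
```

The second call shows why (2) alone is insufficient: it would admit disconnected cycles, which the reachability requirement rules out.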
Constraint (3) is derived from the flow-conservation concept and guarantees that the structure of fractional flow t is a directed spanning tree rooted at node s. The equation assures that exactly one logical inflow is conserved for every receiver j (j relays all flows but the one directed from s to j). Formula (4) bounds the logical flow variables f with the tree variables x. The set of inequalities expresses that arc (i, j) can be traversed by at most V − 1 logical flows (to all receivers), and only if tree t contains this arc. Having created T trees (a feasible x), formulas (5) and (6) act as upload capacity constraints. A non-zero flow y_ijt can be assigned to overlay link (i, j) in tree t if and only if the link is used in the tree (x_ijt = 1); in such a case, y_ijt cannot exceed the upload capacity limit u_i of node i. Additionally, the total throughput transmitted from node i among all trees is also subject to u_i. Eventually, each tree t is supposed to route r_t traffic flow in such a way that every link of tree t realizes the

same volume of flow. Since every receiver is guaranteed by (2) to have a single predecessor in each tree, constraints (8) ensure that each link of tree t carries flow r_t. Therefore, the total system stream carried by all multicast substreams equals Σ_t r_t. In order to provide a feasible throughput, Σ_t r_t cannot be greater than the minimum download capacity limit among all nodes nor the available upload capacity u_s of the source node (7).

III. ALGORITHMS

The performance of general linear programming techniques often cannot reach acceptable levels as the problem size grows. This section presents three heuristic algorithms for solving the Maximum Flow Trees problem.

A. Greedy Algorithm

The greedy algorithm was proposed in our prior work and its detailed description may be found in [6]. Herein, the algorithm is used as one of the benchmarks.

B. Combined Algorithm

The combined algorithm applies the greedy algorithm to create a set of multicast trees and additionally employs linear techniques on a relaxed version of the problem, Fractional Spanning Trees packing, which assigns optimal flows to the spanning multicast trees. Algorithm 1 illustrates a pseudocode of the Combined Algorithm. It takes as its input the number of overlay nodes V with the index of the source s, the tree limit T, and the capacity vectors u and d. At the beginning (line 2), the algorithm sets the remaining capacity limit u' to the available capacity u. Next, since all nodes are required to obtain the same multicast traffic, in line 3 variable d_min gets the minimum value of the download capacity limit among all receivers. The main loop in lines 4-12 then solves the maximum flow trees problem in a greedy way. Until T spanning trees are created or the remaining node capacities are exhausted, the algorithm creates trees and assigns the maximum achievable flow to them.
Algorithm 1 Combined Algorithm
1: procedure COMBINEDALGORITHM(V, T, u, d, s)
2:   u' ← u
3:   d_min ← min{d}
4:   while (|TREES| < T) ∧ (u'_s > 0) ∧ (d_min > 0) do
5:     t ← ConstructTree()
6:     TREES ← TREES ∪ {t}
7:     r_t ← GetMaxFlow(t)
8:     d_min ← d_min − r_t
9:     for all k ≠ s do
10:      u'_{P(k)} ← u'_{P(k)} − r_t
11:    end for
12:  end while
13:  r ← 0
14:  r ← SolveFST(V, T, u, d, TREES)
15:  return r
16: end procedure

In line 5 a new spanning tree is created by using the ConstructTree procedure. Next, in line 6, the set of fractional trees TREES is augmented with this new tree t and, in line 7, variable r_t is assigned the maximum possible flow (procedure GetMaxFlow). Lines 8-11 update the current system state: variable d_min is decreased by the already allocated r_t, and the inner loop, for every receiving node k, modifies the upload capacity limit of its parent P(k), where u' represents the remaining upload capacity of nodes. Note that the remaining capacities of nodes are decreased with respect to the number of their children in the tree. When T multicast trees have been created in a greedy way, the combined algorithm employs the FST linear program (line 14) in order to assign optimal flows to the trees. The FST problem is defined in the following way.

Fractional Spanning Tree Packing Problem (FST)

indices
  i, j = 1, 2, ..., V: overlay nodes
  t = 1, 2, ..., T: predefined trees

constants
  s: source, root node, server
  u_i: available upload capacity limit of node i
  d_j: available download capacity limit of node j
  β_ti: number of i's children in tree t; 0 if i is a leaf

variable
  r_t: flow assigned to tree t (continuous)

objective
  max_r Σ_t r_t                       (9)

constraints
  Σ_t β_ti r_t ≤ u_i      ∀ i         (10)
  Σ_t r_t ≤ min{d_min, u_s}           (11)

The objective function (9) maximizes the summarized flow of all fractional streams assigned to multicast spanning trees spread among V nodes with a single source of the content s. Next, node capacity constraints are introduced: each node i can neither contribute to the system beyond its upload capacity nor download more than its download limit.
Constraints (10) formulate the upload limit based on the available upload capacity u_i of every node in the system. Note that if node i is a parent node in tree t, it uploads content of size r_t exactly β_ti times. Therefore, i's total upload, given by Σ_t β_ti r_t, cannot exceed its upload limit u_i. The upload capacity limit is defined for every node in the system, therefore there are exactly V inequalities. By analogy to the upload bound (10), constraint (11) is introduced in order to guarantee that the download limits of nodes are not surpassed. In general, the minimum of d_min and u_s limits the overall flow packed into the multicast system. It is worth referring to some previous works on overlay multicast that employ the Fractional Spanning Trees linear formulation, e.g., see [7], [8], [9].
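Since the trees are fixed at this stage, FST (9)-(11) is a small linear program over the T variables r_t. The sketch below is our illustration, not the authors' implementation: it assumes SciPy's linprog is available, and the two trees, node labels and capacity values are invented. It derives β_ti as each node's child count in tree t and maximizes Σ_t r_t subject to (10)-(11).

```python
from collections import Counter
from scipy.optimize import linprog

def solve_fst(trees, u, s, d_min):
    """Solve the FST packing LP (9)-(11) for a fixed set of trees.

    trees : list of parent maps {child: parent}; beta[t][i] = #children
    u     : upload capacities per node; d_min : minimum download capacity
    """
    nodes = list(u)
    beta = [Counter(t.values()) for t in trees]     # children per node
    # Rows of (10): sum_t beta[t][i] * r_t <= u[i], one row per node i.
    A = [[b[i] for b in beta] for i in nodes]
    b_ub = [u[i] for i in nodes]
    # Row of (11): sum_t r_t <= min{d_min, u_s}.
    A.append([1.0] * len(trees))
    b_ub.append(min(d_min, u[s]))
    res = linprog(c=[-1.0] * len(trees), A_ub=A, b_ub=b_ub,
                  bounds=[(0, None)] * len(trees))  # maximize sum r_t
    return res.x, -res.fun

# Two invented trees over nodes {s, 1, 2, 3}:
#   tree 0: s->1, 1->2, 1->3      tree 1: s->2, 2->1, 2->3
trees = [{1: "s", 2: 1, 3: 1}, {2: "s", 1: 2, 3: 2}]
u = {"s": 10, 1: 12, 2: 5, 3: 8}
rates, total = solve_fst(trees, u, "s", d_min=20)
print(round(total, 6))   # 8.5
```

For these example trees, node 1 uploads twice in tree 0 and node 2 twice in tree 1, so r_0 ≤ 12/2 = 6 and r_1 ≤ 5/2 = 2.5, giving the total of 8.5.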

Algorithm 2 Construct Tree
1: procedure CONSTRUCTTREE(V, u', s)
2:   Sort RECEIVERS in descending order of u'_i
3:   TREE ← {s}
4:   for all k ∈ RECEIVERS do
5:     p ← i ∈ TREE: u'_i/(deg+(i) + 1) =
6:          max{u'/(deg+ + 1)}
7:     set p as parent of k
8:   end for
9:   return TREE
10: end procedure

The most important part of the combined algorithm is the logic of successive tree construction. We illustrate a pseudocode description and the principle of the ConstructTree procedure as Algorithm 2. At the beginning, receivers are sorted from high-upload node to low-upload node, i.e. u'_k ≥ u'_{k+1} for all consecutive k ∈ RECEIVERS, where u'_i represents the remaining available upload capacity of node i. Before nodes are joined to the tree, the set TREE contains only the source node s. Next, in lines 4-8, every node from RECEIVERS is attached to the node i of TREE with the highest remaining upload that can be provided (lines 5-6, where deg+(i) is the outdegree of i). This reflects the fact that, if a new node connects to i, node i would relay at most u'_i divided among all its children plus one in a single tree; hence every node joining the tree cannot receive more than the maximum value of u'_i/(deg+(i) + 1) among all i in TREE. In case more than one node satisfies this condition, a node different from the source s should be taken into consideration; this preserves greater upload capacity of s for the construction of the next tree. After joining a new node k to the tree, k is included in the set TREE. In Fig. 2 we illustrate an example of the ConstructTree procedure. Before any tree is created, u' equals u and only s is connected to the tree (Fig. 2a). Then, in Fig. 2b, node k = 1 with maximum u' is attached to s, and the utility based on u'_s updates to 10/2 = 5. In the next step, node k = 3 becomes a successor of 1, since u'_1/1 = 12 is the maximum value among all nodes in TREE, and consequently the utility of 1 changes to u'_1/2 = 6 (Fig. 2c). By the same rule, node 2 joins the tree by attaching to node 1, and the tree is created as shown in Fig. 2d.
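A runnable sketch of the ConstructTree rule follows (our illustration, not the paper's code; u'_s = 10 and u'_1 = 12 follow the Fig. 2 walk-through, while the capacities of nodes 2 and 3 are assumed values chosen so that the attachment order matches the example, and the remaining capacities u' are passed as u).

```python
def construct_tree(u, s):
    """Greedy tree construction: attach each receiver to the tree member
    offering the highest remaining upload per child; ties avoid s."""
    receivers = sorted((v for v in u if v != s),
                       key=lambda v: u[v], reverse=True)
    parent, outdeg = {}, {v: 0 for v in u}
    tree = [s]
    for k in receivers:
        # utility of i if k became one more child: u_i / (outdeg(i) + 1);
        # on ties the second key component prefers a non-source node,
        # preserving u_s for the construction of the next tree
        p = max(tree, key=lambda i: (u[i] / (outdeg[i] + 1), i != s))
        parent[k] = p
        outdeg[p] += 1
        tree.append(k)
    return parent

u = {"s": 10, 1: 12, 3: 4, 2: 3}      # values for nodes 2, 3 are assumed
print(construct_tree(u, "s"))          # {1: 's', 3: 1, 2: 1}
# With one child of s and two children of node 1, this tree carries at
# most min(10/1, 12/2) = 6, matching the Fig. 2 example.
```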
Considering the initial u' given before tree construction, the tree in Fig. 2d might carry a flow equal to at most 6 (we assume d_j ≥ u_s for all j ≠ s). The greedy approach assigns the maximum flow to every created tree; next, a new tree is built with respect to the updated u'.

C. Random Algorithm

The Random Algorithm is implemented in order to provide benchmarks representing the random nature of overlays. It randomly creates T multicast trees. Next, it employs the FST linear program (9)-(11) for optimal flow allocation in the same way as the combined algorithm in line 14 of Algorithm 1.

IV. RESULTS

Computational experiments were carried out on an Intel Core 2 Duo CPU with a 2.13 GHz clock and 4 GB RAM, running x64 Windows 7 Professional. All algorithms were implemented in C++ under Microsoft Visual Studio. Besides the heuristics, we use CPLEX 12.3 libraries (see, e.g., [5] for a former version) in order to provide additional benchmarks for the MFT mixed-integer problem and to solve the FST problem as a part of the combined algorithm. Execution time of the heuristics is obtained with the GetTickCount() function from the Windows.h library. We examine an overlay multicast system comprising nodes connected to the network using ADSLs (asymmetric digital subscriber lines). Nodes are proportionally distributed among the 10 Polish Internet Service Providers (ISPs) that guarantee the best average upload capacities, and the available upload and download capacities of the nodes are taken from recent SpeedTest-based reports provided by Net Index [12]. The values of the capacities are presented in Table I. Available upload capacities of nodes are distributed among values from 2.20 to … Mbps, where the source s belongs to an ISP that provides … Mbps. Download capacities are, depending on the ISP, from 5.67 to … Mbps. Since the total flow of the stream remains the same for all nodes, the upper bound of the maximum multicast flow equals 5.67 Mbps.

Fig. 2: An illustrative example of the ConstructTree procedure.

TABLE I: Average available download and upload capacities.
Provider | Upload (Mbps) | Download (Mbps)
Lanet Sp. z o.o. | … | …
Internetia Sp. z o.o. | … | …
Stream Communications Sp. z o.o. | … | …
GTS Poland Sp. z o.o. | … | …
UPC Polska | … | …
Exatel S.A. | … | …
ASTER Sp. z o.o. | … | …
TK Telekom | … | …
Telefonia Dialog Sp. z o.o. | … | …
INEA S.A. | … | …

The important

point to note about the capacity limit is that the values d and u may take into consideration some background traffic, so that they eventually represent the residual download and upload capacities of nodes.

TABLE II: Results. Flows (Mbps): V, T, MFT, uppbnd, Greedy (gap), Combined (gap), Random (gap, avg ± std). Computational time (s): MFT, Greedy, Combined, Random.

Table II reports flow optimization results for overlay systems comprising 10, 20, 30, 40 and 50 nodes (column V) and different available numbers of multicast trees (column T). The first part of the table refers to the obtained flows in Mbps, whereas the second shows the computational time required for solving the problem. Columns MFT report results for the CPLEX solver; Greedy, Combined and Random regard the respective heuristics. In order to obtain results from the CPLEX solver in reasonable time, we limited its computation time parameter to one hour; hence column uppbnd describes the best reached upper bound. It may easily be noticed that there are only 4 instances among 50 where the mixed-integer MFT was solved to optimality; moreover, this was possible only for the smallest instances with V = 10 nodes. As system sizes grow, the relative MIP gap, defined, in general, by the upper bound divided by the best feasible solution, also increases.
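The relative gap used throughout the discussion can be computed as below (a hedged sketch; CPLEX's exact internal gap definition may differ slightly, and the sample numbers are invented, not rows of Table II):

```python
def relative_gap(upper_bound, best_feasible):
    """Relative MIP gap: distance of the bound above the incumbent, in %."""
    return 100.0 * (upper_bound - best_feasible) / best_feasible

# e.g. a bound of 5.67 Mbps against a feasible flow of 5.10 Mbps
print(round(relative_gap(5.67, 5.10), 1))   # 11.2
```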
For V = 20 nodes, the distance between feasible solutions and upper bounds equals 10-13%; it grows to 18-22% for V = 30, reaches 22-38% for V = 40 nodes, and exceeds even 103% for instances embracing 50 nodes. Greedy-based algorithms are very efficient techniques for solving the MFT problem. For overlay systems with V = 10 nodes they yield about a 5.9% relative gap with respect to the feasible solution found by CPLEX for T = 2 (see columns gap), and reach even

less than 1% if more multicast trees may be used. The relative distance to the upper bound thus equals …% for the greedy algorithm and …% for the combined algorithm at V = 10. Starting from V = 20 and T = 3, the efficiency of the mixed-integer programs solved by CPLEX decreases, and both greedy approaches yield better multicast flow assignments, by even more than 3%, within fractions of a second, whereas the CPLEX solver works on the problem for at least one hour. In comparison to the upper bound, results fall below 3% and even 0.6% for the greedy and combined algorithm, respectively. If overlay multicast systems comprise V = 30 nodes, the greedy searches are at least as good as the MFT-designed linear techniques and, moreover, they provide more than an 18% result improvement for T = 10. In such cases, the relative distance to the upper bound is even less than 0.5% (greedy) and 0.2% (combined). Finally, for bigger systems the heuristics based on the greedy approach yield distinctly better results than CPLEX: they exceed CPLEX's results by about 3-33% (greedy) and 4-36% (combined) for V = 40, and reach 97% (greedy) or 101% (combined) for V = 50 nodes, which corresponds to about 3% and 1% relative distance to the upper bound for systems with 40 and 50 nodes. The results of the random algorithm refer to the best case among 100 independent tests for every instance; columns avg and std stand for the arithmetic average and standard deviation, respectively. Although the random-based approach is completely inefficient for the MFT problem, usually yields worse results than the other techniques, and even requires more computational time than the greedy and combined algorithms, it exhibits the level of untapped resources in random-nature overlays, where the routing topology is not optimized. In general, random results are worse than CPLEX by 13% for V = 10; 32% for V = 20; and 40%, 41% and 38% for 30, 40 and 50 nodes, respectively. V.
CONCLUDING REMARKS

This paper addressed the issue of optimal flow assignment in overlay multicast systems and proposed a combined algorithm for solving the MFT problem. The algorithm incorporates greedy search for multicast tree construction, where the trees represent fractional flows; next, it employs linear programming techniques and solves the FST model in order to allocate optimal flows. The combined algorithm was evaluated in relation to results yielded by the CPLEX solver, the pure greedy algorithm, and a random search that represents the random nature of overlay systems. The experiments showed that the combined algorithm is a very effective approach to tackling the MFT problem. It is at least as good as greedy search, and additionally it slightly improves the results for instances with more than 4-5 multicast trees. In general, the combined algorithm along with greedy search distinctly outperforms pure random search and even exceeds the results provided by the CPLEX solver for instances of 20 nodes and more. Both of the algorithms require fractions of a second to return a feasible solution, whereas the MFT-based CPLEX program remains inefficient in terms of computational time. However, it is worth noting that there is still some space to develop the combined algorithm, since its result for V = 10 nodes with tree limit T = 2 shows a gap of at least about 5.9% to the optimal solution.

REFERENCES

[1] B. Akbari, H. R. Rabiee, and M. Ghanbari, An optimal discrete rate allocation for overlay video multicasting, Computer Communications, vol. 31, no. 3.
[2] Y. Chu, S. G. Rao, and H. Zhang, A case for end system multicast, in Proceedings of ACM Sigmetrics, 2000.
[3] Y. Cui, Y. Xue, and K. Nahrstedt, Optimal resource allocation in overlay multicast, in Proc. of 11th International Conference on Network Protocols, ICNP.
[4] S. E. Deering and D. R. Cheriton, Multicast routing in datagram internetworks and extended LANs, ACM Transactions on Computer Systems, vol.
8.
[5] IBM ILOG CPLEX 12.1, User's Manual for CPLEX.
[6] M. Kucharzak and K. Walkowiak, Maximum flow trees in overlay multicast: Modeling and optimization, in Proc. 2nd Baltic Congress on Future Internet Communications (BCFIC), 2012.
[7] M. Kucharzak and K. Walkowiak, Fractional spanning tree packing problem with survivability constraints for throughput maximization in overlay multicast networks, in Proc. 3rd Int. Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2011.
[8] M. Kucharzak and K. Walkowiak, An improved annealing algorithm for throughput maximization in static overlay-based multicast systems, in HAIS (1), ser. Lecture Notes in Computer Science, E. Corchado, M. Kurzynski, and M. Wozniak, Eds. Springer, 2011.
[9] M. Kucharzak and K. Walkowiak, On modelling of fair throughput allocation in overlay multicast networks, in NEW2AN, ser. Lecture Notes in Computer Science, S. Balandin, Y. Koucheryavy, and H. Hu, Eds. Springer, 2011.
[10] L. Lao, J.-H. Cui, and M. Gerla, Multicast service overlay design, in Proc. of Second International Symposium on Wireless Communication Systems, ISWCS'05, Philadelphia, Pennsylvania, USA.
[11] B. Leuf, Peer to Peer: Collaboration and Sharing over the Internet. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc.
[12] Ookla, Net Index. [Online].
[13] A. Sentinelli, G. Marfia, M. Gerla, L. Kleinrock, and S. Tewari, Will IPTV ride the peer-to-peer stream? Peer-to-peer multimedia streaming, IEEE Communications Magazine, vol. 45, no. 6.
[14] S. Shi and J. S. Turner, Multicast routing and bandwidth dimensioning in overlay networks, IEEE Journal on Selected Areas in Communications, vol. 20.
[15] C. Wu and B. Li, On meeting P2P streaming bandwidth demand with limited supplies, in Proc. of the Fifteenth Annual SPIE/ACM International Conference on Multimedia Computing and Networking.
[16] G. Wu and T. Chiueh, Peer to peer file download and streaming, RPE report, TR-185.
[17] Y. Zhu and B.
Li, Overlay networks with linear capacity constraints, IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 2, 2008.

Incremental Approach to Overlay Network Design Problems

Maciej Szostak and Krzysztof Walkowiak
Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland

Abstract: The increase of the number of Internet users leaves no doubt that global consumer Internet traffic is going to grow rapidly. The expected annual growth in the years 2011-2016 is at the rate of 31%, from … PB per month in 2011 to over … PB per month in 2016 [1]. This contributes to the emergence of new approaches to content distribution which take into account investment in novel infrastructures, as well as the best exploitation of existing systems and physical infrastructure, which in turn leads to limiting bandwidth bottlenecks. Traditional IP systems are being substituted with overlay systems, which are more scalable, efficient and resilient. Based on previous works, in this paper we focus on the incremental approach to the overlay network design problem. Our objective is to minimize the cost of building the network, expressed in Euro/month. In numerical experiments we use real ISPs' price lists. To illustrate our approach we present results obtained by the CPLEX solver [2].

I. INTRODUCTION

A very important aspect of any kind of design, which should be taken into consideration at planning time, is expansion. This especially concerns network planning, due to the pace at which the number of users increases, as well as bandwidth-demanding services and applications. Newly created networks should incorporate such criteria as low cost of deployment, transport efficiency and fault tolerance. In this paper we focus on the first two factors. Keeping that in mind leads us to the following network increment scenarios: introducing more users to participate in the transmission, and the requirement for higher-capacity links for the existing system.
There are two approaches to the distribution of content (such as audio, video, etc.) over the Internet: streaming and downloading. Overlay multicast seems to be the best solution to meet all requirements without violation of the physical core [6]. In this paper we consider multimedia streaming, which not only does not flout the artists' copyright but also has a definite advantage over the Internet's major sharing mechanism, in which a user can access a file only when it has been fully downloaded. The video stream can increase or decrease its bit rate depending on the network infrastructure capabilities. This method is called Adaptive Bit Rate Streaming; however, it is designed to use unicast or anycast connections. The main reason for upgrading the network is the growing bandwidth of streaming services, e.g., users wanting higher quality of the video stream, which requires a higher bit rate. Thus, the existing network must be incremented (with higher link capacities) to answer this demand. In this paper we present both of the previously described incremental scenarios for the problem of network design in overlay multicast, which aims to minimize the cost of upgrading the system. We present only the basic mathematical model formulation for the considered scenarios, which can easily be modified to reflect real systems by introducing additional constraints, e.g. tree delay. We assume that the overlay multicast is applied to relatively static applications with a low membership change rate, e.g., videoconferencing, personal video broadcast in small groups, distance learning, collaborative workgroups, and delivery of important messages (stocks, weather forecasts, emergency alerts) [7]. This means that the participants of the overlay system are stably connected and there are no dynamic changes of the structure as in file-sharing P2P systems. The rest of the paper is organized as follows. The next section presents related work.
Section III presents the mathematical formulation of three incremental approaches to the overlay network design problem. In Section IV we present and discuss the results obtained from the CPLEX solver. Finally, the last section concludes this work.

II. RELATED WORK

There is an extensive literature on topology design, well covered in [13]. Also, since there is a growing need for applications that both stream real-time content and retrieve on-demand content, many surveys on application layer multicasting have been carried out [15]. Most of the approaches, however, focus on how to construct an overlay multicast topology in an optimal way ([4], [5], [8], [10], [18]). Only a few studies concern the incremental approach to network design ([3], [9], [11], [12], [13], [17]). The main aim of those studies is to propose algorithms for network design problems considering a number of different constraints and objectives. Both topology design and capacity increment coupled with routing changes are presented.

III. MULTICASTING OVERLAY NETWORK DESIGN PROBLEMS

We are going to develop a mathematical formulation based on [16] to model three extended problems for the Multicasting

Overlay Network Design Problem in the case of an incremental network, taking into consideration the following scenarios: system capacity increment; system capacity increment with new flow allocation; system capacity increment with new flow allocation and additional nodes. All of the models are formulated in the same manner. The network nodes are modeled using index v = 1, 2, ..., V. For each node, one of the candidate links k = 1, 2, ..., K_v is to be selected. The access link is used not only for streaming - the download and upload background traffic related to other network services (e.g., WWW, e-mail, FTP) are given and denoted as a_v and b_v, respectively. Our objective is to determine how much resource capacity is needed for each peer and how to economically distribute the streaming content in the overlay network using overlay multicast. The former goal comes down to the selection of the access link among link types offered by Internet Service Providers. For each access link type we know the download and upload capacity (denoted as d_vk and u_vk, respectively) and the cost (denoted by ξ_vk). The latter goal is to construct the overlay multicast trees, taking capacity constraints into account. Our model is an overlay tree distribution graph rooted at the source of the content, in which we assume division of the main stream into substreams. To assure different qualities of the end stream, multiple delivery trees t = 1, 2, ..., T are created, each tree carrying a different substream. This also prevents the establishment of leaf nodes, which do not contribute to the overall distribution. In the presented models, each node must be connected to all of the trees; however, this approach can be easily modified to consider a scenario where some nodes receive streaming of lower quality, i.e., nodes are connected only to a subset of the substream trees.
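The delivery trees just described can be sanity-checked programmatically: in every tree, each non-source node must have exactly one parent, and every node must be reachable from the root. A minimal sketch (function and data names are our own illustration, not from the paper):

```python
# Validate one overlay multicast delivery tree: exactly one parent per
# non-root node, no parent for the root, and full reachability from the
# root. Illustrative helper, not taken from the paper.
from collections import deque

def is_valid_tree(links, root, n_nodes):
    """links: set of (parent, child) pairs describing one multicast tree."""
    parents = {}
    for w, v in links:
        if v in parents:                 # a node may have only one parent
            return False
        parents[v] = w
    if root in parents or len(parents) != n_nodes - 1:
        return False                     # root has no parent; all others do
    children = {}
    for w, v in links:
        children.setdefault(w, []).append(v)
    seen, queue = {root}, deque([root])  # BFS: the root must reach everyone
    while queue:
        for child in children.get(queue.popleft(), []):
            seen.add(child)
            queue.append(child)
    return len(seen) == n_nodes

print(is_valid_tree({(0, 1), (0, 2), (1, 3)}, 0, 4))   # a valid tree
print(is_valid_tree({(0, 1), (2, 3)}, 0, 4))           # node 3 unreachable
```

The reachability pass also rejects disconnected cycles, which a parent count alone would miss.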
Overlay multicast networks are built on top of the general Internet unicast infrastructure rather than on point-to-point links; therefore the problem of overlay network design is somewhat different than in networks that do have their own links [14]. In this section, we present three new basic mathematical models of the incremental overlay network design problem. In the first scenario we consider incrementing the streaming rate of the system without any changes to the tree structure. Our objective is to minimize the cost of upgrading the access links of the nodes participating in the transmission. Moreover, as in real systems, it is not possible to downgrade an already chosen link type without breaking the agreement with the ISP and paying extra charges - only upgrades are allowed. Capacity Increment Model (CIM) The following type of decision variable is necessary to choose access link types for the participating nodes: y_vk, which equals 1 if node v is connected to the overlay network by a link of type k, and 0 otherwise.
given: existing tree structure, streaming rate of the content to be distributed, nodes' capacity requirements, pool of access link types, nodes' previously selected access link types
minimize: cost of upgrading the network
over: overlay multicast
subject to: capacity constraints, every node can choose only one access link type, new access link type capacity cannot be lower than the previously selected one

indices
v, w, e = 1, 2, ..., V   overlay nodes
k = 1, 2, ..., K_v   access link types for node v
t = 1, 2, ..., T   multicast trees

constants
a_v   download background transfer of node v (kbps)
b_v   upload background transfer of node v (kbps)
ξ_vk   cost of link type k for node v (Euro/Month)
d_vk   download capacity of link type k for node v (kbps)
u_vk   upload capacity of link type k for node v (kbps)
q_t   streaming rate of tree t (kbps)
t_vk   = 1 if node v was connected to the overlay network by a link of type k; 0 otherwise
C   cost of the existing network (Euro/Month)
z_wvt   = 1 if there exists a link between node w and node v in multicast tree t; 0 otherwise

variables
y_vk   = 1 if node v is connected to the overlay network by a link of type k; 0 otherwise

objective
min F = Σ_v Σ_k y_vk ξ_vk − C   (1)

constraints
Σ_k y_vk = 1   v = 1, ..., V   (2)
a_v + Σ_t q_t ≤ Σ_k y_vk d_vk   v = 1, ..., V   (3)
b_w + Σ_{v≠w} Σ_t z_wvt q_t ≤ Σ_k y_wk u_wk   w = 1, ..., V   (4)
Σ_k t_vk ξ_vk ≤ Σ_k y_vk ξ_vk   v = 1, ..., V   (5)

The objective (1) is to minimize the cost of upgrading the access links of the overlay P2P multicast network, taking into account the cost of the previous network that is upgraded. Constraint (2) assures that each node v = 1, 2, ..., V can have only one access link. Formula (3) is a download capacity constraint, i.e., the download capacity of node v = 1, 2, ..., V must be greater than or equal to the background traffic of node v plus the streaming rates of all the multicast trees. Analogously, (4) is the upload capacity
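The CIM formulation above can be exercised on a toy instance. The sketch below enumerates all link-type assignments by brute force instead of calling an ILP solver such as CPLEX; all data values are illustrative assumptions, not taken from the paper's experiments:

```python
# Brute-force solution of a tiny CIM instance: 3 nodes, 2 candidate link
# types per node, 2 multicast trees with a fixed structure rooted at node 0.
# All numbers are illustrative, not data from the paper.
from itertools import product

V, T = 3, 2
q  = [128, 128]                       # streaming rate per tree (kbps)
a  = [1500, 1200, 1700]               # download background traffic (kbps)
b  = [300, 280, 400]                  # upload background traffic (kbps)
xi = [[10, 20], [12, 25], [9, 18]]    # monthly cost of link type k at node v
d  = [[2048, 4096]] * 3               # download capacity of link type k (kbps)
u  = [[512, 1024]] * 3                # upload capacity of link type k (kbps)
t_prev = [[1, 0], [1, 0], [1, 0]]     # previously selected link types
# z[w][v][t] = 1 if node w feeds node v in tree t (node 0 feeds 1 and 2)
z = [[[0, 0], [1, 1], [1, 1]],
     [[0, 0], [0, 0], [0, 0]],
     [[0, 0], [0, 0], [0, 0]]]
C = sum(xi[v][k] * t_prev[v][k] for v in range(V) for k in range(2))

def cim_bruteforce():
    """Enumerate all link-type assignments; keep the cheapest feasible one."""
    best, best_y = None, None
    for y in product(range(2), repeat=V):        # y[v] = chosen link type k
        feasible = True
        for v in range(V):
            down_need = a[v] + sum(q)            # (3): all substreams downloaded
            up_need = b[v] + sum(z[v][w][t] * q[t]
                                 for w in range(V) for t in range(T))   # (4)
            prev_cost = sum(xi[v][k] * t_prev[v][k] for k in range(2))
            if (down_need > d[v][y[v]] or up_need > u[v][y[v]]
                    or xi[v][y[v]] < prev_cost): # (5): no downgrade
                feasible = False
                break
        if feasible:
            cost = sum(xi[v][y[v]] for v in range(V)) - C   # objective (1)
            if best is None or cost < best:
                best, best_y = cost, y
    return best, best_y

print(cim_bruteforce())   # upgrade cost and chosen link type per node
```

In this instance only node 0 must upgrade (its upload no longer covers its children's substreams), which is exactly the situation the CIM is meant to price.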

constraint of node w = 1, 2, ..., V: the upload capacity must cover the total upload transfer of w, which follows from the number of children nodes, the streaming rate and the background traffic. Condition (5) assures that there is no downgrade of the link type of existing nodes.

Capacity Increment Model with New Flow Allocation (CIMNFA) This scenario takes into consideration increasing capacity and new flow allocation for the whole system. The following additional types of decision variables are necessary to construct the multicast trees: x_wvet and x_wvt. Let x_wvet equal 1 if there is a path from the root node to node e that traverses the link between nodes w and v in tree t, and 0 otherwise. The second decision variable, x_wvt, equals 1 if there is a link from node w to node v (no other nodes in between) in multicast tree t, and 0 otherwise.

given: streaming rate of the content to be distributed, nodes' capacity requirements, pool of access link types, nodes' previously selected access link types
minimize: cost of upgrading the network
over: overlay multicast
subject to: capacity constraints, every node can choose only one access link type, new access link type capacity cannot be lower than the previously selected one

additional constants
r_v   = 1 if node v is the root of the tree; 0 otherwise

additional variables
x_wvet   = 1 if there is a path from the root node to node e and it traverses the link between nodes w and v in multicast tree t; 0 otherwise
x_wvt   = 1 if there is a link from node w to node v (no other peer nodes in between) in multicast tree t; 0 otherwise

objective
min F = Σ_v Σ_k y_vk ξ_vk − C   (6)

constraints (2) - (3) and
Σ_{w≠v} x_wvt = 1 − r_v   v = 1, ..., V; t = 1, ..., T   (7)
x_wvet ≤ x_wvt   w, v, e = 1, ..., V; w ≠ v; t = 1, ..., T   (8)
Σ_{w≠v} x_wvet − Σ_{w≠v} x_vwet = 1   if v = e   v, e = 1, ..., V; t = 1, ..., T   (9)
Σ_{w≠v} x_wvet − Σ_{w≠v} x_vwet = 0   if v ≠ e and r_v = 0   v, e = 1, ..., V; t = 1, ..., T   (10)
Σ_{w≠v} x_vwet − Σ_{w≠v} x_wvet = 1   if r_v = 1   v, e = 1, ..., V; t = 1, ..., T   (11)
b_w + Σ_{v≠w} Σ_t x_wvt q_t ≤ Σ_k y_wk u_wk   w = 1, ..., V   (12)
Σ_k t_vk ξ_vk ≤ Σ_k y_vk ξ_vk   v = 1, ..., V   (13)

The objective (6) is to minimize the cost of upgrading the network. To the already existing constraints (2) - (3) we add the additional constraints (7) - (13). Since for each tree t = 1, 2, ..., T each node v = 1, 2, ..., V - except for the source node of tree t (r_v = 1) - must have exactly one parent node, we introduce condition (7). This also means that each node downloads all of the substreams. Formula (8) guarantees that there is a path from the root node to node e traversing the link between nodes w and v only if this link exists. Conditions (9), (10) and (11) represent flow conservation control for a node being the destination node, a traversing node and the root, respectively. Constraint (12) is the upload capacity constraint of node w = 1, 2, ..., V: the upload capacity must cover the total upload transfer of w, which follows from the number of children nodes, the streaming rate and the background traffic. Constraint (13) assures that there is no downgrade of the link type of the nodes.

Capacity Increment Model with Additional Nodes and New Flow Allocation (CIMANNFA) This scenario considers introducing additional nodes and new flow allocation for the whole system. The following additional type of continuous decision variable is necessary to build the system: s_v, equal to the cost of upgrading node v.

given: streaming rate of the content to be distributed, nodes' capacity requirements, pool of access link types, nodes' previously selected access link types
minimize: cost of upgrading the network
over: overlay multicast
subject to: capacity constraints, every node can choose only one access link type, new access link type capacity cannot be lower than the previously selected one

additional indices

v, w, e = 1, 2, ..., W, W+1, W+2, ..., V   overlay nodes, where nodes 1, 2, ..., W are the existing nodes, and nodes W+1, W+2, ..., V are additional nodes
k, a = 1, 2, ..., K_v   access link types for node v

additional variables
s_v   the cost of upgrading node v

objective
min F = Σ_v s_v   (14)

constraints (2) - (3), (7) - (12) and
Σ_k t_vk ξ_vk ≤ Σ_k y_vk ξ_vk   v = 1, ..., W   (15)
Σ_k Σ_a y_vk t_va (ξ_vk − ξ_va) ≤ s_v   v = 1, ..., W   (16)
Σ_k y_vk ξ_vk ≤ s_v   v = W+1, W+2, ..., V   (17)

The objective (14) is to minimize the cost of upgrading the network. To the already existing constraints (2) - (3) and (7) - (12) we add the additional constraints (15) - (17). Constraint (15) assures that there is no downgrade of the link type of the nodes. Condition (16) accounts for the cost of upgrading the existing nodes, whereas constraint (17) accounts for the nodes that have been newly added.

IV. RESULTS The goal of the numerical experiments was to examine the performance of the three presented approaches against the traditional approach, with results obtained by the CPLEX solver. In all experiments we use DSL price lists of three ISPs operating in Poland (TP, Dialog and UPC - Table I) with prices in Euro/Month. To each node we randomly assign one of the ISPs, so that the access link can be chosen from the pool of available options.

[Table I - Internet Service Providers price list: columns ISP (TP, Dialog, UPC), Price [Euro/Month], Download limit [kbps], Upload limit [kbps].]

The values of download background traffic were selected at random in the range from 1024 to 2048 kbps. Analogously, the values of upload background traffic were selected at random in the range from 256 to 512 kbps.

[Table II - Traditional approach vs. capacity increment approach for 576p stream quality: columns V (number of nodes), T (number of trees), TA (traditional approach cost), IA (incremental approach cost), GAP.]

[Table III - CIMNFA upgrade cost from 320p stream quality: columns V (number of nodes), T (number of trees), IP (initial price for 320p), and upgrade cost to 480p, 576p and 720p.]
In order to solve the three proposed models optimally and to obtain computation times, we used the CPLEX 12.4 solver. Due to the complexity of the problem, we decided to test networks consisting of up to 25 overlay nodes (peers), in order to obtain close-to-optimal results in a reasonable time. The streaming rate in the examined system was divided proportionally into 1-4 substreams. We examined networks consisting of 5-25 nodes. The tests were run for a fixed root node location and selection of ISP. For our investigation we have considered four different streaming bit rates corresponding to four different qualities of the video stream: 320p, 480p, 576p and 720p (HD). To compute the streaming rate to be distributed, we have used the On2 VP6 video codec, NTSC frame rate, average motion, 16:9 aspect ratio, MP3 audio codec, stereo channels, medium audio quality and 44.1 kHz sampling rate. Due to the limitation of the maximal upload capacities delivered by Internet Service Providers (ISPs), we could not test a Full HD quality stream using the overlay multicasting approach. In total, 600 different scenarios were considered. We introduced a computation time limit of one hour; therefore, in some cases only a feasible solution instead of an optimal one was found. Note that in most experiments for the Capacity Increment Model, regardless of the number of nodes participating in the transmission, when changing the video quality from 320p to 576p or 720p the model was infeasible - this upgrade was impossible.

Table II shows a comparison of the cost of building the network using the traditional approach and the Capacity Increment Model. Column 1 represents the number of nodes in the system, column 2 shows the number of trees, columns 3 and 4 give the cost of building the network in Euro/Month obtained using the traditional approach and the incremental approach, respectively. Column 5 compares the two approaches. We analyze a video quality of 576p, 5-25 nodes, and 1-3 trees. For CIM, quality is upgraded from 480p to 576p. Using the CIM, the cost of building the network is 3% to over 15% higher than using the traditional approach. This is caused by the assumptions of this incremental approach: we model only access link types without changing the structure of the tree. As already stated, it is impossible to upgrade to HD video quality from low quality using CIM.

Table III shows experiments for CIMNFA. Column 1 represents the number of nodes in the system, column 2 shows the number of trees, and column 3 gives the initial cost of building the network for 320p stream quality. Columns 4-6 are related to the cost of incrementing to 480p, 576p and 720p, respectively. We show the upgrade cost for 5-25 nodes and 1-4 trees when upgrading from 320p stream quality. We can easily notice that there is only a small price difference when upgrading from low to the next higher stream quality. Incrementing to 576p costs on average about twice as much as incrementing to 480p; however, when we want to switch to HD, the cost is much higher - up to over eight times the cost of upgrading to 480p.

Both Table IV and Table V show CIMANNFA experimental results. Column 1 represents the number of nodes after the upgrade, column 2 gives the final video stream quality, column 3 relates to the increment price, whereas columns 4 and 5 describe the cost of building the network using the incremental approach and the traditional approach, respectively. Table IV analyzes 3 trees, an initial video quality of 320p and 5 initial nodes, whereas Table V concerns 10 initial nodes, 2 trees and an initial video quality of 480p.

[Table IV - CIMANNFA upgrade cost from 320p stream quality for an initial cost of 60 Euro/Month, 3 trees and 5 initial nodes: columns V (nodes after upgrade), VS (final stream quality), UP (upgrade price), IA (incremental approach cost), TA (traditional approach cost).]

[Table V - CIMANNFA upgrade cost from 480p stream quality for an initial cost of 116 Euro/Month, 2 trees and 10 initial nodes: same columns as Table IV.]

Surprisingly, the cost of upgrading the video quality using CIMANNFA is equal to the cost of the traditional approach. This is because the tree topology is recreated; however, we have to keep in mind that the experiments have been carried out for networks consisting of up to 25 nodes, and this pattern may change for larger networks.

V. CONCLUSION This paper has addressed the incremental approach to network design problems in overlay multicasting, and has provided three new models for further development and investigation. The objective of the optimization was to minimize the cost of upgrading the system. We have illustrated our approach by presenting results obtained using the CPLEX solver. In the numerical experiments we have shown that incrementing a streaming system is more expensive than building the structure from scratch. The Capacity Increment Model with Additional Nodes and New Flow Allocation, however, seems not to follow this rule. This can be caused by the small size of the systems, consisting of up to 25 nodes. Also, in most experiments for the Capacity Increment Model, changing the video quality from low to medium or HD was impossible. In our experiments we have taken into consideration a fixed root node ISP and parameters such as the number of multicast trees and the number of nodes participating in the system. We have investigated the cost difference for upgrading the system from a 5-node up to a 25-node network, with a change from low quality to HD quality video stream.

REFERENCES
[1] Cisco visual networking index: Forecast and methodology.
[2] ILOG CPLEX 12.4 user's manual.
[3] Analysis of optimization issues in multi-period DWDM network planning, ser. IEEE INFOCOM.
[4] On meeting P2P streaming bandwidth demand with limited supplies, ser. Proceedings of the Fifteenth Annual SPIE/ACM International Conference on Multimedia Computing and Networking.
[5] B. Akbari, H. Rabiee, and M. Ghanbari, An optimal discrete rate allocation for overlay video multicasting, Computer Communications, vol. 31.
[6] S. Banerjee, C. Kommareddy, K. Kar, B. Bhattacharjee, and S. Khuller, OMNI: An efficient overlay multicast infrastructure for real-time applications, Computer Networks: The International Journal of Computer and Telecommunications Networking, vol. 50, issue 6.
[7] J. Buford, H. Yu, and E. Lua, P2P Networking and Applications, D. Clark, Ed., The Morgan Kaufmann Series in Networking.
[8] Y. Cui, Y. Xue, and K. Nahrstedt, Optimal resource allocation in overlay multicast, IEEE Transactions on Parallel and Distributed Systems, vol. 17, no. 8.
[9] S. Gopal and K. Jain, On network augmentation, IEEE Transactions on Reliability, vol. 35, no. 5.
[10] M. Kwon and S. Fahmy, Path-aware overlay multicast, Computer Networks, vol. 47, no. 1.
[11] H. Lee and D. Dooly, Heuristic algorithms for the fiber optic network expansion problem, Telecommunication Systems, vol. 7.
[12] A. Meyerson, K. Munagala, and S. Plotkin, Designing networks incrementally, in 42nd IEEE Symposium on Foundations of Computer Science.
[13] M. Pioro and D. Medhi, Routing, Flow and Capacity Design in Communication and Computer Networks, D. Clark, Ed., The Morgan Kaufmann Series in Networking.
[14] S. Shi and J. Turner, Multicast routing and bandwidth dimensioning in overlay networks, IEEE Journal on Selected Areas in Communications, vol. 45, no. 8.
[15] T. Small, B. Li, and B. Liang, Outreach: Peer-to-peer topology construction towards minimized server bandwidth costs, IEEE Journal on Selected Areas in Communications, vol. 25, no. 1.
[16] M. Szostak and K. Walkowiak, Two approaches to network design problem for overlay multicast with limited tree delay - model and optimal results, International Journal of Electronics and Telecommunications, vol. 57, no. 3.
[17] A. Tero, S. Takagi, T. Saigusa, K. Ito, D. Bebber, M. Fricker, K. Yumiki, R. Kobayashi, and T. Nakagaki, Rules for biologically inspired adaptive network design, Science, vol. 327.
[18] Y. Zhu and B. Li, Overlay networks with linear capacity constraints, IEEE Transactions on Parallel and Distributed Systems, vol. 19, no. 2, 2008.

Enabling determination of packet scheduling algorithms in network processors

Maciej Demkiewicz, Andrzej Kozik, Radosław Rudek, Patryk Schauer, Paweł Świątek
Faculty of Computer Science and Management, Wrocław University of Technology, Wyb. Wyspiańskiego 27, Wrocław, Poland
{maciej.demkiewicz, andrzej.kozik, radoslaw.rudek, patryk.schauer,

Abstract The most recent approaches to developing modern computer networks focus on distinguishing many simultaneous isolated Parallel Internets sharing a common medium. Due to some requirements (e.g., following QoS), each Parallel Internet demands a dedicated time window for its processing, guaranteed by time-division multiplexing. Nevertheless, there is a lack of specialized network devices that allow such approaches to be implemented while keeping high packet processing efficiency. Thus, in this paper, we propose a solution that bridges the traffic processing capabilities of a network processor with the programming flexibility of the NetFPGA cards, concerning in particular time-division multiplexing. The experimental analysis confirmed the efficiency of the proposed solution, which allows packet scheduling algorithms other than those embedded in network processors to be determined (chosen).

I. INTRODUCTION The most recent approaches to developing modern computer networks focus on distinguishing many simultaneous isolated Parallel Internets (PI) sharing a common medium, e.g., best-effort delivery (BES), quality of service (QoS) (see [1], [2]). Due to some requirements (e.g., following QoS), each PI demands a dedicated time window for its processing, guaranteed by time-division multiplexing (TDM). One of the main problems in implementing such PIs is the lack of specialized network devices. Namely, we can use a fully programmable network processor NP-3 [3] offering integrated traffic managers, packet processing, advanced flow-based bandwidth control, etc.
Nevertheless, from the perspective of PIs it is crucial to note a limitation of the NP-3 caused by its traffic managers, where Deficit Round Robin (DRR) / Weighted Round Robin (WRR) are applied. Thus, it is impossible to implement TDM as may be assumed by one of the PIs (e.g., QoS). On the other hand, a device that is free of such limitations is the NetFPGA card [4]. It is a low-cost reconfigurable hardware platform optimized for high-speed networking. The NetFPGA includes all of the logic resources, memory, and Gigabit Ethernet interfaces necessary to build a complete network device. However, its resources are insufficient to build a device comparable with the NP-3 in aspects other than programming flexibility. Thus, in this paper, we propose a solution that bridges the traffic processing capabilities of the network processor NP-3 with the programming flexibility of the NetFPGA cards (i.e., TDM). The remainder of this paper is organized as follows. In the next section the considered problem is formulated. Next, the proposed solution is described, followed by an experimental analysis of its efficiency. Finally, the last section concludes the paper. II. PROBLEM DESCRIPTION There are four given Parallel Internets (PIs), denoted PI A, PI B, PI C, PI D. The assumed scheduler structure is depicted in Fig. 1. Within each PI, different classes of traffic are distinguished; they are processed in the L2 layer by shaping followed by the Weighted Round Robin (WRR) scheduling algorithm. The aggregated flows of the PIs are scheduled in the L1 layer by the L1 scheduler (i.e., TDM). Layer L2 is efficiently implemented in the NP-3 network processor, whereas layer L1 in the NP-3 is implemented only as WRR. This allows merging the PIs only statistically, but it does not satisfy the strict QoS requirements. On the other hand, the resources of the NetFPGA are sufficient to implement any reasonable scheduling algorithm. III.
THE PROPOSED SOLUTION In this section the proposed solution is described together with its physical (hardware) realization. A. NetFPGA The NetFPGA [4] is a low-cost reconfigurable hardware platform optimized for high-speed networking. The NetFPGA includes all of the logic resources, memory, and Gigabit Ethernet interfaces necessary to build a complete network device. The entire datapath is implemented in hardware, so the system can support back-to-back packets at full Gigabit line rates and has a processing latency measured in a few clock cycles. The NetFPGA includes user-programmable Field Programmable Gate Array (FPGA) logic: a Xilinx Virtex-II Pro 50, 53,136 logic cells, 4,176 Kbit block RAM, up to 738 Kbit distributed RAM, 2 x PowerPC cores,

[Fig. 1. A concept of the L1/L2 scheduler: each of PI A-PI D has an L2 scheduler (per-class shapers followed by a Weighted Round Robin scheduler and an output shaper), and the four aggregated PI flows feed the L1 scheduler at the PI output port.]

and 4 Gigabit Ethernet networking ports: interfaces with standard Cat5E or Cat6 copper network cables using a Broadcom PHY, and wire-speed processing on all ports at all times using FPGA logic (8 Gbps throughput). B. EZappliance EZappliance is equipped with a highly flexible network processor NP-3 offering integrated traffic managers, packet processing and advanced flow-based bandwidth control. It provides a variety of applications such as L2 switching, Q-in-Q, VPLS, MPLS and IPv4/IPv6 routing. Furthermore, the NP-3 is managed and controlled by a dedicated PowerPC CPU system with Linux. Since EZappliance has 30-Gigabit throughput (NP-3) and offers 24x1-Gigabit and 2x10-Gigabit Ethernet ports, it can be successfully used to implement an efficient network node. For more details see [3]. C. Realization The general concept of the proposed solution is presented in Fig. 2. It is composed of two subsystems: NP-3 (PI classification, processing, egress switching and L2 scheduler), embedded in the EZappliance device (see [3]), and NetFPGA (L1 scheduler). The 24 Gigabit interfaces of EZappliance are divided into two groups: Parallel Internet Ports (PIP i) and Parallel Internet Inner Ports (PIIP A i, PIIP B i, PIIP C i, PIIP D i) for i = 0, ..., 3. Each PI port, say ethi (i = 0, ..., 3), has four assigned PIIPs (i.e., PIIP A i, PIIP B i, PIIP C i, PIIP D i), forming the i-th Virtual Inner Port (VIP i). Note that each PIIP is designated to carry a particular Parallel Internet (PI A, PI B, PI C, PI D).
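The behaviour of the L1 scheduler of Fig. 1 - one queue per Parallel Internet, served in fixed time slots - can be sketched in software. The simulation below is our own illustration (class and parameter names are assumptions, and timing is simulated rather than hardware-driven):

```python
# A software sketch of a TDM L1 scheduler: one FIFO queue per Parallel
# Internet, and a fixed time slot during which only that queue may transmit.
# Illustrative simulation, not the NetFPGA implementation itself.
from collections import deque

class TdmScheduler:
    def __init__(self, n_ports, slot_us):
        self.queues = [deque() for _ in range(n_ports)]
        self.slot_us = slot_us

    def enqueue(self, port, frame):
        self.queues[port].append(frame)

    def run(self, total_us, frame_time_us):
        """Serve the queues in TDM order; return a list of (time, port, frame)."""
        sent, now, port = [], 0, 0
        slot_end = self.slot_us
        while now < total_us:
            if now >= slot_end:                  # switch to the next PI's slot
                port = (port + 1) % len(self.queues)
                slot_end += self.slot_us
            if self.queues[port]:                # idle if this queue is empty
                sent.append((now, port, self.queues[port].popleft()))
            now += frame_time_us                 # one frame transmission time
        return sent

# 4 PIs, a 450 us slot, 50 us per frame, 5 frames queued per PI
sched = TdmScheduler(n_ports=4, slot_us=450)
for p in range(4):
    for i in range(5):
        sched.enqueue(p, f"PI{p}-frame{i}")
out = sched.run(total_us=1800, frame_time_us=50)
print(out[:3])
```

With these illustrative numbers, each 450 µs slot has room for nine 50 µs frame times, so each PI's five queued frames drain in a burst at the start of its slot and the port then idles until the next slot.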
The network traffic that comes to EZappliance through a PIP is classified according to PI type, processed and switched to a PIIP depending on the destination address, and then passed through the L2 scheduler to a VIP. For instance, if packets of PI A incoming on eth0 have to be directed to eth3, then they are classified as PI A and switched to PIIP A 3. In the proposed solution, instead of merging the four PI streams statistically (by shaping followed by WRR), we redirect each PI stream through a separate PIIP and the corresponding VIP to the NetFPGA card. Each of the four input interfaces of the NetFPGA card is associated with a particular PI. This eliminates the need for packet classification within the NetFPGA, leaving its logic resources for other purposes. Thus, in the NetFPGA, the L1 layer scheduling algorithm (L1 scheduler) is implemented. The algorithm queues packets incoming from the input interfaces (one queue per port). Packets are taken from the input queues and scheduled to the output port in a TDM fashion. It is assumed that the first NetFPGA network interface (eth0) is the output port for the TDM scheduling algorithm. Next, the merged stream is routed back to the EZappliance through the first PIIP interface of the assigned VIP. All packets received by a VIP are sent, without any processing, to the corresponding PIP. In this way, an EZappliance and four NetFPGA cards can be used to implement a PI node with four ports serving four Parallel Internets. Moreover, the four free ports of EZappliance can be used for further purposes. IV. EXPERIMENTS The experimental analysis focuses on the resource requirements and time efficiency of the proposed solution. It was performed

[Fig. 2. A functional description of the implementation of the L1/L2 scheduler based on the NP-3 network processor and the NetFPGA cards: traffic entering PIP 0-PIP 3 is classified and switched in the NP-3, passed through the L2 schedulers to the PIIP/VIP interfaces, multiplexed by the L1 (TDM) scheduler in the NetFPGA, and routed back to the output PIPs.]

using an EZappliance device and two NetFPGA cards. To evaluate the time parameters of the proposed solution, we used a measurement infrastructure based on a DAG network capture card [5]. Namely, in the experiments we used the DAG 7.5G2 card [5] produced by Endace. This card has two SFP ports which support 10/100/1000 Mb/s Ethernet on copper and 1000 Mb/s Ethernet on fiber. The card communicates with the host through PCI Express x4, which allows 8 Gb/s of raw data communication. The card supports IP packet lengths from 48 bytes to 9600 bytes with 100% capture into host memory. The main advantage of this card, on which we based our measurements, is the speed of frame processing: frames can be both transmitted and received at the full link rate. Most important for our experiments was zero packet loss while receiving data. In comparison with a NIC (Network Interface Card), this card consumes a minimal amount of the host's computing resources while moving data from the network link to memory. The software provided with the DAG card allows generating, transmitting and capturing frames. All of these applications use ERF (Extensible Record Format) files. The generator allows preparing frames with the required header and payload, which, for example, can be used as a packet counter. In our experiments, the most important feature was the timestamp field of ERF records captured while receiving data.
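The ERF timestamp just mentioned is a 64-bit fixed-point value: the upper 32 bits hold whole seconds since 1 January 1970, and the lower 32 bits hold a binary fraction of a second. A minimal decoder sketch (the function name is our own):

```python
# Decode a 64-bit ERF timestamp into floating-point seconds since the epoch.
def erf_to_seconds(ts64):
    seconds = ts64 >> 32                     # upper 32 bits: whole seconds
    fraction = (ts64 & 0xFFFFFFFF) / 2**32   # lower 32 bits: binary fraction
    return seconds + fraction

# half a second past the epoch: the fraction field is 0x80000000 = 2**31
print(erf_to_seconds(0x80000000))
```

One unit of the fraction field is 2^-32 s (about 0.23 ns), far finer than frame inter-arrival times on a 1 Gb/s link.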
This is a 64-bit number representing the number of seconds since 1 January 1970, where the lowest 32 bits contain the binary fraction of the second. The precision of this number exceeds the frame arrival frequency on a 1 Gb/s link. The DAG card adds the time of frame arrival using its own clock (a crystal oscillator). This mechanism decouples the timestamp value from the host's resources and performance. If we had used a NIC, processing time would have distorted the results of our tests: a frame would be processed by the operating system, and the recorded timestamp would come from the host clock. The transmitter software from DAG allows tuning some parameters of the transmitted stream. In our tests we also used the inter-burst delay. This parameter sets the time space between the sending of consecutive frames; it can be set with microsecond accuracy. The DAG card allowed us to send, capture and observe frames. All of these operations were done with high time precision, which allowed us to determine the time characteristics of the examined network architecture. A. Experiment 1 We made two regular tests. The difference between the two tests was the delay value. In each test, consecutive frames were transmitted with a constant, previously defined delay. We used two streams: a 2 µs delay in the first test and a 30 µs delay in the second one. Figures 3 and 4 show graphs of the reception times of consecutive frames, where two timeslots are shown. In the multiplexer module the timeslot value was set to 450 µs for both tests. We can observe that in both cases the first few frames in each slot came in a burst, and the following frames came with constant spacing. This behavior is connected with the queues in the NetFPGA. After the time gap, frames from the queue are transmitted as fast as possible; subsequent frames come at regular intervals. Obviously, in both tests the observed timeslots have the same length up to the delay value. If frames are transmitted faster, there are more frames in one timeslot. For the 2 µs delay, about nine frames per timeslot are received; for the 30 µs delay there are only seven frames. B.
Experiment 2 In this experiment we used flood traffic: frames were transmitted without any delay. We made two tests, in which we changed the timeslot configured in the multiplexer module. The first measurement was captured for a 50 µs timeslot value, the second one for 120 µs. Figures 5 and 6 show graphs of the reception times of consecutive frames. Despite the different timeslot values configured in the multiplexer module, in both tests the number of frames and the length of the real timeslot are equal. In both cases there are about 47 frames in one timeslot. This behavior is connected with the transmit queue capacity and the processing time. The equal

quantity of observed frames demonstrates the capacity of the output queue. We used only one generator, and thus all frames came from one multiplexer input. When the output queue fills up, due to the processing time in the NetFPGA it drains more slowly than the full link speed requires. Therefore, the following frames have to wait in the input queue or they are dropped. Furthermore, if the input transmission is stopped because of a full queue or a timeslot end, frames are still transmitted by the output port. It was observed that a timeslot ends when the output queue is empty and before the new timeslot. If we set a short timeslot in the multiplexer module, the output queue has less time to empty, and thus the observed space on the graph is shorter.

[Fig. 3. Frame receiving times - 2 µs delay.]
[Fig. 4. Frame receiving times - 30 µs delay.]
[Fig. 5. Frame receiving times - flood, timeslot 50 µs.]
[Fig. 6. Frame receiving times - flood, timeslot 120 µs.]

V. CONCLUSION In this paper, we proposed a solution that bridges the traffic processing capabilities of the network processor with the programming flexibility of the NetFPGA cards, concerning in particular time-division multiplexing. The experimental analysis confirmed the efficiency of the proposed solution, which allows packet scheduling algorithms other than those embedded in network processors to be determined (chosen). Furthermore, the proposed solution also allows us to implement the Parallel Internet nodes described in [2]. Our future research will focus on implementing the proposed device using the NP-3 processor and FPGA cards. Furthermore, we will implement different packet scheduling algorithms (layer L1) and analyse their impact on traffic parameters.

ACKNOWLEDGMENT The research presented in this paper has been partially supported by the European Union within the European Regional Development Fund program no. POIG /09.

REFERENCES
[1] T. Anderson, L. Peterson, S. Shenker, and J. Turner, Overcoming the internet impasse through virtualization, Computer, vol. 38(4).
[2] W. Burakowski et al., The IIP system: architecture, parallel Internets, virtualization and applications, in Future Network and Mobile Summit Conference, Warszawa, June.
[3] NP-3 Network Processor, Architecture and Instruction Set User Manual, Document Version 3.6.
[4] Specification of the NetFPGA, 2012. [Online].
[5] Endace, 2012. [Online].

Quality Aware Virtual Service Delivery System

Mariusz Fras and Jan Kwiatkowski
Institute of Informatics, Wrocław University of Technology, Wrocław, Poland

Abstract: The problem of providing support for quality of service (QoS) guarantees is studied in many areas of information technology. In recent years the evolution of software architectures has led to the rising prominence of the Service Oriented Architecture (SOA) concept. For Web-based systems there are three attributes that directly relate to the end user's everyday perception of QoS: availability, usability, and performance. The paper focuses on performance issues of service delivery. The architecture of the Virtual Service Delivery System (VSDS), a tool to serve requests for synchronous services, is presented. A suitable monitoring technique is proposed, used for estimating the values of service parameters and for allocating communication and execution resources by means of service distribution. The paper also presents results of experiments performed in a real environment that show the effectiveness of the proposed solutions.

I. INTRODUCTION

For Web-based systems there are three attributes that directly relate to the everyday perception of the quality of service for the end user: availability, usability, and performance. The performance issues of information systems are widely explored in different contexts. For SOA-based systems, solutions concerning the quality of services have generally been developed in the context of Web services, usually proposing useful standards for quality of service mechanisms, such as WS-Policy [2] and WSLA [5]. To support quality of service delivery, some service selection algorithms have also been proposed. For example, in [11] service selection based on a utility function over attributes assigned to services (such as price, availability, reliability, and response time) has been proposed.
However, many of these works assume that the values of service parameters do not change dynamically. On the other hand, the quality of services can be considered in the context of the quality of resource utilization. Among others, virtualization is already being used as a common and proven way to decrease overall hardware needs and costs; still, hardware utilization is around 20% and storage utilization does not exceed 60% [8]. Using virtualization gives very promising results but, as stated in [10], it is still not enough: virtualization has stalled and is not pushing forward. Mission-critical services are used as before due to easier maintenance, controlling, and monitoring. What is more, reduced budgets have made real virtualization adoption much more complicated since, especially at the beginning, the costs of implementation are higher than those of keeping everything as it is. In this paper the Virtual Service Delivery System (VSDS), a tool for monitoring and efficient allocation of communication and execution resources to serve requests for synchronous services, is presented. The service requests are examined in accordance with the SOA request description model. The functional and non-functional requirements, in conjunction with monitoring of the execution of services and communication link performance data, are used for request distribution and resource allocation. At the lower layer, virtualization is used to control efficient resource allocation to satisfy service requests. The paper is organized as follows. Section 2 briefly describes the main ideas used while designing and developing the Virtual Service Delivery System presented in the paper. The architecture and functionalities of the Virtual Server Manager are presented in Section 3. In Section 4 the main service quality issues are discussed. Section 5 describes how service monitoring and evaluation is done by the Broker and the Virtual Server Manager.
In the next section the results of the first experiments with the implemented system are presented. Finally, Section 7 summarizes the work and discusses further research.

II. THE CONCEPT OF QUALITY AWARE SERVICE DELIVERY

The concept of an effective and quality-aware infrastructure is based on the idea of a Virtual Service Delivery System (VSDS) capable of handling clients' requests taking into account service instance non-functional parameters. The main components of the system are the network service broker (further called the Broker) and the Virtual Server Manager (VSM). They are built as components of a SOA. The main assumptions for the operation of both modules are:
- the Broker delivers to clients the set of J services (so-called atomic services) as_j, j ∈ [1, J],
- the Broker knows the execution systems es_m, m ∈ [1, M], where real services (service instances) are available,
- the Broker monitors the execution of clients' requests and collects the monitoring data,
- the Broker acts as a service proxy: it hides the real service instances and distributes clients' requests for services to the proper instances according to some distribution policy,
- the Virtual Server Manager is responsible for the creation of service instances,
- the Virtual Server Manager is responsible for service execution,
- the Virtual Server Manager offers access to hypervisor actions independently of the virtualization system used, by means of the libvirt toolkit,

- the Virtual Server Manager offers information about particular physical servers as well as about running virtual service instances.

[Fig. 1. The layers of the Virtual Service Delivery System]

The Broker implements the Virtual Service Layer (Fig. 1). The Virtual Service Layer (VSL) virtualizes the real services available on the service execution systems (servers). The services are hidden from the client's point of view: the client deals with a virtual service VS_j, which is an atomic service as_j mapped to service instances is_j,m on known servers es_m. The client C calls the Broker for a service, and the Broker distributes the request to one chosen service instance.

At the Virtual Resource Layer (VRL) the management of the available execution systems is performed at the lowest level. The Virtual Server Manager (VSM), which implements the VRL, is responsible for the efficient allocation of execution resources to services using increasingly popular virtualization techniques [7]. There are two aims of using the VRL, which can be, and in most cases will be, mutually exclusive. First, the manager shall provision the service instances with proper resources to ensure the fulfilment of the requirements of the requested service. Second, it shall increase the utilization of the available resources, so that the overall capacity is used to the highest possible degree. Managing the resources can be described in three distinct steps: provisioning the resources, adjusting them, and freeing them. The largest difference between the VSM and other similar solutions comes from the different targets behind our proposition: while most other solutions are strictly devoted to managing the infrastructure, the VSM is devoted to properly dispatching the requests, placing the virtualization management in second place.
Nonetheless, one can point to a number of similarities, starting from the common modular architecture with the possibility to customize the software easily. Furthermore, just like in other solutions, libvirt is used to overcome the problem of communication with various hypervisors. The role of the VSM as a dispatcher means that some of the usual functionalities are redundant; among these one may put offering an Amazon-compatible API, billing integration, a number of control panels, and so on. On the other hand, the functionality is extended to understand SOAP messages, identify which services are capable of performing them, and finally run those services and dispatch the requests. Analogous software is found as an addition on top of OpenNebula and offers service orchestration and deployment or service management as a whole [9].

Both layers are defined as the tuple <ES, CL, AS, IS>. ES = {es_1, ..., es_m, ..., es_M} is the set of execution systems es_m, where m ∈ [1, M] and M is the number of execution systems. The execution systems can be placed at different geographic locations. CL = {cl_1, ..., cl_m, ..., cl_M} is the set of communication links cl_m from the Broker to the execution systems. The Broker delivers the set of J atomic services AS = {as_1, ..., as_j, ..., as_J}. Each atomic service as_j available at the Broker is mapped to one or more instances that form the instance subset IS_j. Instances of a given atomic service can be located at different execution systems. IS = {IS_1, ..., IS_j, ..., IS_J} is the set of all instances of services, where IS_j is the subset of instances of service as_j, is_j,m is the instance of the j-th service as_j located in the m-th execution system, and M_j is the number of instances of the j-th service, M_j ≤ M.
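As a side illustration, the tuple <ES, CL, AS, IS> described above can be sketched as plain data structures. This is a hypothetical sketch in Python (the actual Broker is implemented in Java, as noted later in the paper); all names and values here are invented.

```python
from dataclasses import dataclass, field

@dataclass
class ExecutionSystem:          # es_m
    name: str
    location: str

@dataclass
class CommunicationLink:        # cl_m, from the Broker to es_m
    latency_ms: float
    throughput_mbps: float

@dataclass
class ServiceInstance:          # is_j,m with its non-functional parameters
    service_id: int             # j, the atomic service it implements
    system: ExecutionSystem     # es_m hosting the instance
    params: dict = field(default_factory=dict)  # e.g. {"price": ..., "exec_time": ...}

# ES, CL, AS and IS_j built from invented sample data:
ES = [ExecutionSystem("es_1", "Finland"), ExecutionSystem("es_2", "Poland")]
CL = {es.name: CommunicationLink(latency_ms=25.0, throughput_mbps=100.0) for es in ES}
AS = [1, 2]                     # atomic service identifiers as_j
IS = {j: [ServiceInstance(j, es) for es in ES] for j in AS}

# An atomic service has no more instances than there are execution systems.
assert len(IS[1]) <= len(ES)
```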
The instances of a given atomic service are functionally identical and differ only in the values of their non-functional parameters ψ(is_j,m) = {ψ^1_j,m, ..., ψ^f_j,m, ..., ψ^F_j,m}, where ψ^f_j,m is the f-th non-functional parameter of the m-th instance of the j-th atomic service. Two kinds of service parameters may be distinguished: static parameters, constant over a long period of time (e.g. the service price), and dynamic parameters, variable over a short period of time (e.g. the completion time of the execution of the service instance). From the client's point of view a very important service parameter is the response time, which is usually quite variable. In the network environment it actually consists of two components: the data transfer time and the execution time on the processing server (later called the execution time for short).

III. THE VIRTUAL SERVER MANAGER

The Virtual Server Manager is built as a component of a SOA. The architecture of the VSM is very flexible and gives the opportunity to compose processes from publicly or privately available services. The VSM offers two interfaces for interacting with the virtualized environment. One is XML-RPC based and is used mainly for communication between internal VSM modules. More important is the possibility to direct SOAP calls to services to be handled by the VSM. Each and every request is then redirected to the proper service instance based on the requirements it carries. The proper instance is either found among the running and available ones, or a new one is started to serve the request. Such an approach makes it possible to manage the virtualization automatically with minimal manual interaction. The architecture of the VSM is presented in Fig. 2; as for now, it comprises a number of independent modules offering XML-RPC interfaces to interact with them.
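The XML-RPC style of communication between internal modules can be sketched with the Python standard library. The method name "cpu_usage" and the reply format below are invented for illustration; they are not taken from the VSM implementation.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# One internal module (say, a Monitoring Unit stand-in) exposes a method
# over XML-RPC; port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda host: {"host": host, "cpu": 0.42}, "cpu_usage")
host, port = server.server_address

thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# Another module (e.g. the Manager) queries it through a proxy object:
proxy = ServerProxy(f"http://{host}:{port}")
reply = proxy.cpu_usage("es_1")
print(reply)   # {'host': 'es_1', 'cpu': 0.42}

server.shutdown()
```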

[Fig. 2. The architecture of the Virtual Server Manager]

- Manager - manages all other modules and routes the requests to the services,
- Virtualization Unit - offers access to hypervisor actions; it uses libvirt to execute commands, which gives independence from the particular hypervisor,
- Database - module used to store monitoring data, images of available services (capsules), and information about the available execution systems,
- Monitoring Unit - offers information about particular physical servers (execution systems) as well as about the available virtual service instances,
- Matchmaker - module responsible for properly matching the requirements of the request with the capabilities and current state of the environment.

Each SOAP request coming to the system is directed to the Manager module, which extracts the requirements passed in the header section of the message in order to handle the request properly. The process of request handling starts with a SOAP message coming from outside through the Broker, which is the actor initiating the service execution process. This is in accordance with the general idea of placing the VSM inside a Service Oriented Architecture, where the Broker is the common module for such a purpose. The Manager module plays the role of a gateway to the system and hides all of the heavy lifting from the outside world. It extracts the requirements passed in the header section of the SOAP message and converts them to a simple text form which is used by the Matchmaker module. Handling the request can lead to one of three situations. There may be a running service instance which can perform it; it will then be returned as the target to which the SOAP request shall be forwarded. There may be no running service instance, but an image which satisfies the conditions.
In such a case the image will be instantiated and used to perform the request. The last possibility is the lack of both a proper service and an image, in which case an error is raised and finally returned to the client as a SOAP Fault message. As already mentioned, the virtualization management is based on the open source libvirt toolkit. It offers a virtualization API supporting a number of the most popular hypervisors. From the point of view of this paper it is less important how the management is performed technically; it is more important to note what the capabilities of the management are and how it is understood here. Virtualization management does not simply mean starting or stopping a virtual machine; it is a much more complex problem. The complexity comes first of all from the decision making problem. First, the correct service instance or service image should be found, by means of fulfilling the requirements specified in the service request. In the case when a new instance of the service has to be created, the proper resources should be allocated and, finally, the processing should leave as minimal a footprint as possible. The first experiments performed with the VSM, even though it still lacks automatic resource freeing, are very promising. In these experiments the called service is configured to consume 80% of the CPU for 60 seconds, which simulates a relatively exhausting operation. The requests arrive almost at the same time and require at least 30% of the CPU capacity to be free. The outcome of this simple simulation is an increasing number of instances of the service to handle the sudden peak in requests; however, there is no mechanism yet to limit the number of instances afterwards. Currently, using the VSM, the following features are available:
- a mechanism for the creation and use of services, using the SOA paradigm and virtualization.
This makes services independent of the available hardware architecture and ensures the efficient use of hardware resources,
- a method of delivery of services in a virtual machine environment which takes into account the performance parameters of the virtual machine, the service, and the equipment on which a virtual service instance is installed,
- an open architecture of the tool and its constituent modules; communication takes place via defined interfaces, using XML-RPC for internal communication and the SOAP protocol for external communication.

IV. SERVICE QUALITY ISSUES

In order to assure the proper quality of service delivery, three requirements are to be considered: effective and suitable monitoring and estimation of service parameters, proper distribution of service requests according to the current estimation of service parameters and resource utilization, and management of the service execution resources. The quality of network services depends on the communication link properties and the effectiveness of request processing on the server. Both affect the quality of each service instance separately and can be expressed by the values of the service instance non-functional parameters. To satisfy the requested service parameters, distribution of the request to the proper service instance must be performed. The problem of service request distribution can generally be stated using a criterion function Q:

is_j,m ← arg_m Q(ψ^1_j,m, ..., ψ^f_j,m, ..., ψ^F_j,m)   (1)

The task is to select such an instance is_j,m to serve the request for service as_j that the criterion Q is satisfied; in the particular case it is the task of finding the extremum of the criterion function. To perform the proper service instance selection, the current values of the parameters of each instance should be known. This requires methods of estimation and/or forecasting of the values of such parameters. The Broker uses two approaches: statistical methods based on time series analysis, and a method based

on an artificial intelligence approach, using a fuzzy-neural network and monitoring of the parameters characterizing the execution environment. Both approaches require the estimation of the values of previous executions, which is described in the subsequent sections. On the basis of the forecasted and/or monitored values of the parameters, several approaches to service distribution algorithms can be adopted. Generally, the fully controlled environment case and the not fully controlled environment case can be distinguished. The first refers to the use of dedicated links and VSMs in all execution systems. The second is when there is no full control of the communication links; this is the most common condition for the delivery of services in the Internet according to the SOA paradigm. For the VSDS, best-effort algorithms for the uncontrolled environment are implemented by now. The two most commonly considered service parameters are used: the data transfer time and the completion time of service execution in the processing server. The selection of a service instance for the requested service as_j is performed according to criterion (2):

is_j,m ← arg_m min(T^j,m_PROCESS + T^j,m_TRANSFER)   (2)

where T^j,m_PROCESS is the forecasted execution time of service instance is_j,m and T^j,m_TRANSFER is the forecasted data transfer time for fulfilling the request for service instance is_j,m. As mentioned above, the algorithms use time series analysis based forecasting or fuzzy-neural network based forecasting, both shortly described later.

V. EVALUATION OF VALUES OF SERVICE PARAMETERS

A. Service Monitoring and Estimation in the Broker

The Broker performs request distribution based on the actual values of service instance parameters, forecasted from the monitored and collected data of previous executions.
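The selection criterion (2) can be sketched as follows: for each instance, forecast the execution and transfer times from the recorded history (here with a plain moving average, one of the time-series methods described in this section) and pick the argmin. The instance names and history values below are invented.

```python
def moving_average(history, L=3):
    """Forecast the next value as the mean of the last L observations."""
    window = history[-L:]
    return sum(window) / len(window)

# Recorded times (seconds) of previous requests per instance is_j,m:
history = {
    "is_1,1": {"process": [2.1, 2.0, 2.2], "transfer": [0.30, 0.28, 0.35]},
    "is_1,2": {"process": [1.6, 1.7, 1.8], "transfer": [0.90, 0.95, 0.80]},
    "is_1,3": {"process": [2.4, 2.5, 2.3], "transfer": [0.10, 0.12, 0.11]},
}

def select_instance(history):
    # Criterion (2): minimize forecasted T_PROCESS + T_TRANSFER.
    forecast = {
        name: moving_average(t["process"]) + moving_average(t["transfer"])
        for name, t in history.items()
    }
    return min(forecast, key=forecast.get)

print(select_instance(history))   # is_1,1
```

Note how the fastest-executing instance (is_1,2) is not chosen: its forecasted transfer time outweighs its execution-time advantage, which is exactly why the criterion sums both components.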
The two basic service parameters, the data transfer time and the completion time of service execution in the processing server, are obtained in two ways: with use of SOAP-based cooperation between the Broker and the system which executes the service instance, and with use of monitoring of the TCP session which carries the service request to the processing server. In the first case the execution system must be able to interpret the specific additional data in the Broker's calls for the service, and to include additional specific data in the service response. The execution systems controlled by the VSM have such an ability. The Broker records the arrival time of each request for the service, the start time of the call for the service to the server which executes the service instance, and the time of the end of processing of the service response from the server. The difference of the last two times establishes the total time of the request processing. The execution time of the service instance (the processing time in the server) is delivered in the service response SOAP message. The service data transfer time is assumed to be the difference between the total time of the request processing and the service instance execution time. This time includes the time of resolving the DNS address of the processing server and all pre-transfer operations; however, these components of the request intervals are measured by the Broker and can be excluded, as described later.

[Fig. 3. The TCP session of a handled client request]

When cooperation between the Broker and the execution system is not possible, the values of the essential service parameters are obtained by analysing the TCP session that carries the Broker's request for the service to the execution system. The client request arrives at the moment t_RA (Fig. 3). The interval T_DM is the time of choosing the service instance (or server) that will process the request.
Starting from this point the Broker measures the following time intervals of the TCP session of the call to the processing server: the time of resolving the DNS name T_DNS, the time of establishing the TCP connection (TCP connect time) T_TCPC, the time to receive the first byte of data transferred from the server executing the service T_FBYTE, and the total time of the request processing T_SUM = T_FBYTE + T_DTRANS. The Broker also records the number of sent bytes B_S and the number of received bytes B_R during the session. It is assumed that the services are delivered using the SOAP standard and that the server responds after receiving all the necessary data from the Broker. If we assume that the transfer rates to and from the server are similar (which may not be true in the general case), the service execution time T_PROCESS can be evaluated with formula (3):

T_PROCESS = T_FBYTE − T_DNS − (B_S / B_R) · (T_SUM − T_FBYTE) − 2 · T_TCPC   (3)

Very often the request for a service does not transmit any data in addition to those that fully identify the service, so the time of this transmission can be neglected. Because the processing servers are registered in the Broker earlier, their IP addresses can be known, and the T_DNS time can usually also be neglected. In such a case the service execution time

T_PROCESS is more accurately calculated according to formula (4):

T_PROCESS = T_FBYTE − 2 · T_TCPC   (4)

The data transfer time is the difference between the total time of the request processing and the service execution time T_PROCESS. The service delivery time (for the client which requests the service from the Broker) also includes the decision making time T_DM, which, depending on the distribution algorithm, may be neglected or not. In the more general case some pre-transfer operations (e.g. the SSL connect/handshake) must also be taken into account; the Broker can measure such operations too. It must be noted that, in the case of lack of cooperation between the Broker and the execution system, the estimation procedure is possible only when the data transfer time from the Broker to the server is negligible, or the data transfer rates to and from the server can be compared, or the data transfer time to the server can be measured separately.

Forecasting of the values of the service execution time and the data transfer time for incoming requests is also performed. First, with use of time series analysis, i.e.:
- the moving average of the recorded times of previous executions: t̂^n_j,m = (1/L) Σ_{k=n−L}^{n−1} w_k t_k,
- the moving median of the recorded times of previous executions: t̂^n_j,m = med(t_{n−1}, t_{n−2}, ..., t_{n−L}),
where t̂^n_j,m is the forecasted time for the n-th request served by service instance is_j,m, L is the length of the observation window, w_k is the window function, t_k are the times of the previous requests served by instance is_j,m, and n is the index of the current request.

When cooperation with the server is possible, i.e. when the VSM is applied, the evaluation of the service transfer and execution times can be performed with use of the concept of a fuzzy-neural controller built with a 3-layered fuzzy-neural network [1], [3], [4], [6]. The fuzzy-neural controllers model each communication link and each service instance separately (Fig. 4).
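The estimation described by formulas (3) and (4) can be sketched numerically as below. The decomposition assumed here reads the formulas as: the request transfer time is the response transfer time T_SUM − T_FBYTE scaled by B_S/B_R, and 2·T_TCPC covers the connection setup plus both one-way latencies; the sample session values are invented, not taken from the experiments.

```python
def t_process_general(t_fbyte, t_dns, t_tcpc, t_sum, b_s, b_r):
    # Formula (3): subtract the DNS time, the request transfer time
    # (response transfer time T_SUM - T_FBYTE scaled by B_S/B_R)
    # and 2*T_TCPC from the time to the first byte.
    return t_fbyte - t_dns - (b_s / b_r) * (t_sum - t_fbyte) - 2.0 * t_tcpc

def t_process_simplified(t_fbyte, t_tcpc):
    # Formula (4): request payload and DNS resolution neglected.
    return t_fbyte - 2.0 * t_tcpc

# Small request (B_S << B_R), so (3) and (4) nearly agree:
t3 = t_process_general(t_fbyte=2.30, t_dns=0.01, t_tcpc=0.05,
                       t_sum=2.60, b_s=400, b_r=80_000)
t4 = t_process_simplified(t_fbyte=2.30, t_tcpc=0.05)
print(round(t3, 4), round(t4, 4))   # 2.1885 2.2
```

The data transfer time would then follow as the total request processing time minus the estimated T_PROCESS, as stated in the text.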
The adaptive model, described in detail in [3], evaluates the output value (the transfer or execution time) using two input values of parameters characterizing the communication link or the execution system. The inputs of the communication link model are, by now, the link throughput for a sample value of data downloaded from the execution system, and the link latency (namely the TCP connect time), both derived with use of measurements and time series analysis. The input of the service instance model can be any parameter monitored by the VSM, or the counted number of calls being processed in the server.

[Fig. 4. Modeling service instances as fuzzy-neural controllers]

B. Service Execution Monitoring in the Execution System

The inputs of the service instance model are received from the VSM. For this aim the VSM is equipped with a Monitoring Unit that is able to collect different parameters related to service execution and the state of the execution systems, depending on the Broker's request. A monitoring agent shall accompany any currently active service. The frequency of measurements and the agreed values of attributes are present in the contract, so the agent shall simply check whether the operations are performed accordingly. To ensure efficient resource utilization, the requests incoming to the VSM are attributed to execution classes; both functional and non-functional requirements are considered. The VSM exploits the combination of service orientation with automatic management using constraints attached to the requests, which increases the overall reliability, response time, and constraint fulfilment, reducing the need for manual work at the same time. Currently two different ways of performing the monitoring can be used. The first solution is based on Munin, a tool used to monitor service execution and the state of the execution systems.
Unfortunately this approach, although highly efficient, does not allow direct monitoring of the virtual machines used for service instance execution. That is why it was decided to introduce an alternative way of monitoring. The second available solution is based on xen-stat, which is an integral part of the Xen virtualizer package. Using xen-stat allows monitoring the server and each virtual machine at intervals specified by the administrator. In particular, it is possible to monitor: the CPU consumption of each virtual machine individually, the CPU usage on the server, the RAM usage of each virtual machine individually, and the RAM usage on the server. For storing the monitoring data a simple MySQL database is used. The implemented database consists of three tables: measurements, which contains information about the virtual machine load; capsules, which contains data about the virtual machines; and servers, which contains information about the server and its current load. To collect and record information about the system load a special module implemented in Perl is used. The module is run every minute and takes measurements at intervals set by the administrator. The data from the monitoring of virtual

machines and servers are displayed by the xentop command; the results are then parsed and stored in the database. It is then possible to visualize the results of the monitoring of resource usage by the running virtual machines using the Xen Graph tool. Concluding, the current solution is very flexible because, depending on the needs, monitoring of the system usage and service execution can be performed with two different tools: Munin and xen-stat.

VI. EFFECTIVENESS TESTS

In the preliminary experiments the selected proposed solutions were tested in a real environment, the Internet. The Broker has been implemented as a fully operational tool in Java technology. It supports all the described functionalities and serves its services in accordance with SOA standards. The experiments focused on testing the usefulness of the monitoring and evaluation of the service instance execution time; however, the data transfer time was also tested. The Broker served a number of clients requesting a fixed set of network services from five servers. Six test services were established. Each service was running on each server, giving a total of 30 service instances. The services generated different amounts of data to transfer: from 50 kB to 200 kB. The service instances differed in their basic execution time, i.e. the minimal processing time not including any server service handling overhead (e.g. the queuing delay). The times varied from 2 to 6 seconds. The clients which requested the services and the Broker were located at the Wrocław University of Technology campus. The servers that ran the service instances were located in five different European countries, as shown below:
- planetlab2.rd.tut.fi, Finland,
- ple1.dmcs.p.lodz.pl, Poland,
- planetlab1.unineuchatel.ch, Switzerland,
- planetlab4.cs.st-andrews.ac.uk, UK,
- planet1.unipr.it, Italy.
The research scheme was the following:
- the clients requested all six services in a round-robin fashion, each client in a different order,
- the number of clients increased from 0 to 80 during the 3-hour test,
- the requests were distributed by the Broker according to the round-robin algorithm,
- the essential moments of each TCP session handling the requests for services were measured, and the service instance execution times received in the server responses were recorded,
- for each request the fuzzy-neural controller calculated the forecasted value of the service instance execution time and of the data transfer time,
- the estimated and forecasted times were compared against the measured ones.

All requests transmitted no additional data to the server. It was assumed that the true, real values of the service parameters were: the service instance execution time measured in the server, T_PROCESS_REAL, and the difference between the total time of the request processing T_SUM and the service instance execution time measured in the server, T_SUM − T_PROCESS_REAL, assumed as the real value of the data transfer time.

[Fig. 5. The MAPE of T_PROCESS time estimation for all service instances]

[Fig. 6. The MAPE of fuzzy-neural forecasting of T_PROCESS time for all service instances]

The test showed that none of the five servers was overloaded by the requests for services. Figure 5 shows the Mean Absolute Percentage Error (MAPE) calculated for the estimation of the service execution time T_PROCESS, using monitoring of the TCP session only. The figure shows the MAPE for all 30 instances grouped in the following manner: the first six instances from the first server (each of a different service), the next six instances from the second server, and so on. The total MAPE over all estimations is very good and equals 1.31%. It is interesting to see the visible difference in the error level for particular servers. For server 2 (instances 7 to 12) the total MAPE is the smallest, 0.51%. For server 3 (instances 13 to 18) the total MAPE is the largest, 2.04%.
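The MAPE metric used throughout this section can be computed as below; the sample measured and forecasted times are invented, not the experimental data.

```python
def mape(measured, forecasted):
    # Mean Absolute Percentage Error over paired observations,
    # with the measured values taken as the reference.
    errors = [abs(m - f) / m for m, f in zip(measured, forecasted)]
    return 100.0 * sum(errors) / len(errors)

# Invented sample: four measured execution times and their forecasts (seconds).
measured   = [2.00, 2.10, 2.05, 2.20]
forecasted = [2.02, 2.07, 2.08, 2.16]
print(round(mape(measured, forecasted), 2))   # 1.43
```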
This could be caused by different server queue thresholds (the number of requests processed in parallel); however, it should be thoroughly examined. Fig. 6 shows the effectiveness of forecasting the service instance execution times T_PROCESS with use of the fuzzy-neural controller. The total prediction error (MAPE) equals 1.32%, which is a very good value. However, it must be noted that the experiment was performed under stable server operation. In this case no obvious dependency of the error level on the particular server is visible. The forecasting of the data transfer times was performed using the moving average method and with use of the fuzzy-neural controller. The forecasting errors were much greater than for the execution times: the total MAPE for the fuzzy-neural controller was 13.5% (for the Weighted Moving Average even greater, about 19.3%).

[Fig. 7. The MAPE of fuzzy-neural forecasting of data transfer time for all service instances]

This was due to the fact that a very small amount of data was transmitted and the transfer times were short: they varied from about 50 ms to hundreds of milliseconds. At the same time it was found that one of the links (to server 2, service instances 7 to 12) was apparently problematic and significantly increased the total error, as shown in Fig. 7. It must be noted that the preliminary experiment was not focused on testing the fuzzy-neural controller for data transfer time forecasting, and the learning parameters of the controller were not tuned either. When these conditions are met, better forecasting is expected.

VII. CONCLUSION

Quality of service in Service Oriented Architectures raises a number of issues which involve suitable monitoring and estimation of the values of service parameters, distribution of service requests to selected execution systems running instances of services, forecasting of the values of service parameters, and virtualization management, all the while adhering to SOA paradigms. The automation of such a process requires a well designed architecture and procedures of a service quality aware system. The presented solutions to the mentioned problems are still under study; the first experiments are very promising. The effectiveness of the service request distribution algorithms depends on a precise evaluation of the values of the service instance parameters. The two basic ones, the execution time and the data transfer time, are the key to satisfying the Quality of User Experience (QoE). The experiment showed that the evaluation of the execution time is very good. However, the case of sending large amounts of data from the client must also be tested. The evaluation of the data transfer time is to be explored further and requires well prepared, extensive experiments. It is worth noting that the presented solutions are implemented as a fully operational tool, ready for use in a real environment, that applies SOA standards and the Simple Object Access Protocol (SOAP).

REFERENCES

[1] L. Borzemski, A. Zatwarnicka, K. Zatwarnicki, "Global distribution of HTTP requests using the fuzzy-neural decision-making mechanism," Proc. of 1st Int. Conf. on Computational Collective Intelligence, Lecture Notes in AI, Springer.
[2] D. Box, F. Curbera, M. Hondo, C. Kaler, D. Langworthy, A. Nadalin, N. Nagaratnam, M. Nottingham, "Web Services Policy Framework (WS-Policy)," http://public.dhe.ibm.com/software/.
[3] M. Fras, A. Zatwarnicka, K. Zatwarnicki, "Fuzzy-neural controller in service request distribution broker for SOA-based systems," Proc. of Int. Conf. Computer Networks 2010, A. Kwiecien, P. Gaj, P. Stera (eds.), Berlin, Heidelberg, Springer.
[4] L. Jain, N. Martin, "Fusion of Neural Networks, Fuzzy Sets, and Genetic Algorithms: Industrial Applications," CRC Press LLC, London.
[5] A. Keller, H. Ludwig, "The WSLA Framework: Specifying and Monitoring Service Level Agreements for Web Services," Journal of Network and Systems Management, vol. 11, no. 1, Springer, New York.
[6] E. Mamdani, "Application of Fuzzy Logic to Approximate Reasoning Using Linguistic Synthesis," IEEE Trans. on Computers, vol. C-26, issue 12.
[7] D. Rosenberg, "Analyst: Virtualization management key to success," http://news.cnet.com/.
[8] P. Sargeant, "Data centre transformation: How mature is your IT?".
[9] P. Sempolinski, D. Thain, "A Comparison and Critique of Eucalyptus, OpenNebula and Nimbus," Cloud Computing Technology and Science (CloudCom), 2010 IEEE Second International Conference.
[10] B. Snyder, "Server virtualization has stalled, despite the hype," InfoWorld.
[11] L. Zeng, B. Benatallah, A. Ngu, M. Dumas, J. Kalagnanam, H. Chang, "QoS-aware middleware for Web services composition," IEEE Transactions on Software Engineering, vol. 30, no. 5.
ACKNOWLEDGMENT The research presented in this paper has been partially supported by the European Union within the European Regional Development Fund programs no. POIG /09-00 and POIG /08.


ILP Modeling of Many-to-Many Transmissions in Computer Networks

Krzysztof Walkowiak, Damian Bulira and Davide Careglio
Department of Systems and Computer Networks, Wroclaw University of Technology, Wrocław, Poland
Advanced Broadband Communication Center (CCABA), Universitat Politecnica de Catalunya, Barcelona, Spain

Abstract—The information flow in today's networks grows exponentially. This fact is caused by intensive changes in Internet communication paradigms: transmission in the Internet has evolved from simple one-to-one communication to more sophisticated methods of data exchange. In this paper, the authors concentrate on many-to-many (m2m) communication, which is mainly driven by the growing popularity of on-line conferences and telepresence applications. We propose several ILP models covering different videoconferencing schemes. The proposed models address both overlay and joint scenarios: we formulate an overlay model, where m2m flows are optimally established on top of a given set of network routes, and a joint model, where the network routes and the m2m flows are jointly optimized. Each model is accompanied by a comprehensive description and is based on a real teleconference system.

Keywords—ILP modeling, many-to-many communication, network optimization

I. INTRODUCTION

With the constant growth of network traffic, new communication paradigms appear. The only form of communication flow in the Global Network in its first phase of operation was one-to-one transmission. An example of this transmission is fetching a website from a server or a simple one-to-one Voice over IP call. To optimize the delivery of the same information from one host to a group of others, one-to-many (multicast) applications were introduced. Good examples are IPTV streaming in triple play services (Internet, phone and TV) [1] and the exchange of synchronization messages in the Network Time Protocol [2].
Furthermore, one-to-one-of-many (anycast) transmission can be distinguished. In anycast, packets are routed to one of many servers represented by a common address, along the lowest-cost path from the source to a destination. Such distributed networks are called Content Delivery Networks (CDN) and they play a major role in current Internet-based business [3]. In this paper, the authors focus on many-to-many (m2m) communication as one of the fastest emerging paradigms and propose ILP models of offline problems related to the optimization of m2m flows. In this type of transmission all the hosts exchange information with every other host in the m2m group. Examples of such traffic are video and teleconferencing, distance learning, multiplayer on-line gaming, distributed computing, etc. We focus on videoconferencing as a widespread and demanding example of an m2m service. Moreover, a videoconference system is no longer a nice-to-have feature for the enterprise, but an essential day-to-day tool that makes business more effective and successful. According to Cisco, business videoconferencing will grow sixfold between 2011 and 2016 [4]. The authors of the report claim that business videoconferencing traffic is growing significantly faster than overall business IP traffic, at a compound annual growth rate of 48 percent over the forecast period. The main contributions of this paper are ILP models for many-to-many transmission in computer networks where rendezvous points are used. We propose overlay and joint models, the latter assuming combined optimization of the overlay and the underlying network. The models support videoconferencing applications, but can be easily redefined for other types of m2m traffic.

(The work was supported in part by the Polish National Science Center (NCN) under the grant which is being realized in years. Damian Bulira is also with the Advanced Broadband Communication Center (CCABA), Universitat Politecnica de Catalunya, Barcelona, Spain.)
The remainder of this paper is organized as follows. Section II reviews related work on many-to-many communication. Section III describes m2m communication in computer networks. In Section IV, an ILP model for an overlay network is presented. Section V contains a similar model for a joint m2m system; moreover, two model notations are provided: link-path and node-link. Finally, the paper is concluded in Section VI.

II. RELATED WORKS

The idea of many-to-many communication in networks is not a recent invention. The author of [5] predicted that teleconferences would become as popular as television; after many years we know how true this prediction was. An extended view of m2m applications against the background of multicast is presented in [6]. The authors define m2m traffic as a group of hosts, each of which receives data from multiple senders while also sending data to all of them. They also highlight that this communication paradigm may cause complex coordination and management challenges. Examples of m2m applications are, among others, multimedia conferencing, resource synchronization, distributed parallel processing, shared document editing, distance learning, and multiplayer games. Moreover, the paper presents a brief comparison of delay

tolerance and mentions that m2m applications are characterized by high delay intolerance. In [7], the authors propose a scheduling architecture for m2m traffic in switched HPC (High Performance Computing) networks. The paper also mentions other applications of m2m communication in data centers, for example process and data replication [8], dynamic load balancing [9], and moving virtual machine resources between servers connected into a cloud [10]. In [11], the authors present optimal and nearly optimal hot potato routing algorithms for many-to-many transmissions. In hot potato (deflection) routing, a packet cannot be buffered and is therefore always moving until it reaches its destination. This scenario is mostly applicable in parallel computing applications. Many-to-many communication is also extensively investigated in the area of radio networks; an overview of this topic is presented in [12]. The authors of [13] propose M2MC, a middleware system architecture for m2m applications in broadcast networks (both radio and wired). Because of its broadcast orientation, M2MC does not require any resource-consuming routing protocols. The system architecture comprises a Message Ordering Protocol, a Member Synchronization Protocol, and protocols for processes to join and leave groups. Other applications of m2m communication exist in the field of online gaming [14], [15], [16], [17]. All the players need to exchange the current state of the game with the others. In dynamic games delay tolerance is crucial, and online gaming protocols are designed to transfer small portions of data in frequently transmitted packets. When multiple servers exist, the game world is usually split into several zones and users are assigned to a server according to the zone in which their avatar currently resides. MILP formulations for many-to-many traffic grooming in WDM networks are presented in [18] and [19]. The authors not only formulate MILP problems, but also present approximate heuristic algorithms.
Both solutions are considered for non-splitting networks, where optical-electronic-optical conversion is used, and for networks capable of splitting the signal in the optical domain. In WDM networks, thanks to the wide optical spectrum, even broadband many-to-many multimedia streams may be aggregated (groomed) to use the available bandwidth more efficiently. Many-to-many transmission in telepresence applications is presented in [20]. The authors compare two architectures: centralized and distributed. Moreover, the video transmission is encoded using scalable video coding (SVC) [21]. In SVC a stream consists of a base layer and several enhancement layers that, after merging with the base layer, improve the video quality. Every client receives as many layers as the link connecting it to the network can handle at low delay. Finally, different approaches to video exchange during videoconferences have been presented in [22]. The authors propose an algorithm to build a separate tree for each enhancement layer in SVC-based transmission. They provide a theoretical analysis to show the optimality of the algorithm and confirm it in extensive simulations. In [23], the authors propose a flow control protocol based on a cost-benefit approach. A practical realization of this protocol framework for many-to-many flow control in overlay networks is designed and tested both in extensive simulations and in real-life experiments. Overlay networking is a subject of interest in numerous publications. An extensive work on overlay networks can be found in [24]. The author provides a complete introduction to the topic, followed by a description of the architecture, requirements, underlying topologies, and routing information. The work is also supplemented with a discussion of security and overlay network applications. III.
MANY-TO-MANY COMMUNICATION

As mentioned in the previous section, many-to-many communication is a paradigm of data exchange among a group of hosts in which every group member receives information from all the other hosts involved in the transmission. Basically, during the transmission every host in the group has the same set of information (i.e., all videoconference participants see the video streams of the other conference members). The overall set of m2m demands is known in advance and the problem consists in optimizing the establishment of the m2m flows that serve these demands. We divide this abstract model into two more specific problems for communication in computer networks:

- Overlay model. In this model, the m2m flows are determined assuming a given set of already established network routes, i.e., the service layer is decoupled from the IP layer. This model is easier to deploy, since it needs neither network topology information nor control of traffic routing in the network layer.
- Joint model. In this model, the establishment of the m2m flows also involves the underlying network layers (e.g., the IP layer, MPLS layer, optical layer, etc.). This model is harder to implement, but allows optimizing network routes and m2m flows together in order to minimize bandwidth usage.

IV. OVERLAY M2M SYSTEMS - OPTIMIZATION MODEL

In this section, we present the ILP model of the offline m2m flow allocation in an overlay system. First, we introduce the main assumptions of an overlay system with m2m flows. We are given a set of users (overlay nodes) indexed v = 1, 2, ..., V that participate in the system, i.e., each user generates a stream with rate h_v (defined in bps) and receives the aggregated streams of the other users. For instance, in the context of a teleconferencing system, the value h_v depends on the selected coding standard and resolution. A special compression ratio α_v is defined for each user: the user receives the overall stream compressed according to this ratio.
These assumptions also follow from real teleconference systems [25], [26]. In the considered system, servers s = 1, 2, ..., S are rendezvous points. In a nutshell, each user sends its flow to one selected server. The server aggregates all received flows and then provides the stream to each user with the compression ratio. Each server s = 1, 2, ..., S has a limited upload and download capacity (u_s and d_s, respectively). Another possible model not addressed here

is a case when the servers exchange information with each other and each user receives the aggregated stream of all users from its one selected server. As an example, Fig. 1 shows the considered overlay model in a network with 4 clients (users) and 2 servers. Clients v_1 and v_2 send their streams to server s_2, and clients v_3 and v_4 to s_1. Both upstream and downstream flows are presented and the transmission volume is shown. For example, client v_1 transmits a stream with volume h_1 to server s_2 and receives two streams compressed with the requested compression ratio α_1. The former comes from s_2 and consists of stream h_2 from client v_2 (its own stream is not sent back); the latter comes from s_1 and consists of streams h_3 and h_4 from the corresponding clients v_3 and v_4.

Fig. 1. Many-to-many transmission model in overlay network

There are two sets of decision variables in the model. First, z_vs denotes the selection of server s for demand v. The second variable, H_s, is auxiliary and defines the flow of all users connected to server s. The objective is to minimize the overall streaming cost resulting from the allocation of users to servers. For each pair of overlay nodes (users and/or servers) we are given a constant ζ_vw denoting the streaming cost of one capacity unit (e.g., Mbps) on the overlay link from node v to node w. The cost can be interpreted in many ways, e.g., as network delay (in ms), bandwidth consumption, the number of Autonomous Systems (ASes) on the path, etc. To present the model we use the notation of [27].

indices
v, w = 1, 2, ..., V   users (overlay nodes)
s = 1, 2, ..., S      servers (overlay nodes)

constants
d_s    download capacity (bps) of server s
u_s    upload capacity (bps) of server s
ζ_vw   streaming cost on the overlay link from node v to node w
h_v    stream rate (bps) generated by node (client) v
α_v    compression ratio of node (client) v
N_s    maximum number of users that server s can serve

variables
z_vs   = 1 if user v is assigned to server s; 0 otherwise (binary)
H_s    flow aggregated at server s (continuous)

objective
min F = Σ_v Σ_s z_vs h_v ζ_vs + Σ_v Σ_s α_v (H_s − z_vs h_v) ζ_sv   (1)

subject to
Σ_s z_vs = 1                       v = 1, 2, ..., V   (2)
H_s = Σ_v z_vs h_v                 s = 1, 2, ..., S   (3)
H_s ≤ d_s                          s = 1, 2, ..., S   (4)
Σ_v α_v (H_s − z_vs h_v) ≤ u_s     s = 1, 2, ..., S   (5)
Σ_v z_vs ≤ N_s                     s = 1, 2, ..., S   (6)

The objective (1) is to minimize the streaming cost of transferring all m2m flows in the system. In more detail, function (1) comprises two elements. The first one, Σ_v Σ_s z_vs h_v ζ_vs, denotes the cost of streaming the data from users to servers. The second part, Σ_v Σ_s α_v (H_s − z_vs h_v) ζ_sv, defines the cost of streaming the data in the opposite direction, from each server to each user. Recall that for each user a special compression ratio α_v is given. Moreover, if a particular server s is selected by user v (i.e., z_vs = 1), the flow of this server is decreased by the flow of user v. Equation (2) assures that exactly one server is selected for each user v. Constraint (3) defines the flow served (aggregated) by each server, i.e., the sum of the flows of all users assigned to s. In (4) and (5) we define the download and upload capacity constraints for the servers. Each server uploads the aggregated stream with the defined compression ratio to each user; therefore, as in (1), the original flow of user v is not sent back to this node. Since the upload and download flows of users are constant, we do not formulate capacity constraints for the user nodes. Finally, constraint (6) bounds the number of users that can be served by each server. This limit follows from real m2m systems (e.g., teleconferencing systems) [26]. The presented model (1)-(6) is strongly NP-hard, since it is equivalent to the Multidimensional Knapsack Problem [28].

A special case of the overlay model (1)-(6) is the scenario where only one server (S = 1) is used to provide the m2m transmissions in the network. Notice that in this case the model (1)-(6) becomes an analytical model, since there are no decision variables: all users are assigned to the same server. As a consequence, the aggregated flow at the server is constant and given by

H_1 = Σ_v h_v   (7)
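On toy instances, model (1)-(6) can be checked by brute force: the sketch below (not part of the paper; function name and instance data are illustrative assumptions) enumerates all user-to-server assignments, filters by constraints (4)-(6), and evaluates objective (1). Users are nodes 0..V-1 and servers are nodes V..V+S-1 in the cost matrix.

```python
from itertools import product

def solve_overlay_m2m(h, alpha, zeta, d, u, N):
    """Brute-force the overlay m2m model (1)-(6): h[v] stream rates (bps),
    alpha[v] compression ratios, zeta[a][b] link costs over nodes
    0..V-1 (users) and V..V+S-1 (servers), d/u server download/upload
    capacities, N[s] max users per server. Returns (best_cost, assignment)."""
    V, S = len(h), len(d)
    best = (float("inf"), None)
    for assign in product(range(S), repeat=V):  # assign[v] = chosen server
        H = [sum(h[v] for v in range(V) if assign[v] == s) for s in range(S)]
        if any(H[s] > d[s] for s in range(S)):                      # (4)
            continue
        if any(sum(alpha[v] * (H[s] - (h[v] if assign[v] == s else 0))
                   for v in range(V)) > u[s] for s in range(S)):    # (5)
            continue
        if any(sum(1 for v in range(V) if assign[v] == s) > N[s]
               for s in range(S)):                                  # (6)
            continue
        # objective (1): user->server upstream plus server->user downstream
        cost = sum(h[v] * zeta[v][V + assign[v]] for v in range(V))
        cost += sum(alpha[v] * (H[s] - (h[v] if assign[v] == s else 0))
                    * zeta[V + s][v] for v in range(V) for s in range(S))
        if cost < best[0]:
            best = (cost, assign)
    return best

# Tiny illustrative instance: 2 users (nodes 0-1), 1 server (node 2), unit costs
cost, assign = solve_overlay_m2m(
    h=[1.0, 2.0], alpha=[1.0, 1.0],
    zeta=[[0, 0, 1], [0, 0, 1], [1, 1, 0]],
    d=[10.0], u=[10.0], N=[2])
print(cost, assign)  # prints 6.0 (0, 0)
```

Enumeration is exponential in V, which is consistent with the strong NP-hardness noted above; for realistic instance sizes a MILP solver is needed instead.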


More information

- A Wireless Mesh Network NS-3 Simulation Model: Implementation and Performance Comparison with a Real Test-Bed. Dmitrii Dugaev, Eduard Siemens, Anhalt University of Applied Sciences.
- Design and Development of Load Sharing Multipath Routing Protocol for Mobile Ad Hoc Networks. K. V. Narayanaswamy, C. H. Subbarao, MSRUAS, Bangalore.
- Chapter 1: Introduction (network performance analysis and queueing theory, from Erlang and Engset onward).
- New World Telecommunications Limited: 2nd Trial Test Report on 3.5 GHz Broadband Wireless Access Technology, 20 April 2006.
- Whitepaper: 802.11n, the Next Generation in Wireless Technology.
- Packet Queueing Delay in Wireless Networks with Multiple Base Stations and Cellular Frequency Reuse.
- Influence of Load Balancing on Quality of Real Time Data Transmission. Nataša Maksić, Petar Knežević, et al. Serbian Journal of Electrical Engineering, vol. 6, no. 3, December 2009.
- IRMA: Integrated Routing and MAC Scheduling in Multihop Wireless Mesh Networks. Zhibin Wu, Sachin Ganu, Dipankar Raychaudhuri, WINLAB, Rutgers University, 2006.
- Extended-rtPS Algorithm for VoIP Services in IEEE 802.16 Systems. Howon Lee, Taesoo Kwon, Dong-Ho Cho, KAIST.
- Local Area Networks (lecture notes: shared transmission media, LAN basics).
- On the Traffic Capacity of Cellular Data Networks. T. Bonald, A. Proutière, France Telecom R&D.

- Comparative Analysis of Congestion Control Algorithms Using ns-2. Sanjeev Patel, P. K. Gupta, Arjun Garg, Prateek Mehrotra, Manish Chhabra.
- Modelling Quality of Service in IEEE 802.16 Networks. Giuseppe Iazeolla, Pieter Kritzinger, Paolo Pileggi.
- CSE331: Introduction to Networks and Security, Lecture 6, Fall 2006 (OSI reference model).
- ENSC 427: The Analysis and Simulation of VoIP. Demet Dilekci, Conrad Wang, et al., Spring 2013.
- A New Fault Tolerant Routing Algorithm for GMPLS/MPLS Networks. Mohammad Hossien Yaghmae, Ferdowsi University of Mashhad.
- Part II: OPS-based Metro Area Networks (Chapter 3: Introduction to the OPS-based metro area networks).
- The Impact of QoS Changes towards Network Performance. International Journal of Computer Networks and Communications Security, vol. 3, no. 2, February 2015.
- Enhanced Power Saving for IEEE 802.11 WLAN with Dynamic Slot Allocation. Changsu Suh, Young-Bae Ko, Jai-Hoon Kim, Ajou University.
- Voice Service Support over Cognitive Radio Networks. Ping Wang, Dusit Niyato, Hai Jiang, Nanyang Technological University.
- Introduction to LAN/WAN: Network Layer (lecture notes on routing, internetworking, congestion control).
- Multiple Access Techniques. Prof. Michael Tsai, 2011.

- KNX IP: Using IP Networks as KNX Medium. Hans-Joachim Langels, Siemens AG.
- An Overview of Quality of Service Computer Network. Amandeep Kaur, Apeejay Institute of Management, Jalandhar.
- VPN Overview (abstract on layer 2 WAN-based VPN design and newer VPN technologies).
- Experimental Study for Quality of Service in Voice over IP. Emil Diaconu, Gabriel Preduşcă, Denisa Cîrciumărescu. Scientific Bulletin of the Electrical Engineering Faculty, no. 2 (16).
- Effect of Mobility... (truncated title; keywords: WiMAX, VoIP, mobility patterns, codecs, OPNET). International Journal of Advanced Research in Computer Science and Software Engineering, vol. 5, issue 8, August 2015.
- Chapter 1 Review Questions on computer networks and protocols.
- Improving ZigBee AODV Mesh Routing Algorithm. TELKOMNIKA Indonesian Journal of Electrical Engineering, vol. 12, no. 2, February 2014.
- Capacity of an IEEE 802.11b Wireless LAN Supporting VoIP. David P. Hole, Fouad A. Tobagi, Stanford University. Proc. IEEE ICC 2004.
- Requirements of Voice in an IP Internetwork (real-time voice in a best-effort IP internetwork).
- Performance Evaluation of the Split Transmission in Multihop Wireless Networks. Wanqing Tu, Vic Grout, Glyndwr University.
- An Enhanced TCP Mechanism Fast-TCP in IP Networks with Wireless Links. Jian Ma, Jussi Ruutu, Jing Wu. Wireless Networks 6 (2000) 375-379.

- Network Management and QoS Provisioning: QoS in the Internet (lecture notes).
- The IP QoS System. Wojciech Burakowski, Jarosław Śliwiński, Halina Tarasiuk, Andrzej Bęben, Ewa Niewiadomska-Szynkiewicz, Piotr Pyda, Jordi Mongay Batalla.
- Introduction to Mobile Telecommunications (chapter 1).
- Deployment of VoIP in the IEEE 802.11s Wireless Mesh Networks. Abbasi Ubaid, supervised by Adlen Ksentini, IRISA, June 2008 (master research report).
- Analysis of Basic Quality of Service Mechanism for Voice over IP in Hamdard University Network Campus. Shahbaz Akhatar Siddiqui.
- Optical Interconnection Networks with Time Slot Routing. Ireneusz Szcześniak, Roman Wyrzykowski. Theoretical and Applied Informatics.
- Basic Multiplexing Models. Vassilis Tsaoussidis (Computer Networks lecture notes).
- Overlay Networks (lecture slides by Ion Stoica, UC Berkeley).
- Load Balanced Optical-Network-Unit (ONU) Placement Algorithm in Wireless-Optical Broadband Access Networks. Bing Li, Yejun Liu, Lei Guo.
- Performance Analysis of AODV, DSR and ZRP Routing Protocols in MANET Using Directional Antenna. International Research Journal of Engineering and Technology, October 2015.
- Computer Network (introductory lecture notes).

- Assessment of Traffic Prioritization in Switched Local Area Networks Carrying Multimedia Traffic. F. A. Tobagi, C. J. Fraleigh, M. J. Karam, W. Noureddine, Stanford University.
- Dynamic Congestion-Based Load Balanced Routing in Optical Burst-Switched Networks. Guru P. V. Thodime, Vinod M. Vokkarane, Jason P. Jue, The University of Texas at Dallas.
- CS263: Wireless Communications and Sensor Networks, Lecture 4: Medium Access Control. Matt Welsh, Harvard University, October 5, 2004.
- Performance Analysis and Simulation in Wireless Mesh Networks. Roberto Cusani, Tiziano Inzerilli, Giacomo Di Stasio, University of Rome Sapienza.
- Eindhoven University of Technology, Department of Mathematics and Computer Science: Examination Computer Networks (2IC15), June 22, 2009.
- Role of Clusterhead in Load Balancing of Clusters Used in Wireless Adhoc Network. International Journal of Electronics Engineering, 3(2), 2011.
- Lecture 17: 802.11 Wireless Networking. CSE 222A: Computer Communication Networks, Alex C. Snoeren.
- Comparative Analysis of On-Demand Mobile Ad-Hoc Network. International Journal of Engineering and Computer Science, vol. 2, issue 5, May 2013.
- Behavior Analysis of TCP Traffic in Mobile Ad Hoc Network Using Reactive Routing Protocols. Purvi N. Ramanuj, Hiteishi M. Diwanji, L.D. College of Engineering, Ahmedabad.