AN EFFECTIVE BANDWIDTH MANAGEMENT BY COMPRESSION TECHNIQUE FOR PERFORMANCE ENHANCEMENT IN A SYSTEM
1 Ashok Kumar Tripathi, 2 Ramesh Bharti
1 M.Tech Research Scholar, Dept. of Electronics & Communication Engineering, Jagannath University, Jaipur, India
2 Associate Professor, Dept. of Electronics & Communication Engineering, Jagannath University, Jaipur, India

ABSTRACT
Bandwidth management is a generic term for the various techniques, technologies, tools and policies employed by an organisation to make the most efficient use of its bandwidth resources. Efficient use means both minimising unnecessary bandwidth consumption and delivering the best possible levels of service to users. Bandwidth resources refers to the bandwidth of a network, which might be a local network, a regional area network or field units. HTTP compression is a tool that lets a web server compress content before sending it to the client, thereby reducing the amount of data sent over the network. In addition to typical savings of 60-85% in bandwidth cost, HTTP compression also improves the end-user experience by reducing page-load latency. For these reasons, HTTP compression is considered an essential tool of today's web: it is supported by all web servers and browsers, and used by over 95% of the leading websites. Adding more raw bandwidth, by contrast, requires extra hardware at extra cost. Data compression therefore plays an important role in improving the performance of a low-bandwidth network. Among the available performance-enhancement techniques, data compression is the most suitable and economical way to improve the network further: the technique is simple, freely available, easily implemented and requires no complicated hardware configuration. Data compression is thus adopted in the proposed case study.
Keywords: Bandwidth Management, Organisational Networking System (ONS), Compression Techniques, Intranet Usage Policy, Compression Ratio.

I. INTRODUCTION
The organisational networking system is one of the most essential assets of any organisation, supporting and delivering numerous key services. It is also an integral part of the computing environment that supports organisational business activity and official work [3]. A large organisation cannot operate viably unless all of its offices spread across the country are connected; its functioning depends on good working access to the Intranet. Effective Intranet access requires an information delivery chain of four essential links: (i) content, (ii) connection, (iii) local resources, and (iv) bandwidth management. The content must be in a form accessible to the user. Connectivity is essential to access the content. Local resources, including the local network, computers, necessary tools and the skills of the network administration team, are required to deliver the content to the end users. Bandwidth management, however, is the core issue that demands attention if a seamless flow of information is to be provided. This paper discusses one of the core issues of network management, making adequate bandwidth available, in detail through a case study of the Chanakya organisation.

II. NECESSITY OF BANDWIDTH AND ITS MANAGEMENT
Bandwidth simply represents the capacity of the communication medium to transfer data from source to destination. The wider the route/path for data transmission, the more packets of information are delivered to the user's Intranet-enabled devices. Bandwidth is a gross measurement: the total amount of data transferred in a given period of time at a particular rate, without taking into consideration the quality of the signal itself [4].
Furthermore, bandwidth determines data transfer speed and is the quantity commonly quoted for Intranet connections. The bigger the bandwidth quota, the higher the connection speed and hence the quicker it is to upload and download information. The basic unit of bandwidth is bits per second (bps), scaled as kilobits per second (kbps), megabits per second (Mbps) and gigabits per second (Gbps). Different Intranet connections offer different bandwidth standards. For instance, a traditional dial-up connection provides a very narrow bandwidth of about 56 kbps, while current broadband connections allow data transfer at much higher speeds, ranging from 128 kbps to 2 Mbps. Bandwidth is, both absolutely and relatively, an expensive resource for any organisation. The Chanakya organisation found that it did not have reliable, usable Intranet access for its offices and staff despite considerable investment. Improving the performance of the information delivery chain is urgent if the organisation and its staff are to benefit from the Intranet and take part in organisational activity [5]. The performance of an existing Intranet connection can be enhanced by monitoring and controlling mechanisms; this is known as bandwidth management.
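The relation between bandwidth and download speed described above can be made concrete with a small Python helper. The 700 kB page size below is an illustrative assumption, not a figure from the study:

```python
def download_time_seconds(size_kb: float, bandwidth_kbps: float) -> float:
    """Estimate transfer time for a payload of `size_kb` kilobytes
    over a link of `bandwidth_kbps` kilobits per second."""
    size_kilobits = size_kb * 8.0      # 1 byte = 8 bits
    return size_kilobits / bandwidth_kbps

# The same hypothetical 700 kB page over dial-up (56 kbps)
# versus a 2 Mbps (2000 kbps) broadband link:
print(download_time_seconds(700, 56))    # -> 100.0 seconds
print(download_time_seconds(700, 2000))  # -> 2.8 seconds
```

The same arithmetic underlies every bandwidth-versus-time comparison in the rest of the paper.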
III. BACKGROUND OF THE PROBLEM
Chanakya is a virtual organisation with a large network infrastructure spread across the country. The initial network functioned at 2 Mbps for voice and file transfer purposes. Automation of the organisation then began, and by 2012 the number of network users had increased manifold. Users were employing the network for voice, data transfer, video, photos, emails, file transfer, website access and online database activity. The bandwidth was increased up to 8 Mbps via different media such as line, wireless, microwave and VSAT. The network now faces congestion during the day, from 9 am to 6 pm, and mission-critical web applications are not accessible to users across Chanakya. Chanakya is under pressure to provide its staff and offices with reliable Intranet access. As Intranet connectivity increasingly becomes a strategic resource for management, a robust branch-office network with good connectivity to the Intranet is no longer a luxury for Chanakya; it is now a basic necessity. Use of the Intranet can enhance the efficiency and capacity of Chanakya. Intranet connectivity provides a gateway to vast amounts of information from headquarters to area offices, and thus supports and enhances resource and activity management by both staff and officials. Chanakya has set aside a significant fraction of its budget for increasing bandwidth and upgrading its networks. Despite this considerable investment, many area offices still find themselves without reliable, usable Intranet access for their staff. The demand for bandwidth within area offices is constantly rising and overall bandwidth usage continues its upward trend. A definite trend continues towards P2P and multimedia websites, which contain bandwidth-hungry images, video, animations and interactive content.
Staff in offices use the Intranet in many different ways, some of which are inappropriate or do not make the best use of the available bandwidth [14]. As the popularity and usage of bandwidth-heavy applications grows and the number of network users multiplies, a concerted and coordinated effort to monitor bandwidth utilisation and implement effective bandwidth management strategies becomes increasingly important to ensure excellent service provision. Policy-based bandwidth management was adopted and the bandwidth was optimised to some extent. In spite of this, however, database web services still cannot be accessed in far-flung field units/offices. Since bandwidth is a strategic resource, its efficient usage and management should always be a priority. Without bandwidth management, mission-critical applications would be starved of bandwidth, disrupting services and impacting the operational activities of Chanakya. This study therefore sought to illuminate the bandwidth management strategies employed by Chanakya in its large network. The case study concentrates on technology-based management, i.e. compression techniques to optimise bandwidth at the server end and the browser end [13].

IV. THE PROBLEM
The demand for bandwidth within Chanakya is constantly rising. The available bandwidth is generally not enough to meet demand and to support optimal usage, and area offices face a major challenge in their use of the Intranet. The major challenges experienced by Chanakya are:
(a) Database web applications are not accessible at field/mobile units/offices.
(b) Network violations due to peer-to-peer (P2P) file sharing.
(c) Poor application performance and Quality of Experience (QoE) during congestion.
(d) Mission-critical applications cannot be accessed at last-end offices.
(e) Sluggish network speed during peak office hours.
(f) Sudden drops in bandwidth availability at the user end, with web-enabled hosted applications not responding [14].

V. OBJECTIVES OF STUDY
1. To apply data compression techniques to compress data and optimise the web browser, thereby saving bandwidth.
2. To ensure that mission-critical applications are available at the last field units/offices of Chanakya, and hence to enhance and optimise overall bandwidth for performance enhancement of the organisation network.

VI. APPROACH OF STUDY
As the popularity and usage of bandwidth-heavy applications grows and the number of network users multiplies over the coming years, a concerted and coordinated effort to monitor bandwidth utilisation and implement bandwidth management will become increasingly important to ensure excellent service provision. As already stated, this effort will be greatly facilitated by the formulation and implementation of an organisational bandwidth management strategy. To set the scene, a broad review of technology-based and policy-based bandwidth management can be undertaken, followed by an overview of network monitoring. Bandwidth management components fall into three broad categories: technology-based, policy-based and monitoring. Policy-based bandwidth management was adopted to filter unwanted traffic and make the best use of the available bandwidth. However, the mission-critical web
application was still not fully accessible. As compression techniques are simple and freely available, they were taken up in this case study to optimise bandwidth in Chanakya [12][24][25].

VII. COMPRESSION TECHNIQUE-BASED BANDWIDTH MANAGEMENT
Compressing data reduces the total amount of traffic that traverses the network, thereby reducing congestion. Network compression affects all (previously uncompressed) data prior to transmission, much as WinZip reduces the size of individual files or folders. HTTP compression is an essential tool for speeding up the web and reducing network cost; not surprisingly, it is used by over 95% of top websites, saving about 75% of web traffic. HTTP compression [19][20] is a tool that lets a web server compress content before sending it to the client, reducing the amount of data sent over the network. In addition to typical savings of 60-85% in bandwidth cost, it also improves the end-user experience by reducing page-load latency [1]. The currently most popular web servers [16] (Apache, nginx, IIS) have easily deployable support for compression. Because compression consumes significant CPU, these servers provide a configurable compression-effort parameter, set as a global, static value. The problem with this parameter, besides its inflexibility, is that it says little about the actual number of CPU cycles required to compress the outstanding content requests. Furthermore, it does not take into account important factors such as the current load on the server, response size and content compressibility. HTTP compression is a technique supported by most web browsers and web servers. When enabled on both sides, it can automatically reduce the size of text downloads (including HTML, CSS and JavaScript) by 50-90%. It is enabled by default in all modern browsers; web servers, however, disable it by default, and it has to be explicitly enabled.
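As an illustration of this global, static effort parameter, a typical nginx deployment exposes it as a single directive. The values below are a generic sketch, not the configuration used in the Chanakya study:

```nginx
# Enable HTTP compression with one global, static effort level (1-9).
gzip            on;
gzip_comp_level 6;        # fixed CPU-vs-bandwidth trade-off for all responses
gzip_types      text/html text/css application/javascript;
gzip_min_length 1024;     # skip responses too small to benefit
```

Note that the level applies uniformly, regardless of current server load, response size or content compressibility, which is exactly the inflexibility discussed above.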
A surprising number of sites still do not have compression enabled, despite the benefits to users and the potential saving in bandwidth costs at the server side [1][20][21][22][23].

A. Static vs. Dynamic Compression
Static content consists of files that can be served directly from disk (images, videos, CSS, scripts, etc.). Static compression pre-compresses such files and saves the compressed forms on disk. When static content is requested by a decompression-enabled client (almost every browser), the web server delivers the pre-compressed content without needing to compress it upon the client's request [17]. This mechanism enables fast and cheap serving of content that changes infrequently. Dynamic content, in the context of this paper, consists of web pages produced by application frameworks such as ASP.NET, PHP, JSP, etc. Dynamic web pages are the heart of the modern web [18]. Since a dynamic page can be different for each request, servers compress it in real time. As each response must be compressed on the fly, dynamic compression is far more CPU-intensive than static compression. Therefore, when a server is CPU-bound it may be better not to compress dynamically and/or to lower the compression effort. On the other hand, when the application is bound by network or database capacity, it may be a good idea to compress as much as possible [25][27].

B. CPU vs. Bandwidth Trade-off
The focus in this paper is on compression of dynamic content, namely unique HTML objects generated upon client request. The uniqueness of these objects may result from one or more of several causes, such as personalisation, localisation, randomisation and others. On the one hand, HTML compression is very rewarding in terms of bandwidth saving, typically reducing traffic by 60-85%. On the other hand, each response needs to be compressed on the fly before it is sent, consuming significant CPU time and memory on the server side.
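Static compression as described above can be sketched in a few lines of Python. This is a minimal illustration; the file extensions and the `<name>.gz` naming convention are assumptions, not details from the study:

```python
import gzip
import shutil
from pathlib import Path

def precompress_static(root: str, exts=(".html", ".css", ".js")) -> int:
    """Pre-compress static assets once, so a web server can hand the saved
    '<name>.gz' file to decompression-enabled clients instead of
    compressing the same content again on every request."""
    count = 0
    for path in Path(root).rglob("*"):
        if path.suffix in exts:
            gz_path = path.with_name(path.name + ".gz")
            # Highest effort is affordable here: the content rarely changes,
            # so the CPU cost is paid once, off the request path.
            with open(path, "rb") as src, gzip.open(gz_path, "wb", compresslevel=9) as dst:
                shutil.copyfileobj(src, dst)
            count += 1
    return count
```

Dynamic pages get no such luxury: each unique response must be compressed inline, which is why the rest of the section treats compression effort as a run-time cost.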
Most server-side solutions allow a choice between several compression algorithms and/or effort levels. Generally speaking, algorithms and levels that compress better also run slower and consume more resources. For example, the popular Apache web server offers 9 compression setups, with generally increasing effort levels and decreasing output sizes. Figure 1a presents the typical bandwidth reduction achieved with all 9 compression levels on an Apache site; the average page size in this example is 120 kB [1].

C. Compression Location
Optimising data compression in the web server [26] is very promising, but at the same time challenging due to the great richness and flexibility of web architectures. Even the basic question of where in the system compression should be performed has no universal answer fitting all scenarios. Compression can be performed in one of several layers on the web server side. Figure 2 illustrates a typical architecture of a web application server, where each layer may be a candidate to perform compression: 1) the application server itself, 2) offloaded to a reverse proxy, or 3) offloaded to a central load balancer. At first glance all these options seem equivalent in performance and cost implications. However, additional considerations must be taken into account, such as the potential difficulty of replicating application servers due to software licensing costs, and the risk of running CPU-intensive tasks on central entities like the load balancer [22][23].
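Returning to the effort levels above, the size/CPU trade-off across the 9 setups can be observed directly with a small gzip sketch in Python. The synthetic payload is an assumption, so the exact sizes and timings will differ from the 120 kB Apache example in the text:

```python
import gzip
import time

def level_tradeoff(payload: bytes):
    """Compress the same payload at each gzip level 1-9 and report
    (level, compressed size in bytes, elapsed seconds). Higher levels
    shrink the output but cost more CPU, mirroring Apache's 9 setups."""
    results = []
    for level in range(1, 10):
        start = time.perf_counter()
        out = gzip.compress(payload, compresslevel=level)
        results.append((level, len(out), time.perf_counter() - start))
    return results

# Synthetic, highly repetitive page text (an illustrative stand-in):
page = b"<html>" + b"<p>example row of page text</p>" * 4000 + b"</html>"
for level, size, secs in level_tradeoff(page):
    print(level, size, round(secs, 4))
</n```

On repetitive HTML-like input, the compressed size is non-increasing with the level while the elapsed time grows, which is the shape of Figures 1a and 1c.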
International Journal of Scientific Research Engineering & Technology (IJSRET)

[Figure 1a/1c: bandwidth saving (%) and compression latency (msec) for a 120 kB page across compression levels]
Figure 1. Compression location alternatives in the web server side [7]

VIII. IMPLEMENTATION
Our main contribution to compression automation is software/infrastructure implementations that endow existing web servers with the capability to monitor and control the compression effort and utility. The main idea of the implemented compression-optimisation framework is to adapt the compression effort to quality parameters and the instantaneous load at the server [26]. This adaptation is carried out fast enough to accommodate rapid changes in demand, occurring on short time scales of seconds. In this section we provide the details of two alternative implementations and discuss their properties; both alternatives are found on the project's site [14] and are offered for free use and modification. The compression techniques were implemented as follows:
a. Web server compression at the server end
b. Web browser optimisation at the user end

IX. RESULT AND PERFORMANCE ANALYSIS
When compression is in use by the server, the TTFB (time to first byte) tends to get higher. This is because today's dynamic servers usually perform the following steps in a purely sequential manner: 1) page generation, 2) compression, and 3) transfer. Therefore, the larger the page and the higher the compression level, the larger the TTFB. On the other hand, compression obviously reduces the complete download time of a page dramatically. Altogether, although compression increases the TTFB, there is no doubt that this extra delay pays for itself when the complete download time is considered [12]. The open question is how high the compression level should be to reap download-time benefits without sacrificing latency performance. For example, Figure 1c shows the compression time of a 120 kB page, where the slowest level takes 3x more time than the fastest level, a typical scenario as we show in the sequel.
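The sequential generate/compress/transfer pipeline and its effect on TTFB can be modelled with a toy calculation. All numbers here are illustrative assumptions, not measurements from the study:

```python
def page_timings(page_kb, ratio, compress_s, bandwidth_kbps, gen_s=0.05):
    """Toy model of the sequential pipeline: 1) page generation,
    2) compression, 3) transfer. Returns (ttfb_s, total_s).
    `ratio` is compressed/original size; ratio=1.0 means no compression."""
    compressed_kb = page_kb * ratio
    ttfb = gen_s + compress_s                 # first byte leaves only after steps 1-2
    transfer = compressed_kb * 8 / bandwidth_kbps
    return ttfb, ttfb + transfer

# A hypothetical 120 kB page on a 2000 kbps link:
print(page_timings(120, 1.0, 0.0, 2000))    # no compression
print(page_timings(120, 0.25, 0.02, 2000))  # compression raises TTFB, cuts total time
```

Even in this crude model, the compression delay is recovered many times over in transfer time, which is the trade-off the section goes on to quantify.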
Figure 2. Compression latency grows at upper levels (x-axis: compression level 1-9)

It is seen that bandwidth saving improves at upper levels while service capacity shrinks at upper levels; compression latency also grows at upper levels.

Figure 3. Bandwidth saving at upper levels (x-axis: compression level 1-9)

A. Compression at Low Bandwidth Scenario
Figures 4 and 5 below show the distribution of the best compression rate over the congestion rate for Cases 1 (5%), 2 (10%) and 3 (15%) in Scenarios 1 (low bandwidth) and 2 (high bandwidth). The best compression rate is the one that yields the highest throughput, subject to the condition that its corresponding packet drop rate does not exceed the limit in each case. Notice that in both scenarios the best compression rate increases with the congestion rate. This means that a larger compression buffer, which can accommodate more packets, is favoured to obtain higher performance as the link becomes more and more congested. In Scenario 1, due to the packet drop rate constraint that limits the highest achievable throughput, the best-compression-rate line of Case 1 is slightly lower than those of Cases 2 and 3, while Cases 2 and 3 achieve similar results (overlapped lines). In Scenario 2, all three cases favour the same compression rates.
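The selection rule just described, highest throughput subject to a per-case drop-rate cap, can be sketched as follows. The candidate tuples are hypothetical measurements, not data read off the figures:

```python
def best_compression_rate(candidates, drop_limit_pct):
    """candidates: iterable of (compression_rate, throughput, drop_rate_pct)
    measured at one congestion rate. Return the compression rate with the
    highest throughput whose packet drop rate stays within the case limit
    (5%, 10% or 15% in Cases 1-3), or None if no rate qualifies."""
    feasible = [c for c in candidates if c[2] <= drop_limit_pct]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c[1])[0]

# Hypothetical measurements: (rate, throughput in packets/s, drop %).
samples = [(8, 900, 3.0), (16, 1100, 6.0), (32, 1250, 12.0)]
print(best_compression_rate(samples, 5))   # Case 1: only rate 8 is feasible
print(best_compression_rate(samples, 15))  # Case 3: rate 32 wins on throughput
```

A tighter drop-rate cap can force a lower best rate even when a higher rate would yield more throughput, which is exactly why the Case 1 line sits below the Case 2 and 3 lines in Scenario 1.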
Figure 4. Compression at low bandwidth scenario

As shown in the figures, the low bandwidth scenario requires higher compression rates (1-232) while the high bandwidth scenario requires lower compression rates (1-16). This shows that the proposed scheme performs better in the low bandwidth scenario than in the high bandwidth scenario. The high bandwidth scenario has sufficient bandwidth to accommodate heavy flows of traffic, so compression might not be needed, while in the low bandwidth case compression is mandatory, as the bandwidth limitation causes the communication link to be severely congested.

B. Compression at High Bandwidth Scenario
Figure 5. Compression at high bandwidth scenario

C. Packet Drop Rate at Low Bandwidth Scenario
Figures 6 and 7 show the distribution of the packet drop rate over the congestion rate. Notice that in both scenarios the packet drop rate without compression increases with the congestion rate: a communication link with a higher congestion value drops more packets. With the adoption of the proposed scheme, block compression reduces the heavy network load and hence avoids network congestion. The packet drop rate can thus be reduced significantly, as no packets are dropped due to buffer overflow at the router. The proposed scheme succeeds in reducing the packet drop rate by 1-92 percent in Scenario 1 and 1-83 percent in Scenario 2. In Scenario 1, due to the packet drop rate constraint of 5%, the packet drop rate line of Case 1 is slightly lower than the lines of Cases 2 and 3, which have limits of 10% and 15%. Since the best compression rates for Cases 1, 2 and 3 are similar, as illustrated in Figure 7, the corresponding packet drop rate values of these three cases are the same too.

Figure 6. Packet drop rate at low bandwidth scenario

D. Packet Drop Rate at High Bandwidth Scenario
Figure 7. Packet drop rate at high bandwidth scenario

E.
Throughput at High Bandwidth Scenario
Figures 8 and 9 below show the distribution of packet throughput over the congestion rate for the simulation running without the proposed scheme and with the proposed scheme (Cases 1, 2 and 3) in Scenarios 1 and 2. The throughput of the simulation without compression decreases with the congestion rate in both scenarios: the more congested the link, the more packets are dropped due to buffer overflow at the router, and hence the lower the throughput. As shown in Figures 8 and 9, the proposed scheme succeeds in improving the throughput significantly in Scenario 1 and by 5-65 percent in Scenario 2. This is because block compression allows more packets to be transmitted over the communication link at one time, so the throughput can be greatly improved. Notice that the improvement in packet throughput in Scenario 1 is better than in Scenario 2. This again suggests that the proposed scheme performs much better in a low bandwidth scenario than in a high bandwidth scenario: compression might not be necessary in the high bandwidth scenario, as there is no bandwidth limitation problem and sufficient bandwidth is provided to accommodate heavy
flows. In contrast, in a low bandwidth scenario applications compete for the low and limited bandwidth when there are heavy flows; thus compression is required to further improve network performance.

Figure 8. Throughput at low bandwidth scenario
Figure 9. Throughput at high bandwidth scenario

F. Improvement at Low and High Bandwidth Scenarios
To evaluate the performance and effectiveness of the proposed scheme, extensive simulations have been conducted using captured TCP traffic. The proposed scheme was evaluated under two main scenarios: low bandwidth and high bandwidth. Simulation results show that the proposed scheme succeeds in reducing the packet drop rate and improving the packet throughput significantly in both scenarios, as shown in Table 1.

Table 1. Improvement in low and high bandwidth scenarios

H. The Effects of Compression
After employing compression at the server end and optimisation at the user end, the comparative bandwidth saving achieved using compression was tabulated for a range of popular web pages in Chanakya. The download times assume an average bandwidth of 20 kbps.

Table 2. Comparison of compression of various web pages
Web page | Size without compression (kB) | Size with compression (kB) | Download time without compression (s) | Download time with compression (s) | Relative saving (%)
Web server | | | | |
Database web | | | | |
Online news | | | | |
Training centre | | | | |
Search results | | | | |

X. CONCLUSIONS AND FUTURE WORK
Bandwidth management is a serious and growing challenge for almost all organisations in the present world of information technology. Lack of appropriate bandwidth management prevents useful Intranet access, which in turn yields low quality of organisational and office work. Better management of bandwidth widens Intranet access, especially for those who actually need it. One option is to procure additional hardware and software, which involves an additional financial burden.
It is a very complex and time-consuming solution because of its expense: it needs approvals from various authorities in a hierarchical manner within the organisation, and also at government level, and the resulting bureaucratic delay rules out an immediate solution. So it was decided to pursue the second option in parallel, i.e. compression-based bandwidth management, for immediate relief to genuine users and mission-critical applications. Since compression techniques are simple and freely available, it was decided that compression would be utilised for bandwidth saving. It was employed at the server end, at the main server farm, and also at the user end for web browser optimisation. By applying compression we achieved a bandwidth saving of around 21%, which enabled access to dynamic web pages of the database server at field units and area offices. We used the gzip open-source software for compression; other
various freeware packages can also be utilised. Other techniques that can be explored as future work include QoS techniques, load balancing and caching for better and more effective bandwidth optimisation.

REFERENCES
[1] S. Pierzchala. Compressing Web Content with mod_gzip and mod_deflate. LINUX Journal, April.
[2] I. Pu. Fundamental Data Compression. Butterworth-Heinemann, Burlington, Massachusetts, 2006.
[3] Dualwan. Bandwidth Management and Traffic Optimization, 2009. Accessed 12 Dec.
[4] Optimising Intranet Bandwidth in Developing Country Higher Education. International Network for the Availability of Scientific Publications, Oxford, 2006. Accessed 10 Dec.
[5] The Need for Bandwidth Management: Taking Control of your Intranet Connection, 2010. Accessed 10 Dec.
[6] Decentralized Network Management at UW. Report from the Ad-hoc Committee on Network Management, Organisation of Waterloo, 2004. Accessed 12 Dec.
[7] S. M. Ocampo. ICT in Philippine Higher Education and Training. 15th SEAMEO-RIHED Governing Board Meeting and back-to-back Seminar on ICT in Organisation Teaching/Learning and Research in Southeast Asian Countries, Millennium Sirih Hotel, Jakarta, Indonesia, August 23-24, 2007.
[8] D. Rosenberg. Digital Libraries. International Network for the Availability of Scientific Publications (INASP), 2005. Accessed 13 Dec.
[9] R. Flickenger (ed.), M. Belcher, E. Canessa, M. Zennaro. How to Accelerate Your Intranet. INASP/ICTP, BMO Book Sprint Team, first edition, October 2006.
[10] B. McGuigan. "What is bandwidth?", 2010. Accessed 10 Dec.
[11] K. Gray, C. Thompson, R. Clerehan, et al. Web 2.0 Authorship: Issues of Referencing and Citation for Academic Integrity. The Intranet and Higher Education, 11, 2008.
[12] J. Graham-Cumming.
Stop Worrying about Time to First Byte (TTFB).
[13] Chanakya Virtual Organization Annual Technical Journal and IT Policy.
[14] Chanakya Technical Networks, Components and Infrastructure Guide, 2014.
[15] G. Neisser. Bandwidth Management Advisory Service (BMAS), BMAS Good Practice Guide, ver. 1, part 1.
[16] Netcraft Web Server Survey, November.
[17] Microsoft IIS - Dynamic Caching and Compression, overview/reliability.
[18] A. Ranjan, R. Kumar, and J. Dhar. A Comparative Study between Dynamic Web Scripting Languages. In Proc. of ICDEM.
[19] J. Mogul, B. Krishnamurthy, F. Douglis, A. Feldmann, Y. Goland, A. van Hoff, and D. Hellerstein. Delta Encoding in HTTP. IETF RFC.
[20] J. C. Mogul, F. Douglis, A. Feldmann, and B. Krishnamurthy. Potential Benefits of Delta-encoding and Data Compression for HTTP. In Proc. of the ACM SIGCOMM Conference.
[21] V. Jacobson. Compressing TCP/IP Headers for Low-Speed Serial Links. RFC 1144, 1990.
[22] JCP-Consult. JCP-C RoHC Header Compression Protocol Stack, pp. 1-9, Cesson-Sevigne, France, 2008.
[23] S. Chen, S. Ranjan, and A. Nucci. IPzip: A Stream-aware IP Compression Algorithm. In Proc. of the IEEE Data Compression Conference, Snowbird, Utah, USA, March 25-27.
[24] P. Deutsch. GZIP File Format Specification Version 4.3. RFC 1952, May.
[25] D. Harnik, R. Kat, D. Sotnikov, A. Traeger, and O. Margalit. To Zip or Not to Zip: Effective Resource Usage for Real-Time Compression. In Proc. of FAST.
[26] R. Kothiyal, V. Tarasov, P. Sehgal, and E. Zadok. Energy and Performance Evaluation of Lossless File Data Compression on Server Systems. In Proc. of SYSTOR.
[27] E. Zohar and Y. Cassuto. Elastic Compression, project home page.

Author Profile
Ashok Tripathi received his B.E.
degree in Electronics Engineering from SGGS College of Engineering & Technology, Nanded, in 1996, an MMS in Military Science and Technology from Berhampur University, Odisha, in 2005, and a PG Diploma in VLSI & Embedded Systems from C-DAC, Pune, in 2012. He is presently pursuing an M.Tech in Embedded System Technology at Jagannath University, Jaipur, India. His areas of interest are embedded systems, robotics, radar technology, web technology, computer networks and security, microprocessor- and microcontroller-based systems, wireless and mobile computing, and database and management information systems.

Assoc. Prof. Ramesh Bharti received his M.Tech degree in Electronics & Communication Engineering from Malviya Institute of Technology, Jaipur, India, in 2010 and his B.Tech from SKIT&G, Jaipur, in 2004. He is pursuing his Ph.D. at Jagannath University and has 11 years of teaching experience in the same field. Presently he is working as Associate Professor in the Department of Electronics and Communication Engineering, Jagannath University, Jaipur.
Influence of Load Balancing on Quality of Real Time Data Transmission*
SERBIAN JOURNAL OF ELECTRICAL ENGINEERING Vol. 6, No. 3, December 2009, 515-524 UDK: 004.738.2 Influence of Load Balancing on Quality of Real Time Data Transmission* Nataša Maksić 1,a, Petar Knežević 2,
CHAPTER 6. VOICE COMMUNICATION OVER HYBRID MANETs
CHAPTER 6 VOICE COMMUNICATION OVER HYBRID MANETs Multimedia real-time session services such as voice and videoconferencing with Quality of Service support is challenging task on Mobile Ad hoc Network (MANETs).
FatPipe Networks Network optimisation and link redundancy for satellite communications
FatPipe Networks Network optimisation and link redundancy for satellite communications Next generation WAN acceleration Version 01 rev 05 Gregory Fedyk Regional Director UK/IRE, France and Benelux Isik
Performance Evaluation of VoIP Services using Different CODECs over a UMTS Network
Performance Evaluation of VoIP Services using Different CODECs over a UMTS Network Jianguo Cao School of Electrical and Computer Engineering RMIT University Melbourne, VIC 3000 Australia Email: [email protected]
Internet Content Distribution
Internet Content Distribution Chapter 2: Server-Side Techniques (TUD Student Use Only) Chapter Outline Server-side techniques for content distribution Goals Mirrors Server farms Surrogates DNS load balancing
Architecture of distributed network processors: specifics of application in information security systems
Architecture of distributed network processors: specifics of application in information security systems V.Zaborovsky, Politechnical University, Sait-Petersburg, Russia [email protected] 1. Introduction Modern
Voice Over IP Performance Assurance
Voice Over IP Performance Assurance Transforming the WAN into a voice-friendly using Exinda WAN OP 2.0 Integrated Performance Assurance Platform Document version 2.0 Voice over IP Performance Assurance
Optimizing Data Center Networks for Cloud Computing
PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,
Performance Analysis of IPv4 v/s IPv6 in Virtual Environment Using UBUNTU
Performance Analysis of IPv4 v/s IPv6 in Virtual Environment Using UBUNTU Savita Shiwani Computer Science,Gyan Vihar University, Rajasthan, India G.N. Purohit AIM & ACT, Banasthali University, Banasthali,
ISP Checklist. Considerations for communities engaging Internet Services from an Internet Service Provider. June 21 2005. Page 1
ISP Checklist Considerations for communities engaging Internet Services from an Internet Service Provider. June 21 2005 Page 1 Table of Contents Terminology...3 1. What you need to know about working with
MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM?
MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM? Ashutosh Shinde Performance Architect [email protected] Validating if the workload generated by the load generating tools is applied
Per-Flow Queuing Allot's Approach to Bandwidth Management
White Paper Per-Flow Queuing Allot's Approach to Bandwidth Management Allot Communications, July 2006. All Rights Reserved. Table of Contents Executive Overview... 3 Understanding TCP/IP... 4 What is Bandwidth
Improving Effective WAN Throughput for Large Data Flows By Peter Sevcik and Rebecca Wetzel November 2008
Improving Effective WAN Throughput for Large Data Flows By Peter Sevcik and Rebecca Wetzel November 2008 When you buy a broadband Wide Area Network (WAN) you want to put the entire bandwidth capacity to
The Critical Role of an Application Delivery Controller
The Critical Role of an Application Delivery Controller Friday, October 30, 2009 Introduction In any economic environment a company s senior management expects that their IT organization will continually
LARGE-SCALE INTERNET MEASUREMENTS FOR DIAGNOSTICS AND PUBLIC POLICY. Henning Schulzrinne (+ Walter Johnston & James Miller) FCC & Columbia University
1 LARGE-SCALE INTERNET MEASUREMENTS FOR DIAGNOSTICS AND PUBLIC POLICY Henning Schulzrinne (+ Walter Johnston & James Miller) FCC & Columbia University 2 Overview Quick overview What does MBA measure? Can
1 Analysis of HTTPè1.1 Performance on a Wireless Network Stephen Cheng Kevin Lai Mary Baker fstephenc, laik, [email protected] http:èèmosquitonet.stanford.edu Techical Report: CSL-TR-99-778 February
Using email over FleetBroadband
Using email over FleetBroadband Version 01 20 October 2007 inmarsat.com/fleetbroadband Whilst the information has been prepared by Inmarsat in good faith, and all reasonable efforts have been made to ensure
Performance Evaluation of AODV, OLSR Routing Protocol in VOIP Over Ad Hoc
(International Journal of Computer Science & Management Studies) Vol. 17, Issue 01 Performance Evaluation of AODV, OLSR Routing Protocol in VOIP Over Ad Hoc Dr. Khalid Hamid Bilal Khartoum, Sudan [email protected]
Analysis of Effect of Handoff on Audio Streaming in VOIP Networks
Beyond Limits... Volume: 2 Issue: 1 International Journal Of Advance Innovations, Thoughts & Ideas Analysis of Effect of Handoff on Audio Streaming in VOIP Networks Shivani Koul* [email protected]
Test Methodology White Paper. Author: SamKnows Limited
Test Methodology White Paper Author: SamKnows Limited Contents 1 INTRODUCTION 3 2 THE ARCHITECTURE 4 2.1 Whiteboxes 4 2.2 Firmware Integration 4 2.3 Deployment 4 2.4 Operation 5 2.5 Communications 5 2.6
Cisco WAAS Express. Product Overview. Cisco WAAS Express Benefits. The Cisco WAAS Express Advantage
Data Sheet Cisco WAAS Express Product Overview Organizations today face several unique WAN challenges: the need to provide employees with constant access to centrally located information at the corporate
Study of Network Performance Monitoring Tools-SNMP
310 Study of Network Performance Monitoring Tools-SNMP Mr. G.S. Nagaraja, Ranjana R.Chittal, Kamod Kumar Summary Computer networks have influenced the software industry by providing enormous resources
Performance Analysis of AQM Schemes in Wired and Wireless Networks based on TCP flow
International Journal of Soft Computing and Engineering (IJSCE) Performance Analysis of AQM Schemes in Wired and Wireless Networks based on TCP flow Abdullah Al Masud, Hossain Md. Shamim, Amina Akhter
EXPERIMENTAL STUDY FOR QUALITY OF SERVICE IN VOICE OVER IP
Scientific Bulletin of the Electrical Engineering Faculty Year 11 No. 2 (16) ISSN 1843-6188 EXPERIMENTAL STUDY FOR QUALITY OF SERVICE IN VOICE OVER IP Emil DIACONU 1, Gabriel PREDUŞCĂ 2, Denisa CÎRCIUMĂRESCU
Mobile Performance Testing Approaches and Challenges
NOUS INFOSYSTEMS LEVERAGING INTELLECT Mobile Performance Testing Approaches and Challenges ABSTRACT Mobile devices are playing a key role in daily business functions as mobile devices are adopted by most
Data Sheet. VLD 500 A Series Viaedge Load Director. VLD 500 A Series: VIAEDGE Load Director
Data Sheet VLD 500 A Series Viaedge Load Director VLD 500 A Series: VIAEDGE Load Director VLD : VIAEDGE Load Director Key Advantages: Server Load Balancing for TCP/UDP based protocols. Server load balancing
WhitePaper: XipLink Real-Time Optimizations
WhitePaper: XipLink Real-Time Optimizations XipLink Real Time Optimizations Header Compression, Packet Coalescing and Packet Prioritization Overview XipLink Real Time ( XRT ) is a new optimization capability
Scaling out a SharePoint Farm and Configuring Network Load Balancing on the Web Servers. Steve Smith Combined Knowledge MVP SharePoint Server
Scaling out a SharePoint Farm and Configuring Network Load Balancing on the Web Servers Steve Smith Combined Knowledge MVP SharePoint Server Scaling out a SharePoint Farm and Configuring Network Load Balancing
High Performance Cluster Support for NLB on Window
High Performance Cluster Support for NLB on Window [1]Arvind Rathi, [2] Kirti, [3] Neelam [1]M.Tech Student, Department of CSE, GITM, Gurgaon Haryana (India) [email protected] [2]Asst. Professor,
G.Vijaya kumar et al, Int. J. Comp. Tech. Appl., Vol 2 (5), 1413-1418
An Analytical Model to evaluate the Approaches of Mobility Management 1 G.Vijaya Kumar, *2 A.Lakshman Rao *1 M.Tech (CSE Student), Pragati Engineering College, Kakinada, India. [email protected]
Technical Brief. DualNet with Teaming Advanced Networking. October 2006 TB-02499-001_v02
Technical Brief DualNet with Teaming Advanced Networking October 2006 TB-02499-001_v02 Table of Contents DualNet with Teaming...3 What Is DualNet?...3 Teaming...5 TCP/IP Acceleration...7 Home Gateway...9
Applications. Network Application Performance Analysis. Laboratory. Objective. Overview
Laboratory 12 Applications Network Application Performance Analysis Objective The objective of this lab is to analyze the performance of an Internet application protocol and its relation to the underlying
AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK
Abstract AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK Mrs. Amandeep Kaur, Assistant Professor, Department of Computer Application, Apeejay Institute of Management, Ramamandi, Jalandhar-144001, Punjab,
Improving Quality of Service
Improving Quality of Service Using Dell PowerConnect 6024/6024F Switches Quality of service (QoS) mechanisms classify and prioritize network traffic to improve throughput. This article explains the basic
ANALYSIS OF LONG DISTANCE 3-WAY CONFERENCE CALLING WITH VOIP
ENSC 427: Communication Networks ANALYSIS OF LONG DISTANCE 3-WAY CONFERENCE CALLING WITH VOIP Spring 2010 Final Project Group #6: Gurpal Singh Sandhu Sasan Naderi Claret Ramos ([email protected]) ([email protected])
Integration Guide. EMC Data Domain and Silver Peak VXOA 4.4.10 Integration Guide
Integration Guide EMC Data Domain and Silver Peak VXOA 4.4.10 Integration Guide August 2013 Copyright 2013 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate
STeP-IN SUMMIT 2014. June 2014 at Bangalore, Hyderabad, Pune - INDIA. Mobile Performance Testing
STeP-IN SUMMIT 2014 11 th International Conference on Software Testing June 2014 at Bangalore, Hyderabad, Pune - INDIA Mobile Performance Testing by Sahadevaiah Kola, Senior Test Lead and Sachin Goyal
A Study of Network Security Systems
A Study of Network Security Systems Ramy K. Khalil, Fayez W. Zaki, Mohamed M. Ashour, Mohamed A. Mohamed Department of Communication and Electronics Mansoura University El Gomhorya Street, Mansora,Dakahlya
Chapter 5. Data Communication And Internet Technology
Chapter 5 Data Communication And Internet Technology Purpose Understand the fundamental networking concepts Agenda Network Concepts Communication Protocol TCP/IP-OSI Architecture Network Types LAN WAN
Measure wireless network performance using testing tool iperf
Measure wireless network performance using testing tool iperf By Lisa Phifer, SearchNetworking.com Many companies are upgrading their wireless networks to 802.11n for better throughput, reach, and reliability,
VoIP QoS on low speed links
Ivana Pezelj Croatian Academic and Research Network - CARNet J. Marohni a bb 0 Zagreb, Croatia [email protected] QoS on low speed links Julije Ožegovi Faculty of Electrical Engineering, Mechanical
AKAMAI WHITE PAPER. Delivering Dynamic Web Content in Cloud Computing Applications: HTTP resource download performance modelling
AKAMAI WHITE PAPER Delivering Dynamic Web Content in Cloud Computing Applications: HTTP resource download performance modelling Delivering Dynamic Web Content in Cloud Computing Applications 1 Overview
Quality of Service (QoS) on Netgear switches
Quality of Service (QoS) on Netgear switches Section 1 Principles and Practice of QoS on IP networks Introduction to QoS Why? In a typical modern IT environment, a wide variety of devices are connected
An Active Packet can be classified as
Mobile Agents for Active Network Management By Rumeel Kazi and Patricia Morreale Stevens Institute of Technology Contact: rkazi,[email protected] Abstract-Traditionally, network management systems
VIDEOCONFERENCING. Video class
VIDEOCONFERENCING Video class Introduction What is videoconferencing? Real time voice and video communications among multiple participants The past Channelized, Expensive H.320 suite and earlier schemes
AERONAUTICAL COMMUNICATIONS PANEL (ACP) ATN and IP
AERONAUTICAL COMMUNICATIONS PANEL (ACP) Working Group I - 7 th Meeting Móntreal, Canada 2 6 June 2008 Agenda Item x : ATN and IP Information Paper Presented by Naoki Kanada Electronic Navigation Research
Configure A VoIP Network
Configure A VoIP Network Prof. Mr. Altaf. I. Darvadiya Electronics & Communication C.U.Shah College of Engg. & Tech. Wadhwan(363030), India e-mail: [email protected] Ms. Zarna M. Gohil Electronics & Communication
Content Inspection Director
Content Inspection Director High Speed Content Inspection North America Radware Inc. 575 Corporate Dr. Suite 205 Mahwah, NJ 07430 Tel 888 234 5763 International Radware Ltd. 22 Raoul Wallenberg St. Tel
Protocols. Packets. What's in an IP packet
Protocols Precise rules that govern communication between two parties TCP/IP: the basic Internet protocols IP: Internet Protocol (bottom level) all packets shipped from network to network as IP packets
Using TrueSpeed VNF to Test TCP Throughput in a Call Center Environment
Using TrueSpeed VNF to Test TCP Throughput in a Call Center Environment TrueSpeed VNF provides network operators and enterprise users with repeatable, standards-based testing to resolve complaints about
The Data Replication Bottleneck: Overcoming Out of Order and Lost Packets across the WAN
The Data Replication Bottleneck: Overcoming Out of Order and Lost Packets across the WAN By Jim Metzler, Cofounder, Webtorials Editorial/Analyst Division Background and Goal Many papers have been written
SBSCET, Firozpur (Punjab), India
Volume 3, Issue 9, September 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Layer Based
VOICE OVER IP AND NETWORK CONVERGENCE
POZNAN UNIVE RSITY OF TE CHNOLOGY ACADE MIC JOURNALS No 80 Electrical Engineering 2014 Assaid O. SHAROUN* VOICE OVER IP AND NETWORK CONVERGENCE As the IP network was primarily designed to carry data, it
R2. The word protocol is often used to describe diplomatic relations. How does Wikipedia describe diplomatic protocol?
Chapter 1 Review Questions R1. What is the difference between a host and an end system? List several different types of end systems. Is a Web server an end system? 1. There is no difference. Throughout
Quality of Service Testing in the VoIP Environment
Whitepaper Quality of Service Testing in the VoIP Environment Carrying voice traffic over the Internet rather than the traditional public telephone network has revolutionized communications. Initially,
A Tool for Evaluation and Optimization of Web Application Performance
A Tool for Evaluation and Optimization of Web Application Performance Tomáš Černý 1 [email protected] Michael J. Donahoo 2 [email protected] Abstract: One of the main goals of web application
WAN Performance Analysis A Study on the Impact of Windows 7
A Talari Networks White Paper WAN Performance Analysis A Study on the Impact of Windows 7 Test results demonstrating WAN performance changes due to upgrading to Windows 7 and the network architecture and
IJMIE Volume 2, Issue 7 ISSN: 2249-0558
Evaluating Performance of Audio conferencing on Reactive Routing Protocols for MANET Alak Kumar Sarkar* Md. Ibrahim Abdullah* Md. Shamim Hossain* Ahsan-ul-Ambia* Abstract Mobile ad hoc network (MANET)
OPTIMIZED CONSUMPTION AND ACCESS OF REMOTE DISPLAY ON MOBILE DEVICE ENVIRONMENT
IMPACT: International Journal of Research in Engineering & Technology (IMPACT: IJRET) ISSN(E): 2321-8843; ISSN(P): 2347-4599 Vol. 2, Issue 2, Feb 2014, 167-174 Impact Journals OPTIMIZED CONSUMPTION AND
Technical papers Virtual private networks
Technical papers Virtual private networks This document has now been archived Virtual private networks Contents Introduction What is a VPN? What does the term virtual private network really mean? What
Introduction Chapter 1. Uses of Computer Networks
Introduction Chapter 1 Uses of Computer Networks Network Hardware Network Software Reference Models Example Networks Network Standardization Metric Units Revised: August 2011 Uses of Computer Networks
PQoS Parameterized Quality of Service. White Paper
P Parameterized Quality of Service White Paper Abstract The essential promise of MoCA no new wires, no service calls and no interference with other networks or consumer electronic devices remains intact
TCP/IP Jumbo Frames Network Performance Evaluation on A Testbed Infrastructure
I.J. Wireless and Microwave Technologies, 2012, 6, 29-36 Published Online December 2012 in MECS (http://www.mecs-press.net) DOI: 10.5815/ijwmt.2012.06.05 Available online at http://www.mecs-press.net/ijwmt
CROSS LAYER BASED MULTIPATH ROUTING FOR LOAD BALANCING
CHAPTER 6 CROSS LAYER BASED MULTIPATH ROUTING FOR LOAD BALANCING 6.1 INTRODUCTION The technical challenges in WMNs are load balancing, optimal routing, fairness, network auto-configuration and mobility
MICROSOFT. Remote Desktop Protocol Performance Improvements in Windows Server 2008 R2 and Windows 7
MICROSOFT Remote Desktop Protocol Performance Improvements in Windows Server 2008 R2 and Windows 7 Microsoft Corporation January 2010 Copyright This document is provided as-is. Information and views expressed
High-Performance IP Service Node with Layer 4 to 7 Packet Processing Features
UDC 621.395.31:681.3 High-Performance IP Service Node with Layer 4 to 7 Packet Processing Features VTsuneo Katsuyama VAkira Hakata VMasafumi Katoh VAkira Takeyama (Manuscript received February 27, 2001)
A Performance Analysis of Gateway-to-Gateway VPN on the Linux Platform
A Performance Analysis of Gateway-to-Gateway VPN on the Linux Platform Peter Dulany, Chang Soo Kim, and James T. Yu [email protected], [email protected], [email protected] School of Computer Science,
A Passive Method for Estimating End-to-End TCP Packet Loss
A Passive Method for Estimating End-to-End TCP Packet Loss Peter Benko and Andras Veres Traffic Analysis and Network Performance Laboratory, Ericsson Research, Budapest, Hungary {Peter.Benko, Andras.Veres}@eth.ericsson.se
On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on Hulu
On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on Hulu Dilip Kumar Krishnappa, Samamon Khemmarat, Lixin Gao, Michael Zink University of Massachusetts Amherst,
Remote Desktop Protocol Performance
MICROSOFT Remote Desktop Protocol Performance Presentation and Hosted Desktop Virtualization Team 10/13/2008 Contents Overview... 3 User Scenarios... 3 Test Setup... 4 Remote Desktop Connection Settings...
Smart Queue Scheduling for QoS Spring 2001 Final Report
ENSC 833-3: NETWORK PROTOCOLS AND PERFORMANCE CMPT 885-3: SPECIAL TOPICS: HIGH-PERFORMANCE NETWORKS Smart Queue Scheduling for QoS Spring 2001 Final Report By Haijing Fang([email protected]) & Liu Tang([email protected])
Sockets vs. RDMA Interface over 10-Gigabit Networks: An In-depth Analysis of the Memory Traffic Bottleneck
Sockets vs. RDMA Interface over 1-Gigabit Networks: An In-depth Analysis of the Memory Traffic Bottleneck Pavan Balaji Hemal V. Shah D. K. Panda Network Based Computing Lab Computer Science and Engineering
VPN over Satellite A comparison of approaches by Richard McKinney and Russell Lambert
Sales & Engineering 3500 Virginia Beach Blvd Virginia Beach, VA 23452 800.853.0434 Ground Operations 1520 S. Arlington Road Akron, OH 44306 800.268.8653 VPN over Satellite A comparison of approaches by
The Application Delivery Controller Understanding Next-Generation Load Balancing Appliances
White Paper Overview To accelerate response times for end users and provide a high performance, highly secure and scalable foundation for Web applications and rich internet content, application networking
ALCATEL CRC Antwerpen Fr. Wellesplein 1 B-2018 Antwerpen +32/3/240.8550; [email protected] +32/3/240.7830; Guy.Reyniers@alcatel.
Contact: ALCATEL CRC Antwerpen Fr. Wellesplein 1 B-2018 Antwerpen +32/3/240.8550; [email protected] +32/3/240.7830; [email protected] Voice over (Vo) was developed at some universities to diminish
Quality of Service versus Fairness. Inelastic Applications. QoS Analogy: Surface Mail. How to Provide QoS?
18-345: Introduction to Telecommunication Networks Lectures 20: Quality of Service Peter Steenkiste Spring 2015 www.cs.cmu.edu/~prs/nets-ece Overview What is QoS? Queuing discipline and scheduling Traffic
Load Distribution in Large Scale Network Monitoring Infrastructures
Load Distribution in Large Scale Network Monitoring Infrastructures Josep Sanjuàs-Cuxart, Pere Barlet-Ros, Gianluca Iannaccone, and Josep Solé-Pareta Universitat Politècnica de Catalunya (UPC) {jsanjuas,pbarlet,pareta}@ac.upc.edu
