FINAL REPORT ON NETWORK RESOURCE PROVISIONING


Document Filename: BGII-DSA2-9-v0_1-
Activity: SA2
Partner(s): KTH, EENet, VU, RTU, UIIP NASB
Lead Partner: IMCS UL
Document classification: PUBLIC

Abstract: This document gives an overview of network resource provisioning in the Baltic countries and Belarus during the BalticGrid-II project. The deliverable focuses on issues identified by the SA2 activity, on TCP test measurements and the methodology used to fine-tune TCP stack parameters on BG-II clusters, and provides a data transfer speed comparison before and after the tuning. Network performance monitoring carried out by the SA2 team and the data collected during the project are also described and analysed in more detail.

Document review and moderation
Released for moderation to:
Approved for delivery by:
Name / Partner / Date / Signature

Document Log
Version  Date        Summary of changes                      Author
         /02/2010    Plan and structure of the deliverable   Katrina Sataki
0.5      9/03/2010   First draft                             Katrina Sataki
0.6      1/04/2010   Latest monitoring data added            Martins Libins
         /04/2010    Latest TCP test results added           Edgars Znots
         /04/2010    Finalised version                       Katrina Sataki, Baiba Kaskina, Guntis Barzdins
         /04/2010    Reviewed version                        Marcin Radecki
         /04/2010    Final version                           Katrina Sataki

Contents

1. INTRODUCTION
   1.1. PURPOSE
   1.2. APPLICATION AREA
   1.3. REFERENCES
   1.4. TERMINOLOGY
2. NETWORK RESOURCE IN BALTICGRID-II
   2.1. OVERVIEW
   2.2. ORGANISATIONAL STRUCTURE
        2.2.1. Operational principles of CNCC
        2.2.2. Operational principles of NNCCs
   2.3. SERVICE LEVEL AGREEMENTS
3. NETWORK MONITORING
   3.1. MONITORING PORTAL
   3.2. EXPANSION OF THE BALTICGRID NETWORK
   3.3. MONITORING DATA
   3.4. CASE STUDY
4. TCP PERFORMANCE TESTS
   4.1. MEASUREMENT LABORATORY
   4.2. COLLECTED DATA
5. HIGHSPEED TCP AND SCALABLE TCP PERFORMANCE TESTS IN LABORATORY ENVIRONMENT
   5.1. Test laboratory: high-performance network simulator
   5.2. HighSpeed TCP
   5.3. Scalable TCP
   5.4. Test data comparison and analysis
6. TCP PERFORMANCE TUNING IN THE BALTICGRID-II NETWORK
   6.1. TCP TUNING IN LABORATORY ENVIRONMENT
        6.1.1. Baseline measurements
        6.1.2. Performance of ScientificLinux in comparison with baseline measurements
        6.1.3. TCP tuning results on production sites
   6.2. TEST DATA COMPARISON AND ANALYSIS
   6.3. TCP THROUGHPUT TESTS BETWEEN SITES
7. CONCLUSION

1. INTRODUCTION

1.1. PURPOSE
The purpose of this document is to give an overview of the network provisioning and the work done by the SA2 activity during the BalticGrid-II project. The document contains all essential information concerning the network connectivity, network performance and traffic monitoring activities of the project.

1.2. APPLICATION AREA
This document is intended as a summary of the work carried out by the SA2 activity during the BalticGrid-II project. It outlines the organisational model of cooperation between the network coordination centres in the partnering countries and gives an overview of the methodology used to measure network throughput and to test various TCP stack parameters in order to fine-tune the performance of the BalticGrid-II sites. An analysis of the network monitoring data gathered during the project is also provided.

1.3. REFERENCES
[1] GÉANT Public Portal. Information about the current project GN3 and the previous project GN2.
[2] h_high-performance_gridpp_feb04.ppt. High Performance Networking for ALL, presentation by Robin Tasker, DataTAG Project, RIPE-47, Amsterdam, 29 January.

1.4. TERMINOLOGY
ACRONYM   EXPLANATION
AIRT      Application for Incident Response Teams (by SurfNET)
BDP       Bandwidth Delay Product
BIC       Binary Increase Congestion control
CCA       Congestion Control Algorithm
CMS       Compact Muon Solenoid Experiment
CNCC      Central Network Coordination Centre
Gbps      Gigabits per second
GÉANT     European Academic Network
LAN       Local Area Network
LBE       Less than Best Effort
LFC       Logical File Catalogue

LHC       Large Hadron Collider
LTS       Long Term Support
Mbps      Megabits per second
NNCC      National Network Coordination Centre
NREN      National Research and Education Network
QoS       Quality of Service
Reno      TCP network congestion avoidance algorithm
RTT       Round-trip Time
SA2       Network Provisioning activity of the BalticGrid-II project
SE        Storage Element
SLA       Service Level Agreement
SRM       Storage Resource Management
TCP       Transmission Control Protocol
WN        Worker Node
YAIM      YAIM Ain't an Installation Manager

2. NETWORK RESOURCE IN BALTICGRID-II

2.1. OVERVIEW
The overall strategy and objective of the SA2 activity of the BalticGrid-II project was to ensure reliable network connectivity for the Grid infrastructure in the Baltic countries and Belarus. During the first phase of the project, BalticGrid, it was identified that the transfer of large data volumes is the bottleneck of the network resources available to Grid centres in the partnering countries. The main tasks of the SA2 activity were therefore to:
1) coordinate network services in all partnering countries, monitor network performance and its adherence to the Service Level Agreements (SLA) concluded between BalticGrid-II and the NRENs;
2) ensure optimal network resources for applications that require the transfer of large data volumes;
3) provide efficient handling of security incidents in close cooperation with SA1;
4) coordinate the work of SA2 with other projects.
In order to implement the objectives of the activity, a sustainable organisational model that would continue the collaboration between national Grid initiatives after the end of the project was proposed and introduced in all the partnering countries. The Central Network Coordination Centre (CNCC) was established to work as a multi-domain umbrella for different entities and to initiate collaboration between the National Network Coordination Centres (NNCC) in the partnering countries. The CNCC also coordinated the transfer of the knowledge accumulated during the BalticGrid project to the partners in Belarus by helping them to establish reliable network connectivity and by monitoring their network interfaces and clusters. During the project the CNCC monitored the network and its adherence to the signed SLAs (established already during the BalticGrid project and updated for the BalticGrid-II project) and advocated the interests of the project and of Grid users with the National Research and Education Networks providing the network infrastructure.
To increase Grid usability for applications that require the transfer of large data volumes, the main emphasis was on performing network throughput tests, proposing solutions for network enhancement, and implementing and testing the HighSpeed TCP and Scalable TCP modifications of the standard TCP implementation, which can dramatically improve TCP performance in high-speed wide area networks. During the project an alternative solution was proposed and TCP stack tuning parameters were introduced. The results of the TCP tuning were compared with those of the HighSpeed TCP and Scalable TCP tests.
To increase the level of trust in Grid technologies and their usage, SA2 carried out an extended risk analysis for the BalticGrid-II network. Based on this analysis, a Grid Acceptable Use Policy and a Security Policy were developed and implemented. Finally, an Incident Handling and Response Policy was implemented and security incident handling was carried out by the BalticGrid Incident Response Teams (BG-IRT) within each NNCC. The SA2 activity thus acted as the network-based second line of defence (after SA1, which concentrated on resource centre management), solving incidents arising on the network and in this way increasing users' confidence.

2.2. ORGANISATIONAL STRUCTURE
The Central Network Coordination Centre, established by IMCS UL as a coordinating body for all NNCCs, had the following responsibilities:

- Since the CNCC is a coordinating body, its main responsibility was to supervise, coordinate and lead all networking activities of the project. The main objective was to focus on the procedures of mutual collaboration between NNCCs in a way that ensures the sustainability of effective communication and problem solving even after the end of the project.
- One of the major tasks was to provide, test and implement all necessary tools to monitor the network and storage elements 24x7 and to elaborate procedures to identify and prevent network disruptions or other problems, as well as to give timely warnings about possible bottlenecks. This task also required study and research of observed network problems.
- The CNCC was also the body that represented SA2 in the work with other activities within the project, and it coordinated the work of SA2 with other projects and teams outside the consortium.
- In case of observed network problems, possible security incidents or other events, it was the responsibility of the CNCC to notify all other NNCCs and to oversee the measures taken by NNCC staff to prevent possible network failures.
- The CNCC continued the work started during the BalticGrid project by ensuring adherence to the concluded SLAs and represented the project in negotiations between the project and the Internet Service Providers (the National Research and Education Networks of the partnering countries).
- The CNCC tested and approbated new solutions for full utilisation of the available network bandwidth.
- The CNCC was responsible for the development and implementation of the security policy and for the handling of security incidents. These policies were developed and discussed between all interested parties, e.g., NNCCs, SA1, NRENs and users.
- It was also necessary to transfer the knowledge accumulated during the BalticGrid project in respect of network resource provisioning to the new partners.
- The CNCC maintained and developed the BalticGrid monitoring portal.
The CNCC was established by IMCS UL, involving network experts of the Latvian NREN SigmaNet as well as IMCS UL Grid experts.
The National Network Coordination Centres, established in Lithuania, Latvia, Estonia, and Belarus, had the following responsibilities during the project and will continue their work after the project to ensure the sustainability of the Grid resources in the Baltics and Belarus:
- monitoring the network and storage elements 24x7 in their respective countries;
- study and identification of observed network problems, and notification of the other NNCC teams about the status of the network, the problems identified and the solutions tested and implemented, in order to ensure the distribution of the knowledge gathered during the project between all partners involved;
- notifying local Grid users about security incidents and the solutions implemented, as well as educating the users with respect to network security and the precautions each user has to take;
- communication with service providers about daily operations;
- testing and approbation of new solutions for non-standard HighSpeed or Scalable TCP data transfer and full utilisation of the available network bandwidth;

- development and implementation of the security policy and handling of security incidents according to CNCC directives.
The following partners are responsible for the daily operations of the NNCCs: Lithuania - VU; Latvia - IMCS UL; Estonia - EENet; Belarus - UIIP NASB.
Within each NNCC an incident response team (IRT) was established as a separate group of at least two people, with the following responsibilities:
- actively participate in the policy development process carried out by the CNCC, implement the policy in their institutions, periodically review the provisions of the policy and suggest changes in order to ensure a flexible and operational procedure that serves the needs of the user community;
- deal with security incidents, solve them and inform the local community and the other NNCCs about the incidents, using the AIRT software;
- analyse security incidents and suggest security improvements.
Each partner responsible for an NNCC has assigned the tasks of incident response to skilled and professional people able to perform them to a high standard. The following people have been assigned:
Lithuania:
NNCC: Eduardas Kutka, eduardas.kutka@mif.vu.lt; Algimantas Juozapavicius, algimantas.juozapavicius@mif.vu.lt
IRT: Eduardas Kutka, eduardas.kutka@mif.vu.lt; Algimantas Juozapavicius, algimantas.juozapavicius@mif.vu.lt
Latvia:
NNCC: Guntis Barzdins, guntis.barzdins@sigmanet.lv; Martins Libins, martins.libins@sigmanet.lv; Edgars Znots, edgars.znots@sigmanet.lv
IRT: Baiba Kaskina, baiba.kaskina@sigmanet.lv; Martins Libins, martins.libins@sigmanet.lv

Estonia:
NNCC: Hardi Teder; Tõnu Raitviir
IRT: Hardi Teder; Tõnu Raitviir
Belarus:
NNCC: Sergey Aneichik; Alexandr Lavrinenko; Andrey Volkov; Oleg Nosilovsky; Oleg Moiseichuk; Yury Zemtsov; Pavel Prokoshin
IRT: Andrey Volkov; Oleg Nosilovsky; Oleg Moiseichuk
The essential benefit of involving these experts was that they were familiar both with the Grid environment and its demands and with the networking infrastructure in their country and the usual practice of security teams. Only this combination of skills can ensure successful handling of Grid security incidents.

2.2.1. Operational principles of CNCC
The CNCC has been established at IMCS UL and started its operation in July. The main principles of CNCC operation were to:
1. Ensure effective collaboration between NNCCs by developing framework procedures and providing the necessary facilities;
2. Foster information exchange between NNCC teams;
3. Maintain the monitoring portal for global BalticGrid network resource availability;
4. Represent BalticGrid interests in collaboration with other projects, e.g., GN2 (since April 2009, GN3) [1] and EGEE;
5. Monitor SLA adherence and inform the NNCC teams about identified problems, as well as negotiate possible solutions with the respective NRENs.

2.2.2. Operational principles of NNCCs
NNCCs have been established in Lithuania, Latvia, Estonia, and Belarus, and the main principles of their operation were to:

1. Ensure network resource provisioning in their respective countries;
2. Monitor all parts of their networks via the BalticGrid network monitoring portal;
3. Inform the other team members and the CNCC team about observed network anomalies and the steps taken to solve identified problems;
4. Inform Grid users about the steps taken to solve identified network problems.
In order to perform these operations, all partners have received information about the BalticGrid monitoring portal and have been familiarised with the BalticGrid network structure.

2.3. SERVICE LEVEL AGREEMENTS
The Service Level Agreement (SLA) of the BalticGrid-II project consists of two parts:
1) General provisions: this part sets out the substantive clauses of the cooperation between the National Research and Education Network of Belarus, BASNET, and the BalticGrid-II project. These provisions also serve as guidelines for the interpretation of the specific provisions of the Agreement;
2) Specific provisions: the technical service parameters which can be offered and/or ordered.
The general provisions of the SLA contain Administrative Level Objects (ALO) that include general information related to the parties and the agreement itself:
- requisites of the Parties;
- purpose of the Agreement (includes the clause about the chosen SLA type);
- responsibilities of the parties: who is responsible for what, who the contact persons are, what the expected reaction times of helpdesks are, etc.;
- modification and termination of the Agreement.
This part serves as the legal basis of the cooperation. The specific provisions of the SLA contain Service Level Objects (SLO), i.e., depending on the SLA type chosen, the specific provisions list the actual technical service parameters which can be offered and/or ordered.
The Agreement was signed by the parties on 30 October 2008 during the first All-Hands Meeting of the BalticGrid-II project, held in Minsk.

3. NETWORK MONITORING

3.1. MONITORING PORTAL
Information on the network infrastructure and the available monitoring data is collected on-line at the monitoring portal gridimon.balticgrid.org. The first page shows the geographical position of the Baltic countries and Belarus. The portal provides a convenient centralised view of the essential historic and real-time BalticGrid network parameters, thus serving as an excellent troubleshooting and SLA adherence monitoring portal.

Fig. 1. Homepage of gridimon.balticgrid.org

By clicking on each country, more detailed information on the network topology is available.

3.2. EXPANSION OF THE BALTICGRID NETWORK
With the addition of Belarus to the BalticGrid infrastructure, a detailed map of the Belarusian Grid network was developed and kept updated during the project.

Fig. 2. Belarusian Grid network infrastructure view

By clicking on each link it is possible to view the available graphical information. The Grid network infrastructure of BASNET was successfully included in the infrastructure of BalticGrid-II, and the Service Level Agreement between BalticGrid-II and BASNET was signed. The Belarusian Grid network diagram was successfully added to the BalticGrid-II network monitoring portal at gridimon.balticgrid.org, which allows monitoring of SLA adherence.

Last year's upgrades of the Belarusian network connectivity to the European academic network GÉANT2 and the participation of BASNET in the GN3 project with the status of associate member ensured the sustainability of the network and sufficient resources for academic and, particularly, Grid users.

3.3. MONITORING DATA
The structure of the monitoring portal, among other features, allows observing the relation between link load and latency. The graphs below show monitoring data for Estonia, Latvia, Lithuania, and Belarus. The SA2 team monitored the most significant parameters, such as link load, latency, jitter and packet loss. The portal monitors each member of the BalticGrid-II project separately and presents the collected statistics graphically. The graphs below (see Fig. 3 to Fig. 10) show statistics for the primary GÉANT channel of EENet, LITNET, SigmaNet and BASNET. These graphs cover the last month (March 2010) and show no outages or congestion in the networking infrastructure of BalticGrid-II.

Fig. 3. EENet GÉANT link utilisation
Fig. 4. EENet GÉANT SLA (rping)
Fig. 5. LITNET GÉANT link utilisation

Fig. 6. LITNET GÉANT SLA (rping)
Fig. 7. SigmaNet GÉANT link utilisation
Fig. 8. SigmaNet GÉANT SLA (rping)
Fig. 9. BASNET GÉANT link utilisation

Fig. 10. BASNET GÉANT SLA (rping)

Statistics for the primary GÉANT channel of EENet, LITNET, SigmaNet, and BASNET over the last year are shown in Fig. 11 to Fig. 18.

Fig. 11. EENet GÉANT link utilisation
Fig. 12. EENet GÉANT SLA (rping)
Fig. 13. LITNET GÉANT link utilisation

Fig. 14. LITNET GÉANT SLA (rping)
Fig. 15. SigmaNet GÉANT link utilisation
Fig. 16. SigmaNet GÉANT SLA (rping)

Fig. 17. BASNET GÉANT link utilisation
Fig. 18. BASNET GÉANT SLA (rping)

3.4. CASE STUDY
The monitoring portal gridimon.balticgrid.org shows statistics on the utilisation of the network infrastructure of BalticGrid-II. These measurements are made in real time and shown in the graphs. A good example of how the SLA monitoring works can be seen in the relation between a networking problem and the SLA parameters. There was an outage on the link between Riga (LV) and Tallinn (EE) due to a fibre cut in the Televork network. As a result of this networking problem, all traffic between SigmaNet and EENet was rerouted over alternative paths; the traffic from SigmaNet to EENet (and vice versa) went through Lithuania. This increased the latency up to six-fold, from an average of 10 ms to an average of 60 ms (see Fig. 19 and Fig. 20). However, it did not affect the quality of the network significantly, and thanks to the fast re-routing the end users did not even notice it. This was a temporary problem that was solved when the Televork fibre was repaired, and all SLA parameters returned to their previous values (see the attached ticket below).

Fig. 19. SLA monitoring case study
Fig. 20. SLA monitoring case study II

DANTE TICKET: 4476
Type: Dashboard Alarm
Status: Closed
Description: [rig-tal] EE-LV link down
Location A: Riga, LV
Location B: Tallinn, EE
Incident Start: 16/11/ :03:10 (UTC)
Incident Resolved: 16/11/ :37:20 (UTC)
Ticket Open: 16/11/ :17:47 (UTC)
Ticket Close: 17/11/ :37:28 (UTC)
AFFECTED SERVICES
EE-LV IP link down but traffic will be re-routed over alternative paths
HISTORY
Latest update: RESOLVED. Outage due to fibre cut in Televork network. Link has been up and stable since 13:25:46 UTC. Link will be monitored for the next 24 hours.

HENRY.AGBENU 16/11/ :41:03 UTC
HISTORY UPDATE
Our logs show that the STM 64 link (id: DANTE/10GBWL/TALL-RIGA/001) between Tallinn and Riga is down since 08:10:58 UTC. Circuit provider has been contacted.
AKILESWARAN.RADHAKRISHNAN 16/11/ :27:20 UTC
HISTORY UPDATE
Link is up and stable since 13:25:46 UTC. We are waiting for an RFO from the provider, Televork.
HENRY.AGBENU 16/11/ :28:40 UTC

4. TCP PERFORMANCE TESTS
TCP is the primary transport protocol used by the vast majority of the services included in the EGEE Grid middleware deployed in the BalticGrid-II project. Although multi-session TCP file transfer protocols such as GridFTP have been devised, their actual use is rather limited due to the sophisticated setup and tuning required. BalticGrid-II took the initiative to investigate this problem in detail and to try to find a solution. A solution has indeed been found, deployed in BalticGrid, and positively confirmed by tests. The solution turned out to be different from what was envisioned: instead of switching to HighSpeed TCP or Scalable TCP in BalticGrid, the actual solution was found in TCP buffer optimisation, which considerably increases the achievable throughput of a single TCP connection while maintaining compatibility with existing TCP protocol implementations.
Already in the first phase of the BalticGrid project it was correctly identified that the true bottleneck for file transfers in the Grid infrastructure is not the GÉANT network itself (most BalticGrid resource centres have 1 Gbps connectivity to GÉANT), but rather the poor TCP protocol performance limiting individual single-session file transfers to merely Mbps. During the first year of the project there were three breakthroughs that allowed the cause of the low TCP performance to be identified and a solution to this problem to be proposed. During the second year of the project additional tests and TCP protocol fine-tuning were carried out in the test-lab environment and later implemented in pilot setups. After verification of the results in the pilot implementation, the devised TCP tuning was implemented in the BalticGrid-II production network and resource centres. The three breakthroughs are the following:
- Creation of a reliable gigabit test-lab for TCP performance measurement at variable RTT delays. This turned out to be a non-trivial task, because many off-the-shelf delay simulation techniques either produce results inconsistent with real networks or are not capable of delaying gigabit traffic flows without distortion. An example of a non-conforming approach is the use of the Linux traffic control ('tc') tools, which were attempted initially. A working gigabit traffic delaying solution (the breakthrough) was found in the FreeBSD firewall ('ipfw') configured as a Layer 2 bridge rather than a Layer 3 router (the Linux approach). The test results achieved with this solution have been compared and positively matched with real international gigabit network tests. This achievement provided the possibility to reliably test a large number of TCP implementations and configurations in the lab environment.
- Despite the initial theory that the better-than-average TCP performance of some BalticGrid-II clusters was due to Intel I/O Acceleration Technology (Intel IOAT), coincidentally present in one of the high-performing BalticGrid-II nodes, the thorough tests in the test-lab linked the TCP performance variations to the different Linux kernel versions installed on different BalticGrid-II clusters. By changing only the Linux kernel version it was possible to re-create in the lab both the low TCP performance characteristic of most of the BalticGrid-II clusters and the high TCP performance observed in some BalticGrid-II nodes.
- The last breakthrough came from a careful study of the Linux TCP stack and its tuning parameters, as well as a close inspection of the TCP setup in the BG-II clusters. It was discovered in the test lab that the default TCP stack of Scientific Linux has very small RX/TX buffer sizes (maximum TCP window), which play a crucial role in TCP performance. It was traced down that as few as 4 configuration lines for the TCP stack, inserted in a system configuration file, resolve the TCP performance bottleneck up to gigabit speeds. Later it was positively confirmed that the KTH clusters, which despite using a low-performing Linux kernel version achieved exceptionally good TCP performance, also have almost identical lines for improved TCP performance. The KTH clusters had these lines inserted a while back, largely forgotten by the general community. The following 4 lines must be put in the /etc/sysctl.conf file to improve TCP performance:
net.core.rmem_max =
net.core.wmem_max =
net.ipv4.tcp_rmem =
net.ipv4.tcp_wmem =
These lines relate to the minimal, default and maximum allocatable TCP window sizes, and the subsequent tests on other BG-II nodes with various Linux kernel versions confirmed that the same 4 TCP configuration lines indeed resolve the TCP performance bottleneck up to gigabit speeds.

4.1. MEASUREMENT LABORATORY
To investigate the possible influence of network latency or the server TCP/IP stack on data transfer speeds, SA2 created a test laboratory where network throughput measurements could be made. The measurement laboratory provided the possibility to simulate network latencies and, optionally, jitter and packet loss. It was decided to use a middle server between the two transferring hosts. FreeBSD was chosen as the next alternative OS for this middle server (after the Linux 'tc' approach), since FreeBSD has tools with functionality similar to Linux 'tc': the 'dummynet' network simulator module, managed through the 'ipfw' tools. FreeBSD 'dummynet' works on the Ethernet frame level, so it acts more like a passive switch. This implementation consumes considerably less CPU power on the host that acts as the latency creator (network simulator), and the results are therefore consistent.

4.2. COLLECTED DATA
After performing various tests using a FreeBSD server with the 'dummynet' tools between two Linux servers, it was found that Scientific Linux 4, if used with the default kernel and TCP/IP stack configuration, performs very poorly in network environments with latencies (round trip time, RTT) above 1 ms. The results were compared with other Linux kernels and distributions, as well as with other TCP/IP congestion control algorithms. Preliminary results are shown in Fig. 21. The graph shows TCP bandwidth for various Linux kernels and distributions depending on network latency (RTT) and TCP send/receive buffer sizes. As can be seen, a default Scientific Linux 4 installation underperforms as soon as the network latency becomes greater than 1-2 ms. Typical network latency between sites in BalticGrid is 8-35 ms. After tuning basic TCP/IP stack parameters, Scientific Linux 4 with the default Linux kernel and the traditional BIC or Reno congestion control algorithm performs up to times faster in the latency range of 6-40 ms, the important latency range for BalticGrid-II applications.
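For illustration, the following is a minimal sketch of the four-line /etc/sysctl.conf tuning described above. The exact values deployed on the BalticGrid-II clusters are not reproduced in this text, so the figures below are assumptions: the 8 MB maxima follow the 8 MB optimum reported in Section 6.1, and the minimum/default values are common Linux settings.

    # Illustrative /etc/sysctl.conf fragment (assumed values, not the exact BG-II settings)
    net.core.rmem_max = 8388608                # maximum socket receive buffer in bytes (8 MB assumed)
    net.core.wmem_max = 8388608                # maximum socket send buffer in bytes (8 MB assumed)
    net.ipv4.tcp_rmem = 4096 87380 8388608     # min / default / max TCP receive window
    net.ipv4.tcp_wmem = 4096 65536 8388608     # min / default / max TCP send window

The values are loaded with the command 'sysctl -p' (see Section 6.1.3) and apply to new TCP connections without a reboot.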

Fig. 21. TCP bandwidth for various Linux kernels

Fig. 22 illustrates the TCP performance in BalticGrid-II at the beginning and at the end of the project.

Fig. 22. Single-session TCP performance of some BalticGrid clusters before and after TCP tuning (as measured from a site in Latvia)

This figure clearly shows that the proposed methods have indeed improved single-session TCP performance in BalticGrid manifold. Effectively, the TCP performance bottleneck has been removed and transfer rates are primarily limited only by the actual network capacity, which for the majority of the BalticGrid resource centres is 1 Gbps. As seen in Fig. 22, not all BalticGrid resource centres have been able to achieve close to gigabit TCP performance. However, the cause of the limited performance at these sites is purely the available network capacity: all affected sites have only 100 Mbps GÉANT connectivity, with the Belarus resource centre having even less capacity towards GÉANT (as seen in Fig. 23).

Fig. 23. TCP performance (in Mbps) of clusters with limited GÉANT connectivity

5. HIGHSPEED TCP AND SCALABLE TCP PERFORMANCE TESTS IN LABORATORY ENVIRONMENT

5.1. Test laboratory: high-performance network simulator
The TCP performance improvement achieved throughout the BalticGrid network was made possible solely by the early decision of the SA2 team to employ a powerful network simulator for TCP performance testing rather than attempt to carry out the tests on a live pan-European network (the success of this strategy was later confirmed by the successful deployment in the live network). The lack of full control over test conditions in a live network and the inability to exactly duplicate earlier measurements would have made rigorous scientific argument impossible due to endless guesswork about side-effects during each measurement. Meanwhile, the creation of a reliable network simulator for gigabit speeds and variable pan-European RTT latencies was a non-trivial task due to two factors:
- Other teams around the world use bundles of expensive, multi-thousand-kilometre optical fibre to reliably simulate the performance of a real network (e.g., the WAN in Lab (WiL) project at Caltech has a 2400+ km long-haul fibre optic test bed, designed specifically to aid FAST TCP research and provide a TCP benchmarking facility).
- The standard network latency simulation software 'tc' (traffic control) included with the Linux operating system produces highly inaccurate simulation results, inconsistent with real networks, and is thus useless for TCP performance investigation. These poor results cast doubt on whether a PC-based simulator can reliably simulate a gigabit network at significant latencies at all.
Despite this discouraging landscape of obvious choices, the BalticGrid team was able to come up with a working gigabit traffic delaying solution. The breakthrough solution was found in the FreeBSD firewall ('ipfw') configured as a Layer 2 bridge rather than a Layer 3 router (the Linux approach). The test results achieved with this solution have been compared and positively matched with real international gigabit network tests. Moreover, the Layer 2 implementation of the network simulator made it fully agnostic towards the kind of traffic to be delayed and allowed reliable simulation also of the effects of multiple concurrent TCP sessions, which turned out to be a crucial aspect for selecting a TCP implementation appropriate for the BalticGrid network and GÉANT in general (because of the discontinuation of the LBE, Less than Best Effort, service in the GÉANT network, which had been crucial for non-disruptive use of aggressive TCP variants such as HighSpeed TCP and Scalable TCP). This achievement finally provided the possibility to reliably test a large number of TCP implementations and configurations in the lab environment and to proceed with deployment only after thorough studies. The test laboratory setup and the TCP performance test results have been reported at the TERENA Networking Conference 2009.

5.2. HighSpeed TCP
Although HighSpeed TCP was measured to be capable of providing consistently high throughput at various network latencies, it also showed an unacceptable tendency to degrade the performance of regular TCP sessions of other network users, as illustrated in Fig. 24 [2].

Fig. 24. HighSpeed TCP degrades performance of regular TCP in the same network

5.3. Scalable TCP
Although Scalable TCP was measured to be capable of providing consistently high throughput at various network latencies, it also showed an unacceptable tendency to degrade the performance of regular TCP sessions of other network users, as illustrated in Fig. 25 [2].

Fig. 25. Scalable TCP degrades performance of regular TCP in the same network
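For reference, the following is a hedged sketch of how these alternative congestion control algorithms are typically enabled on a Linux sender for testing; the module and sysctl names are the standard upstream Linux ones (kernels 2.6.13 and later) and are not taken from the BalticGrid-II test configuration.

    # Load the pluggable congestion control modules
    modprobe tcp_highspeed                                     # HighSpeed TCP
    modprobe tcp_scalable                                      # Scalable TCP
    # List the algorithms the kernel can currently use
    cat /proc/sys/net/ipv4/tcp_available_congestion_control
    # Select the algorithm used for new TCP connections (revert with "reno" or "bic")
    sysctl -w net.ipv4.tcp_congestion_control=highspeed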

5.4. Test data comparison and analysis
The above results are consistent with conclusions from other teams and confirm that both HighSpeed TCP and Scalable TCP may be used non-disruptively only on private networks, where they cause no packet loss for competing TCP versions, or in conjunction with some sort of traffic-management solution, such as the LBE (Less than Best Effort) traffic class previously available in the GÉANT network but since discontinued. The deployment of HighSpeed TCP or Scalable TCP in the BalticGrid network as a cure for the low TCP protocol performance (which had already then been correctly identified as the actual network performance bottleneck) was envisioned in 2007, during the planning stages of the BalticGrid-II project, when the LBE service was still available in the GÉANT network. LBE service discontinuation by GÉANT had been correctly identified as one of the risks for the SA2 activity. Fortunately, the powerful network testing laboratory created by the SA2 team made it possible to devise a different, less obvious, but much more elegant solution to the TCP performance bottleneck problem.
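To make the Section 5.1 description more concrete, the following is a hedged sketch of a dummynet delay bridge of the kind used in the test laboratory; the interface names (em0, em1), the 20 ms delay and the rate are placeholders, not the exact laboratory values.

    # FreeBSD network simulator sketch: ipfw + dummynet on a Layer 2 bridge
    kldload ipfw dummynet                       # load the firewall and the dummynet traffic shaper
    ifconfig bridge0 create                     # bridge the interfaces facing the two test hosts
    ifconfig bridge0 addm em0 addm em1 up
    sysctl net.link.bridge.ipfw=1               # pass bridged Ethernet frames to ipfw
    ipfw pipe 1 config bw 1000Mbit/s delay 20   # gigabit pipe adding 20 ms in each direction
    ipfw add 100 pipe 1 ip from any to any      # push all bridged IP traffic through the pipe
    ipfw add 65000 allow ip from any to any     # let remaining traffic pass the default ruleset
    # kern.hz=1000 in /boot/loader.conf gives dummynet 1 ms scheduling granularity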

6. TCP PERFORMANCE TUNING IN THE BALTICGRID-II NETWORK

6.1. TCP TUNING IN LABORATORY ENVIRONMENT

6.1.1. Baseline measurements
To begin the mitigation of TCP performance bottlenecks between BalticGrid sites, a set of baseline TCP performance measurements first had to be taken. This would later allow a comparison of TCP performance between the different Linux distributions present at BalticGrid sites, as well as between different parameter setups and network latencies. Ubuntu 8.04 LTS was chosen for the baseline measurements because, at the time, this distribution had the latest stable kernel and thus represented the state of the art for Linux TCP stack performance. Fig. 26 shows the baseline TCP performance measurements for Ubuntu 8.04 at different TCP setups: the default setup, a setup with TCP buffers increased to 8 MB, and a setup with TCP buffers increased to 16 MB.

Fig. 26. Single-session TCP throughput for Ubuntu 8.04 LTS at various TCP buffer sizes

As can be seen from the graph above, the default TCP parameters of Ubuntu 8.04, especially the TCP buffer size, are not optimal for high-latency networks. Increasing the TCP buffers to at least 8 MB significantly improves single-session TCP throughput in the network latency range of 10-50 ms. This range represents typical latencies in WANs; most latencies measured between BalticGrid sites are in the range of 8-35 ms. It seems, though, that doubling the TCP buffer size again for Ubuntu 8.04 LTS does not necessarily improve TCP performance further; on the contrary, throughput seems to degrade. Performing these baseline measurements gave several important results:
1. The measurements showed that our network delay simulator works and has an effect on TCP throughput.
2. Even a state-of-the-art Linux often has less than optimal TCP stack parameters for achieving high throughput.
3. Even as simple a procedure as increasing the TCP buffers by changing a few system configuration lines can have a considerable effect on overall throughput.
4. More is not always better: although in theory a linear increase of the TCP buffers should be reflected equally in higher TCP throughput on large-latency networks, the Linux TCP stack itself may be unable to scale with the increased buffers.
Due to all of the results and conclusions mentioned above, it was decided that the existing network simulator (FreeBSD 7.1, ipfw with dummynet, 1000 Hz kernel) is suitable for further measurements of the Linux distributions and configurations used in BalticGrid. A good baseline for comparing further TCP tuning results was also set, using the Ubuntu 8.04 LTS distribution with TCP buffers set to 8 MB.

6.1.2. Performance of ScientificLinux in comparison with baseline measurements
After the correct operation of the FreeBSD-based network simulator was verified and the baseline performance measured for Ubuntu 8.04 LTS, it was possible to measure the performance of ScientificLinux and compare it with the baseline TCP throughput. Fig. 27 shows the TCP throughput for ScientificLinux in comparison with the baseline measurements. ScientificLinux is the primary operating system for the gLite middleware and as such is used by virtually all sites. As the TCP throughput trend of the default ScientificLinux configuration shows, the TCP stack is configured in such a manner that single-session TCP throughput drops from near gigabit speed to less than 500 Mbit/s as soon as the network latency becomes greater than 1 ms, and to less than 100 Mbit/s as soon as the latency increases above 5 ms. These results were at first thought to be incorrect and caused by a malfunctioning network latency simulator or by other equipment or setup details, but repeated measurements with the same hardware and different Linux kernel versions and TCP stack parameters verified that the data acquired by these measurements is indeed correct and that ScientificLinux significantly underperforms with regard to TCP throughput. More specifically, it appears that ScientificLinux has fixed and very small TCP buffers, whereas all major distributions usually use TCP buffer autotuning, increasing and decreasing the buffer size for optimal performance and resource usage. The effect, thus, is that for all network connections with latencies above 1 ms (essentially all non-LAN connections), ScientificLinux is unusable for high-speed single-session TCP data transfer. After identifying the poor TCP throughput of ScientificLinux, the system parameters were adjusted to enable larger TCP buffers.

After tests with several different buffer sizes it was concluded that, just as in the case of Ubuntu 8.04 LTS, the optimum upper value for the TCP buffers that still provides a TCP throughput increase in ScientificLinux 4.7 is 8 MB. As can be seen in Fig. 27, both setups, Ubuntu 8.04 LTS and ScientificLinux 4.7, achieve almost identical throughput results across the latency range, although ScientificLinux uses a much older kernel (2.6.9) with the BIC congestion control algorithm, whereas Ubuntu has the latest stable kernel and uses the Reno congestion control algorithm. These measurements provide some interesting results and conclusions:
1) The default ScientificLinux 4.7 TCP configuration underperforms in non-LAN scenarios.
2) The bottleneck for TCP throughput in ScientificLinux is caused by small, fixed TCP buffers.
3) This bottleneck can be removed by simply increasing the TCP buffers.
4) It is possible to achieve the baseline TCP throughput performance with ScientificLinux.
5) Newer Linux kernels do not necessarily have a better TCP stack; they are just (usually) configured with better, though still not optimal, parameters in comparison to ScientificLinux.
6) The choice of congestion control algorithm (Reno vs. BIC) does not play any significant role in single-session TCP throughput on packet-loss-free networks.

Fig. 27. TCP throughput for ScientificLinux 4.7
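The throughput figures above are single-session, memory-to-memory measurements of the kind produced with Iperf, the tool named in Sections 6.2 and 6.3; in the following sketch the host name, port and durations are placeholders rather than the actual test parameters.

    # On the receiving host (e.g. the central throughput test server):
    iperf -s -p 5001
    # On the sending host: one TCP session for 60 seconds, reporting every 10 seconds
    iperf -c testserver.example.org -p 5001 -t 60 -i 10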

6.1.3. TCP tuning results on production sites
After all the simulations and tuning of TCP performance carried out in the test laboratory, it was concluded that most BalticGrid sites are using the default TCP stack configuration provided by the corresponding distribution maintainers (Scientific Linux, RedHat, SUSE, etc.) or by the YAIM configuration tool used to install and configure specific gLite middleware components on the hosts. These defaults are not appropriate for networks with a high BDP (Bandwidth Delay Product), and it is necessary to increase the available buffers to compensate for the larger BDP. As was later discovered, the YAIM configuration tool (a tool to configure the middleware, developed by the EGEE project) does, however, increase the TCP buffer size up to 2 MB for Storage Element type hosts, but not for Worker Node hosts. This decision to tune the TCP buffers for SE hosts and to leave the default, underperforming configuration for WN hosts seems to be a deliberate design choice and does not cause (in fact it prevents) network performance bottlenecks in the case of CMS/ATLAS jobs. The CMS/ATLAS research workflow implies that before the calculations the data is first transferred from Tier1 to Tier2 sites. These transfers are performed between the Storage Elements at the Tier1 and Tier2 sites, so only the TCP buffers of these nodes have to be tuned. The Worker Nodes at the Tier2 centres that perform the calculations then fetch the data from their respective local Storage Element, and since this transfer takes place in a LAN with an RTT of less than 1 ms, the default TCP buffer configuration works as required. Moreover, the decision deliberately not to tune TCP parameters on Worker Nodes ensures that no other job at the site can cause network throughput problems for CMS/ATLAS jobs in case an LHC-unrelated job is also running at the site and trying to download large amounts of data to Worker Nodes from distant Storage Elements.
# TCP buffer sizes configured by YAIM on Storage Elements
net.ipv4.tcp_rmem =
net.ipv4.tcp_wmem =
net.core.rmem_max =
net.core.wmem_max =
However, although this design decision ensures that SE-to-SE data transfers for the LHC experiments at Tier2 sites would not be limited by running jobs that transfer data from non-local SEs, this model is not suited to the workflows of most other applications and users in BalticGrid-II. The majority of sites in BalticGrid-II are not performing heavy calculations within the LHC experiments, and the majority of CPU hours and submitted jobs come from LHC-unrelated research. These users rely on the LFC and SRM services to choose the closest Storage Element for initial data storage (upload) before the execution of computing jobs. But since the jobs are then distributed across the whole Grid infrastructure, the resulting computing jobs again use the LFC and SRM services to download the input data to the Worker Node from a Storage Element that is no longer the local site SE. The exact same process repeats upon job exit, when the job output (results) is stored on the job-local SE and then has to be downloaded over a great distance to the user's User Interface. Theoretically, all Grid applications, if following the CMS/ATLAS job design, should always perform an additional data transfer step by transferring data from the user-local Storage Element to the job-local Storage Element, and only then download the data over the LAN from the job-local SE to the WN. Unfortunately, since most Grid users are unaware of the intricate inner design and limitations of the Grid infrastructure, this two-stage data transfer process is never practiced.
Also, in several workflows a direct single-stage transfer is necessary because data is transferred between services other than SRM or GridFTP. Thus, since the workflow of most BalticGrid-II users requires or prefers single-stage data transfer, but at the same time there are sites, namely T2_Estonia at KBFI, that do require network throughput privilege for the LHC experiments, a decision was made that each site can choose either to leave the site

tuned for the LHC experiment workflows, or to apply the TCP parameter tuning created by IMCS UL to enable more efficient single-stage data transfer directly between Worker Nodes and non-local Storage Elements. Below is the tuned Linux TCP parameter set for those sites that choose to optimise data transfer for single-stage workflows. Just by setting these parameters on Worker Nodes (optionally also on other host types for improved input/output sandbox transfer, etc.) it is possible to significantly increase single-session TCP performance. The best performance results are obtained when the hosts at both data transfer endpoints use these settings. Also, this configuration does not change the CCA (Congestion Control Algorithm) or other parameters influencing the operation of the TCP stack, so these improvements are equally effective on the various Linux distributions and CCAs (Reno/BIC/CUBIC/Vegas) used in BalticGrid and provide maximum compatibility with already deployed applications and systems.
# TCP buffer sizes configured according to IMCS UL recommendations
net.core.rmem_max =
net.core.wmem_max =
net.ipv4.tcp_rmem =
net.ipv4.tcp_wmem =
All that was necessary for the BalticGrid site administrators, after the TCP stack improvement instructions were posted, was to add these four lines to the /etc/sysctl.conf configuration file on the Linux machines. Additionally, the command sysctl -p allowed this updated configuration to be applied without restarting the hosts. Most site administrators were able to apply these improvements within one day, and so far there have been no negative side-effects on network operation after the tuning of the TCP stack on these hosts. As can be seen, such an elegant and easily deployable solution for site administrators was possible thanks to the proper evaluation, testing and configuration of the TCP stack in the test laboratory to find the most effective, yet compatible and problem-free, TCP configuration suitable for all sites. Fig. 28 illustrates the TCP tuning results on actual BalticGrid production sites.

Fig. 28. TCP throughput improvement for BalticGrid production sites

As can be seen from Fig. 28, most sites experience a major improvement in TCP performance once the optimal TCP stack configuration is applied. During the implementation of the TCP stack improvements at BalticGrid sites it was noted that there are cases where parameter values specified in the system default configuration file /etc/sysctl.conf are overridden by a distribution-specific configuration tool that stores system configuration values in some additional database. Site administrators must therefore be aware of the procedures run during system start-up, and their order, for these TCP parameters to take effect and not be overridden.

6.2. TEST DATA COMPARISON AND ANALYSIS
After TCP throughput tests were carried out both in the test laboratory and on the production sites that had implemented the recommended increase of TCP buffers, it was possible to compare the simulated tests with real-life measurements. Fig. 29 shows a comparison of the laboratory tests and the tests on production sites. The continuous lines show the laboratory measurements for the default ScientificLinux setup as well as for the setup with increased TCP buffers under simulated network latency. As already described, the default configuration underperforms significantly, while the configuration with 8 MB buffers reaches the baseline measurements achieved with Ubuntu 8.04 LTS. The graph shows that the results of the un-tuned production sites correlate very well with the test laboratory measurements of un-tuned ScientificLinux. The results after the TCP buffer increase also show that the majority of sites correlate closely with the predicted results, although the achieved TCP throughput for production sites is lower due to other network traffic on the measured network path. This shows that the test laboratory is capable of simulating and replicating real-world conditions and that the results obtained in the test laboratory are reproducible and usable in real-world applications on production sites. Thus, these tests have not only brought practical benefit by increasing single-session TCP throughput in BalticGrid, but have also validated the approach of using FreeBSD and its network utilities to simulate real-world network conditions in a laboratory environment, a capability that classically required expensive and specialised equipment.
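As noted above, some distribution-specific configuration tools can override /etc/sysctl.conf at boot time; a quick, hedged way to check the values actually in force on a host (standard sysctl usage, not a BalticGrid-specific procedure) is:

    # Print the TCP buffer values currently in effect
    sysctl net.core.rmem_max net.core.wmem_max
    sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem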

Fig. 29. Comparison of test laboratory and production site measurements

Fig. 29 also shows how well the performance data obtained from real Iperf measurements correlate with the results obtained in the dedicated test laboratory using the network simulator. Furthermore, the comparison and analysis of the measurements, for both the laboratory setup and the production sites, has not revealed any negative side-effects from using the techniques described and recommended in this document. Whereas HighSpeed TCP and Scalable TCP are only feasible on networks where all participants use these exotic TCP implementations, or where these more aggressive implementations are used on Less-than-Best-Effort network services to protect other classic TCP Reno/CUBIC connections from congestion collapse, the technique proposed here keeps all TCP implementations already in use and simply improves their performance, maintaining interoperability and compatibility without restricting the diversity of TCP implementations available for use. This approach is thus recommended for other organisations and cases where an improvement of TCP performance is desirable without great effort or a high risk of negative side-effects.

6.3. TCP THROUGHPUT TESTS BETWEEN SITES
After the test data comparison and analysis described in Section 6.2, several sites were still identified as not reaching the baseline TCP performance expected after the TCP parameter tuning. At the initial stage, all throughput tests were carried out only in the direction from all sites to the main Iperf throughput testing server at the IMCS UL site. To investigate further the causes and extent of the identified bottlenecks, full or semi-full cross-site tests involving all paths between sites (full mesh tests) were necessary. Results from these tests would indicate the locality (city-wide, country-wide, region-wide) of the bottleneck for


More information

NOS for Network Support (903)

NOS for Network Support (903) NOS for Network Support (903) November 2014 V1.1 NOS Reference ESKITP903301 ESKITP903401 ESKITP903501 ESKITP903601 NOS Title Assist with Installation, Implementation and Handover of Network Infrastructure

More information

Measuring Wireless Network Performance: Data Rates vs. Signal Strength

Measuring Wireless Network Performance: Data Rates vs. Signal Strength EDUCATIONAL BRIEF Measuring Wireless Network Performance: Data Rates vs. Signal Strength In January we discussed the use of Wi-Fi Signal Mapping technology as a sales tool to demonstrate signal strength

More information

VMWARE WHITE PAPER 1

VMWARE WHITE PAPER 1 1 VMWARE WHITE PAPER Introduction This paper outlines the considerations that affect network throughput. The paper examines the applications deployed on top of a virtual infrastructure and discusses the

More information

Accurate End-to-End Performance Management Using CA Application Delivery Analysis and Cisco Wide Area Application Services

Accurate End-to-End Performance Management Using CA Application Delivery Analysis and Cisco Wide Area Application Services White Paper Accurate End-to-End Performance Management Using CA Application Delivery Analysis and Cisco Wide Area Application Services What You Will Learn IT departments are increasingly relying on best-in-class

More information

MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM?

MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM? MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM? Ashutosh Shinde Performance Architect ashutosh_shinde@hotmail.com Validating if the workload generated by the load generating tools is applied

More information

Measure wireless network performance using testing tool iperf

Measure wireless network performance using testing tool iperf Measure wireless network performance using testing tool iperf By Lisa Phifer, SearchNetworking.com Many companies are upgrading their wireless networks to 802.11n for better throughput, reach, and reliability,

More information

Performance Evaluation of Linux Bridge

Performance Evaluation of Linux Bridge Performance Evaluation of Linux Bridge James T. Yu School of Computer Science, Telecommunications, and Information System (CTI) DePaul University ABSTRACT This paper studies a unique network feature, Ethernet

More information

How To Monitor And Test An Ethernet Network On A Computer Or Network Card

How To Monitor And Test An Ethernet Network On A Computer Or Network Card 3. MONITORING AND TESTING THE ETHERNET NETWORK 3.1 Introduction The following parameters are covered by the Ethernet performance metrics: Latency (delay) the amount of time required for a frame to travel

More information

Applications. Network Application Performance Analysis. Laboratory. Objective. Overview

Applications. Network Application Performance Analysis. Laboratory. Objective. Overview Laboratory 12 Applications Network Application Performance Analysis Objective The objective of this lab is to analyze the performance of an Internet application protocol and its relation to the underlying

More information

POWER ALL GLOBAL FILE SYSTEM (PGFS)

POWER ALL GLOBAL FILE SYSTEM (PGFS) POWER ALL GLOBAL FILE SYSTEM (PGFS) Defining next generation of global storage grid Power All Networks Ltd. Technical Whitepaper April 2008, version 1.01 Table of Content 1. Introduction.. 3 2. Paradigm

More information

Integration of Network Performance Monitoring Data at FTS3

Integration of Network Performance Monitoring Data at FTS3 Integration of Network Performance Monitoring Data at FTS3 July-August 2013 Author: Rocío Rama Ballesteros Supervisor(s): Michail Salichos Alejandro Álvarez CERN openlab Summer Student Report 2013 Project

More information

Deploying Silver Peak VXOA with EMC Isilon SyncIQ. February 2012. www.silver-peak.com

Deploying Silver Peak VXOA with EMC Isilon SyncIQ. February 2012. www.silver-peak.com Deploying Silver Peak VXOA with EMC Isilon SyncIQ February 2012 www.silver-peak.com Table of Contents Table of Contents Overview... 3 Solution Components... 3 EMC Isilon...3 Isilon SyncIQ... 3 Silver Peak

More information

Test Methodology White Paper. Author: SamKnows Limited

Test Methodology White Paper. Author: SamKnows Limited Test Methodology White Paper Author: SamKnows Limited Contents 1 INTRODUCTION 3 2 THE ARCHITECTURE 4 2.1 Whiteboxes 4 2.2 Firmware Integration 4 2.3 Deployment 4 2.4 Operation 5 2.5 Communications 5 2.6

More information

Using High Availability Technologies Lesson 12

Using High Availability Technologies Lesson 12 Using High Availability Technologies Lesson 12 Skills Matrix Technology Skill Objective Domain Objective # Using Virtualization Configure Windows Server Hyper-V and virtual machines 1.3 What Is High Availability?

More information

4 High-speed Transmission and Interoperability

4 High-speed Transmission and Interoperability 4 High-speed Transmission and Interoperability Technology 4-1 Transport Protocols for Fast Long-Distance Networks: Comparison of Their Performances in JGN KUMAZOE Kazumi, KOUYAMA Katsushi, HORI Yoshiaki,

More information

SIDN Server Measurements

SIDN Server Measurements SIDN Server Measurements Yuri Schaeffer 1, NLnet Labs NLnet Labs document 2010-003 July 19, 2010 1 Introduction For future capacity planning SIDN would like to have an insight on the required resources

More information

OpenFlow: Load Balancing in enterprise networks using Floodlight Controller

OpenFlow: Load Balancing in enterprise networks using Floodlight Controller OpenFlow: Load Balancing in enterprise networks using Floodlight Controller Srinivas Govindraj, Arunkumar Jayaraman, Nitin Khanna, Kaushik Ravi Prakash srinivas.govindraj@colorado.edu, arunkumar.jayaraman@colorado.edu,

More information

Distributed applications monitoring at system and network level

Distributed applications monitoring at system and network level Distributed applications monitoring at system and network level Monarc Collaboration 1 Abstract Most of the distributed applications are presently based on architectural models that don t involve real-time

More information

perfsonar MDM release 3.0 - Product Brief

perfsonar MDM release 3.0 - Product Brief perfsonar MDM release 3.0 - Product Brief In order to provide the fast, reliable and uninterrupted network communication that users of the GÉANT 2 research networks rely on, network administrators must

More information

TamoSoft Throughput Test

TamoSoft Throughput Test TAKE CONTROL IT'S YOUR SECURITY TAMOSOFT df TamoSoft Throughput Test Help Documentation Version 1.0 Copyright 2011-2014 TamoSoft Contents Contents... 2 Introduction... 3 Overview... 3 System Requirements...

More information

Elevating Data Center Performance Management

Elevating Data Center Performance Management Elevating Data Center Performance Management Data Center innovation reduces operating expense, maximizes employee productivity, and generates new sources of revenue. However, many I&O teams lack proper

More information

perfsonar Multi-Domain Monitoring Service Deployment and Support: The LHC-OPN Use Case

perfsonar Multi-Domain Monitoring Service Deployment and Support: The LHC-OPN Use Case perfsonar Multi-Domain Monitoring Service Deployment and Support: The LHC-OPN Use Case Fausto Vetter, Domenico Vicinanza DANTE TNC 2010, Vilnius, 2 June 2010 Agenda Large Hadron Collider Optical Private

More information

VoIP Conformance Labs

VoIP Conformance Labs VoIP acceptance, VoIP connectivity, VoIP conformance, VoIP Approval, SIP acceptance, SIP connectivity, SIP conformance, SIP Approval, IMS acceptance, IMS connectivity, IMS conformance, IMS Approval, VoIP

More information

Stratusphere Solutions

Stratusphere Solutions Stratusphere Solutions Deployment Best Practices Guide Introduction This guide has been authored by experts at Liquidware Labs in order to provide a baseline as well as recommendations for a best practices

More information

Customer White paper. SmartTester. Delivering SLA Activation and Performance Testing. November 2012 Author Luc-Yves Pagal-Vinette

Customer White paper. SmartTester. Delivering SLA Activation and Performance Testing. November 2012 Author Luc-Yves Pagal-Vinette SmartTester Delivering SLA Activation and Performance Testing November 2012 Author Luc-Yves Pagal-Vinette Customer White paper Table of Contents Executive Summary I- RFC-2544 is applicable for WAN and

More information

SILVER PEAK ACCELERATION WITH EMC VSPEX PRIVATE CLOUD WITH RECOVERPOINT FOR VMWARE VSPHERE

SILVER PEAK ACCELERATION WITH EMC VSPEX PRIVATE CLOUD WITH RECOVERPOINT FOR VMWARE VSPHERE VSPEX IMPLEMENTATION GUIDE SILVER PEAK ACCELERATION WITH EMC VSPEX PRIVATE CLOUD WITH RECOVERPOINT FOR VMWARE VSPHERE Silver Peak Abstract This Implementation Guide describes the deployment of Silver Peak

More information

APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM

APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM 152 APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM A1.1 INTRODUCTION PPATPAN is implemented in a test bed with five Linux system arranged in a multihop topology. The system is implemented

More information

Improving Quality of Service

Improving Quality of Service Improving Quality of Service Using Dell PowerConnect 6024/6024F Switches Quality of service (QoS) mechanisms classify and prioritize network traffic to improve throughput. This article explains the basic

More information

Network Middleware Solutions

Network Middleware Solutions Network Middleware: Lambda Station, TeraPaths, Phoebus Matt Crawford GLIF Meeting; Seattle, Washington October 1-2, 2008 Lambda Station (I) Target: last-mile problem between local computing resources and

More information

Scala Storage Scale-Out Clustered Storage White Paper

Scala Storage Scale-Out Clustered Storage White Paper White Paper Scala Storage Scale-Out Clustered Storage White Paper Chapter 1 Introduction... 3 Capacity - Explosive Growth of Unstructured Data... 3 Performance - Cluster Computing... 3 Chapter 2 Current

More information

Cisco WAAS Express. Product Overview. Cisco WAAS Express Benefits. The Cisco WAAS Express Advantage

Cisco WAAS Express. Product Overview. Cisco WAAS Express Benefits. The Cisco WAAS Express Advantage Data Sheet Cisco WAAS Express Product Overview Organizations today face several unique WAN challenges: the need to provide employees with constant access to centrally located information at the corporate

More information

Cisco Application Networking for Citrix Presentation Server

Cisco Application Networking for Citrix Presentation Server Cisco Application Networking for Citrix Presentation Server Faster Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address

More information

Cisco Integrated Services Routers Performance Overview

Cisco Integrated Services Routers Performance Overview Integrated Services Routers Performance Overview What You Will Learn The Integrated Services Routers Generation 2 (ISR G2) provide a robust platform for delivering WAN services, unified communications,

More information

THE IMPORTANCE OF TESTING TCP PERFORMANCE IN CARRIER ETHERNET NETWORKS

THE IMPORTANCE OF TESTING TCP PERFORMANCE IN CARRIER ETHERNET NETWORKS THE IMPORTANCE OF TESTING TCP PERFORMANCE IN CARRIER ETHERNET NETWORKS 159 APPLICATION NOTE Bruno Giguère, Member of Technical Staff, Transport & Datacom Business Unit, EXFO As end-users are migrating

More information

Data Sheet. V-Net Link 700 C Series Link Load Balancer. V-NetLink:Link Load Balancing Solution from VIAEDGE

Data Sheet. V-Net Link 700 C Series Link Load Balancer. V-NetLink:Link Load Balancing Solution from VIAEDGE Data Sheet V-Net Link 700 C Series Link Load Balancer V-NetLink:Link Load Balancing Solution from VIAEDGE V-NetLink : Link Load Balancer As the use of the Internet to deliver organizations applications

More information

LotServer Deployment Manual

LotServer Deployment Manual LotServer Deployment Manual Maximizing Network Performance, Responsiveness and Availability DEPLOYMENT 1. Introduction LotServer is a ZetaTCP powered software product that can be installed on origin web/application

More information

IP SAN Best Practices

IP SAN Best Practices IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.

More information

Truffle Broadband Bonding Network Appliance

Truffle Broadband Bonding Network Appliance Truffle Broadband Bonding Network Appliance Reliable high throughput data connections with low-cost & diverse transport technologies PART I Truffle in standalone installation for a single office. Executive

More information

Cisco Bandwidth Quality Manager 3.1

Cisco Bandwidth Quality Manager 3.1 Cisco Bandwidth Quality Manager 3.1 Product Overview Providing the required quality of service (QoS) to applications on a wide-area access network consistently and reliably is increasingly becoming a challenge.

More information

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide An Oracle White Paper July 2011 1 Disclaimer The following is intended to outline our general product direction.

More information

RFC 6349 Testing with TrueSpeed from JDSU Experience Your Network as Your Customers Do

RFC 6349 Testing with TrueSpeed from JDSU Experience Your Network as Your Customers Do RFC 6349 Testing with TrueSpeed from JDSU Experience Your Network as Your Customers Do RFC 6349 is the new transmission control protocol (TCP) throughput test methodology that JDSU co-authored along with

More information

Application Note. Windows 2000/XP TCP Tuning for High Bandwidth Networks. mguard smart mguard PCI mguard blade

Application Note. Windows 2000/XP TCP Tuning for High Bandwidth Networks. mguard smart mguard PCI mguard blade Application Note Windows 2000/XP TCP Tuning for High Bandwidth Networks mguard smart mguard PCI mguard blade mguard industrial mguard delta Innominate Security Technologies AG Albert-Einstein-Str. 14 12489

More information

Deploying in a Distributed Environment

Deploying in a Distributed Environment Deploying in a Distributed Environment Distributed enterprise networks have many remote locations, ranging from dozens to thousands of small offices. Typically, between 5 and 50 employees work at each

More information

The NREN s core activities are in providing network and associated services to its user community that usually comprises:

The NREN s core activities are in providing network and associated services to its user community that usually comprises: 3 NREN and its Users The NREN s core activities are in providing network and associated services to its user community that usually comprises: Higher education institutions and possibly other levels of

More information

DB2 Connect for NT and the Microsoft Windows NT Load Balancing Service

DB2 Connect for NT and the Microsoft Windows NT Load Balancing Service DB2 Connect for NT and the Microsoft Windows NT Load Balancing Service Achieving Scalability and High Availability Abstract DB2 Connect Enterprise Edition for Windows NT provides fast and robust connectivity

More information

Saving Time & Money Across The Organization With Network Management Simulation

Saving Time & Money Across The Organization With Network Management Simulation Saving Time & Money Across The Organization With Network Management Simulation May, 2011 Copyright 2011 SimpleSoft, Inc. All Rights Reserved Introduction Network management vendors are in business to help

More information

Local-Area Network -LAN

Local-Area Network -LAN Computer Networks A group of two or more computer systems linked together. There are many [types] of computer networks: Peer To Peer (workgroups) The computers are connected by a network, however, there

More information

A High-Performance Storage and Ultra-High-Speed File Transfer Solution

A High-Performance Storage and Ultra-High-Speed File Transfer Solution A High-Performance Storage and Ultra-High-Speed File Transfer Solution Storage Platforms with Aspera Abstract A growing number of organizations in media and entertainment, life sciences, high-performance

More information

Recommendations for Performance Benchmarking

Recommendations for Performance Benchmarking Recommendations for Performance Benchmarking Shikhar Puri Abstract Performance benchmarking of applications is increasingly becoming essential before deployment. This paper covers recommendations and best

More information

Voice Over IP Performance Assurance

Voice Over IP Performance Assurance Voice Over IP Performance Assurance Transforming the WAN into a voice-friendly using Exinda WAN OP 2.0 Integrated Performance Assurance Platform Document version 2.0 Voice over IP Performance Assurance

More information

Enterprise Application Performance Management: An End-to-End Perspective

Enterprise Application Performance Management: An End-to-End Perspective SETLabs Briefings VOL 4 NO 2 Oct - Dec 2006 Enterprise Application Performance Management: An End-to-End Perspective By Vishy Narayan With rapidly evolving technology, continued improvements in performance

More information

Network Management and Monitoring Software

Network Management and Monitoring Software Page 1 of 7 Network Management and Monitoring Software Many products on the market today provide analytical information to those who are responsible for the management of networked systems or what the

More information

Computer Networking Networks

Computer Networking Networks Page 1 of 8 Computer Networking Networks 9.1 Local area network A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office

More information

Fundamentals of a Windows Server Infrastructure MOC 10967

Fundamentals of a Windows Server Infrastructure MOC 10967 Fundamentals of a Windows Server Infrastructure MOC 10967 Course Outline Module 1: Installing and Configuring Windows Server 2012 This module explains how the Windows Server 2012 editions, installation

More information

Network Simulation Traffic, Paths and Impairment

Network Simulation Traffic, Paths and Impairment Network Simulation Traffic, Paths and Impairment Summary Network simulation software and hardware appliances can emulate networks and network hardware. Wide Area Network (WAN) emulation, by simulating

More information

Infrastructure for active and passive measurements at 10Gbps and beyond

Infrastructure for active and passive measurements at 10Gbps and beyond Infrastructure for active and passive measurements at 10Gbps and beyond Best Practice Document Produced by UNINETT led working group on network monitoring (UFS 142) Author: Arne Øslebø August 2014 1 TERENA

More information

G DATA TechPaper #0275. G DATA Network Monitoring

G DATA TechPaper #0275. G DATA Network Monitoring G DATA TechPaper #0275 G DATA Network Monitoring G DATA Software AG Application Development May 2016 Contents Introduction... 3 1. The benefits of network monitoring... 3 1.1. Availability... 3 1.2. Migration

More information

Fail-Safe IPS Integration with Bypass Technology

Fail-Safe IPS Integration with Bypass Technology Summary Threats that require the installation, redeployment or upgrade of in-line IPS appliances often affect uptime on business critical links. Organizations are demanding solutions that prevent disruptive

More information

Achieving the Science DMZ

Achieving the Science DMZ Achieving the Science DMZ Eli Dart, Network Engineer ESnet Network Engineering Group Joint Techs, Winter 2012 Baton Rouge, LA January 22, 2012 Outline of the Day Motivation Services Overview Science DMZ

More information

Edge Configuration Series Reporting Overview

Edge Configuration Series Reporting Overview Reporting Edge Configuration Series Reporting Overview The Reporting portion of the Edge appliance provides a number of enhanced network monitoring and reporting capabilities. WAN Reporting Provides detailed

More information

Burst Testing. New mobility standards and cloud-computing network. This application note will describe how TCP creates bursty

Burst Testing. New mobility standards and cloud-computing network. This application note will describe how TCP creates bursty Burst Testing Emerging high-speed protocols in mobility and access networks, combined with qualityof-service demands from business customers for services such as cloud computing, place increased performance

More information

Experiences with Interactive Video Using TFRC

Experiences with Interactive Video Using TFRC Experiences with Interactive Video Using TFRC Alvaro Saurin, Colin Perkins University of Glasgow, Department of Computing Science Ladan Gharai University of Southern California, Information Sciences Institute

More information

Enabling Cloud Architecture for Globally Distributed Applications

Enabling Cloud Architecture for Globally Distributed Applications The increasingly on demand nature of enterprise and consumer services is driving more companies to execute business processes in real-time and give users information in a more realtime, self-service manner.

More information

Troubleshooting Common Issues in VoIP

Troubleshooting Common Issues in VoIP Troubleshooting Common Issues in VoIP 2014, SolarWinds Worldwide, LLC. All rights reserved. Voice over Internet Protocol (VoIP) Introduction Voice over IP, or VoIP, refers to the delivery of voice and

More information

OptiView. Total integration Total control Total Network SuperVision. Network Analysis Solution. No one knows the value of an

OptiView. Total integration Total control Total Network SuperVision. Network Analysis Solution. No one knows the value of an No one knows the value of an Network Analysis Solution Total integration Total control Total Network SuperVision integrated solution better than network engineers and Fluke Networks. Our Network Analysis

More information

OptiView. Total integration Total control Total Network SuperVision. Network Analysis Solution. No one knows the value of an

OptiView. Total integration Total control Total Network SuperVision. Network Analysis Solution. No one knows the value of an No one knows the value of an Network Analysis Solution Total integration Total control Total Network SuperVision integrated solution better than network engineers and Fluke Networks. Our Network Analysis

More information

Visualization, Management, and Control for Cisco IWAN

Visualization, Management, and Control for Cisco IWAN Visualization, Management, and Control for Cisco IWAN Overview Cisco Intelligent WAN (IWAN) delivers an uncompromised user experience over any connection, whether that connection is Multiprotocol Label

More information

Networking Topology For Your System

Networking Topology For Your System This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.

More information

Science DMZs Understanding their role in high-performance data transfers

Science DMZs Understanding their role in high-performance data transfers Science DMZs Understanding their role in high-performance data transfers Chris Tracy, Network Engineer Eli Dart, Network Engineer ESnet Engineering Group Overview Bulk Data Movement a common task Pieces

More information