UDT as an Alternative Transport Protocol for GridFTP
John Bresnahan,1,2,3 Michael Link,1,2 Rajkumar Kettimuthu,1,2 and Ian Foster1,2,3

1 Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL
2 Computation Institute, University of Chicago, Chicago, IL
3 Department of Computer Science, University of Chicago, Chicago, IL

Abstract

GridFTP has emerged as a de facto standard for secure, reliable, high-performance data transfer across resources on the Grid. By default, GridFTP uses TCP as its transport-level communication protocol. It is well known that TCP Reno cannot provide satisfactory performance on high-speed, long-delay networks. In this paper, we describe how we enabled GridFTP to use UDT as an alternative transport-level communication protocol. We compare the performance of GridFTP over UDT with that of GridFTP over TCP on various testbeds, and we study the impact of UDT on bulk TCP flows.

1. Introduction

GridFTP [1] is commonly used as a data transfer protocol in the Grid. The GridFTP protocol extends the standard FTP protocol and provides a superset of the features offered by the various Grid storage systems currently in use. Key features of GridFTP include the following:

- Security: The Globus GridFTP [2] server and client use the GSI protocol, which not only provides a secure Public Key Infrastructure (PKI) interface but also adds delegated authority via X.509 certificates. Delegated authority is critical for large collaborative efforts and enables single sign-on in virtual organizations, eliminating the need for users to enter passwords at what can be hundreds of different sites. Kerberos is also supported.

- Parallelism: On wide-area links, using multiple TCP streams in parallel between a single source and destination can improve aggregate bandwidth relative to that achieved by a single stream. GridFTP supports such parallelism via FTP command extensions and data channel extensions.
- Striping: GridFTP also supports striped data movement, in which data distributed across, or generated by, a set of computers or storage systems at one end of a network is transferred to a remote set of storage systems or computers at the other end.

- Third-Party Control: GridFTP allows secure third-party clients to initiate transfers between remote sites, facilitating the management of large datasets for distributed communities.

- Partial File Transfer: Some applications benefit from transferring portions of files rather than complete files. GridFTP supports requests for arbitrary file regions.

- Reliability: GridFTP provides support for reliable and restartable data transfers.

- Negotiation of TCP buffer/window sizes: GridFTP employs FTP command and data channel extensions to support both automatic and manual negotiation of TCP buffer sizes, for large files as well as for large sets of small files.

The Globus implementation of GridFTP provides a software suite optimized for the gamut of data access issues, from bulk file transfer to the details of getting data out of complex storage systems at sites. Although GridFTP supports multiple TCP streams to overcome the limitations of the TCP congestion control algorithm on long, fat networks [3-5], it still cannot always utilize the available bandwidth optimally. The UDP-based Data Transfer protocol (UDT) [6] is a popular application-level data transport protocol that addresses the limitations of TCP in fast, long-distance networks. In this paper, we describe the following:

- Development of a Globus XIO driver for UDT
- Enhancements to Globus GridFTP to use UDT as an alternative transport protocol

- An experimental study comparing the performance of GridFTP with TCP and with UDT as the transport-level protocol

- An experimental study of the impact of GridFTP-over-UDT flows on GridFTP-over-TCP flows

The rest of the paper is organized as follows. In Section 2, we provide an introduction to UDT. In Section 3, we describe the development of the Globus XIO driver for UDT. In Section 4, we highlight the enhancements made to Globus GridFTP. In Section 5, we present the experimental studies, and in Section 6, we summarize our results.

2. UDT

TCP's congestion control algorithm is window based: every time a congestion event is detected, the window size is reduced by half. After the first congestion event is detected, the window size is increased by at most one segment each round-trip time, regardless of how many acknowledgments are received in that round trip. This congestion control algorithm has significant limitations on fast, long-distance networks.

Researchers have proposed numerous solutions to address the limitations of TCP's AIMD-based congestion control mechanism [7]. These include improvements to TCP itself [8-14], new transport protocols such as XCP [15] and XTP [16], and reliable layers on top of UDP [6, 17-23]. UDT [6] is an application-level data transport protocol that uses UDP to transfer bulk data while implementing its own reliability and congestion control mechanisms. UDT achieves good performance on high-bandwidth, high-delay networks in which TCP has significant limitations. UDT transfers data in UDP packets and retransmits lost packets to guarantee reliability. UDT's congestion control algorithm combines rate-based and window-based approaches, tuning the inter-packet time and the window size, respectively. Both congestion control parameters are updated dynamically by a bandwidth estimation technique.
UDT is popular among the application-level solutions that address TCP's limitations, and it has been shown to be friendly to short-lived TCP flows [6].

3. Globus XIO Driver for UDT

GridFTP uses the Globus eXtensible Input/Output (XIO) [24] interface to invoke network I/O operations. The Globus XIO framework presents a single, standard open/close, read/write interface to many different protocol implementations, including TCP, UDP, HTTP, file, and now UDT. The protocol implementations are called drivers. Once created, a driver can be dynamically loaded and stacked by any Globus XIO application. Many drivers have been created by using the native Globus XIO APIs, including TCP, UDP, HTTP, File, Mode E [1], Telnet, Queuing, Ordering, GSI, and Multicast Transport [25]. For other, evolving protocols for which an implementation already exists, requiring developers to reimplement their protocols with the native Globus XIO APIs would necessitate considerable effort. Recognizing the benefit of providing wrapper code to hook such libraries into the Globus XIO driver interface, we introduced the wrapblock feature of Globus XIO, a simple extension to the original Globus XIO driver interface that allows much easier creation of drivers.

The stock Globus XIO driver interface is written in an asynchronous model. While this is the most scalable and efficient model, it is also the most difficult to code against. Further, many existing protocol implementations do not have asynchronous APIs, and transforming them into an
asynchronous model can be time consuming. The wrapblock functionality uses thread pooling and event callback techniques to transform the asynchronous interface into a blocking interface, making it easier to create a driver from an existing library.

    static globus_result_t
    globus_l_xio_udt_write(
        void *                     driver_specific_handle,
        const globus_xio_iovec_t * iovec,
        int                        iovec_count,
        globus_size_t *            nbytes)
    {
        globus_result_t            result;
        xio_l_udt_handle_t *       handle;
        int                        sent;
        GlobusXIOName(globus_l_xio_udt_write);

        handle = (xio_l_udt_handle_t *) driver_specific_handle;

        /* Pass-through call to the UDT library. The return value is kept
           in a signed int so that the error check is valid before it is
           stored in the unsigned nbytes out-parameter. */
        sent = UDT::send(
            handle->sock, (char *) iovec[0].iov_base, iovec[0].iov_len, 0);
        if(sent < 0)
        {
            result = GlobusXIOUdtError("UDT::send failed");
            goto error;
        }
        *nbytes = (globus_size_t) sent;

        return GLOBUS_SUCCESS;

    error:
        return result;
    }

Fig. 1. Sample wrapblock write interface implementation for UDT.

To illustrate this ease of creation, we present in Figure 1 the code required to implement write functionality for UDT. The implementation requires a simple pass-through call to the UDT library. The number of bytes written is passed back to Globus XIO by reference in the nbytes parameter. The data structure xio_l_udt_handle_t is created in the open interface call and is passed to all other interface calls as a generic void * memory pointer, which allows the developer to maintain connection state across operations. Similar code is written to handle reading data. In the open and close interface functions, resources are initialized and cleaned up, as would be expected. The code inside the driver looks much like a simple program using the third-party API; there is little Globus XIO-specific code beyond the interface function signatures. Driver-specific hooks allow the user to interact directly with the driver in order to provide it with optimization parameters. This interaction is handled via cntl functions that look much like the standard UNIX ioctl().
4. GridFTP Enhancements

A new command has been added to the GridFTP protocol to add drivers to the data channel stack:

    SITE SETDCSTACK {<driver name>[:<driver options>],}+

The second parameter to the SITE command is a comma-separated list of driver names, each optionally followed by a colon and a set of driver-specific, URL-encoded options. From left to right, the driver names form a stack from bottom to top; for example, SITE SETDCSTACK udt selects the UDT driver in place of the default TCP driver at the bottom of the data channel stack. For security reasons, the GridFTP server does not allow clients to load arbitrary XIO drivers into the server. The GridFTP server administrator must whitelist each driver individually, using the -dc-whitelist option to the server.

5. Experimental Studies

We compare the performance of Iperf, scp, bbcp, GridFTP over TCP (both single and multiple streams), GridFTP over UDT, and raw UDT on four different networks: a wide-area network between Argonne National Laboratory (ANL) and the University of Auckland, New Zealand, with a round-trip time of 204 ms; a wide-area network between ANL and the Information Sciences Institute (ISI) in Los Angeles, with a round-trip time of 60 ms; a wide-area network between the Ohio State University (BMI) and the JA site in Japan, which is part of the Japan Gigabit Network II project, with a round-trip time of 193 ms; and a wide-area network between the JA site and Oak Ridge National Laboratory (ORNL), with a round-trip time of 194 ms. To the best of our knowledge, all pairs of sites used in the experiments have 1 Gbit/s (maximum possible bandwidth)
connectivity. The operating system on all the nodes used in the experiments is Linux, and TCP Reno is used in all the experiments.

5.1 Throughput

In these experiments, 1 GB of data was transferred between the end points. Table 1 shows the throughput achieved in Mbit/s.

Table 1: Throughput (in Mbit/s) achieved when transferring 1 GB of data over the four wide-area networks (ANL-NZ, ANL-ISI, BMI-Japan, and Japan-ORNL), using various mechanisms: Iperf (single and multiple streams); scp (1 stream); bbcp, memory and disk (1 and 8 streams); GridFTP over TCP, memory and disk (1 and 8 streams); GridFTP over UDT, memory and disk; and raw UDT, memory and disk.

We note that the performance of GridFTP over TCP is comparable to that of Iperf and is significantly better than that of scp and bbcp. GridFTP over UDT outperforms the best throughput obtained with TCP by a factor of 4 on two testbeds (ANL-NZ and ANL-ISI).

Table 2: Throughput (in Mbit/s) for nonconcurrent and concurrent GridFTP over TCP and GridFTP over UDT flows on the Japan-ORNL testbed. The configurations measured are: nonconcurrent GridFTP-TCP and GridFTP-UDT flows; two concurrent GridFTP-TCP flows; two concurrent GridFTP-UDT flows; concurrent flows with 1 TCP and 1 UDT transfer; and concurrent flows with 2 TCP and 1 UDT transfer.

GridFTP over UDT outperforms GridFTP over TCP (single stream) by a factor of 3 on the other two testbeds (BMI-Japan and Japan-ORNL). GridFTP over TCP with 8 streams performs as well as GridFTP over UDT on the BMI-Japan testbed and, interestingly, outperforms GridFTP over UDT on the Japan-ORNL testbed. Two concurrent UDT flows achieve about 400 Mbit/s each (Table 2), yet a single UDT flow alone also gives only about 400 Mbit/s on the Japan-ORNL link.
We suspect that, for some reason, the probing mechanism employed by UDT to determine the available bandwidth detects only about 400 Mbit/s of available bandwidth on the Japan-ORNL link. We could not obtain bbcp numbers on the ANL-NZ testbed because of firewall problems. The BMI-Japan and Japan-ORNL testbeds had autotuning of the TCP buffer size enabled; hence, the TCP buffer size was not explicitly set to the bandwidth-delay product there (in fact, setting the TCP buffer size to the bandwidth-delay product on these nodes reduced the throughput severalfold). On the other two testbeds, the TCP buffer size was
set to the bandwidth-delay product for the GridFTP over TCP transfers. We note that the performance of GridFTP over UDT is close (about 95% or more) to that of raw UDT for memory-to-memory transfers. Interestingly, GridFTP over UDT outperforms raw UDT for disk-to-disk transfers; we believe this is because the UDT test application may not be optimized for disk I/O.

5.2 Impact of UDT on TCP Flows

We ran GridFTP over UDT and GridFTP over TCP transfers concurrently to measure the impact of UDT on bulk TCP flows. For these tests, we performed only memory-to-memory transfers (/dev/zero to /dev/null). We did not transfer a fixed amount of data; rather, we ran the transfers until they attained steady throughput. This is why the nonconcurrent UDT flows in Tables 2 and 3 show higher throughput than those in Table 1 for the BMI-Japan and Japan-ORNL testbeds.

Table 3: Throughput (in Mbit/s) for nonconcurrent and concurrent GridFTP over TCP and GridFTP over UDT flows on the BMI-Japan testbed.

    Nonconcurrent flows:                 GridFTP-TCP  80.1    GridFTP-UDT  -
    Concurrent TCP flows:                GridFTP-TCP  78.6    GridFTP-TCP  -
    Concurrent UDT flows:                GridFTP-UDT  -       GridFTP-UDT  -
    Concurrent flows (1 TCP and 1 UDT):  GridFTP-TCP  42.4    GridFTP-UDT  -
    Concurrent flows (2 TCP and 1 UDT):  GridFTP-TCP  26.5    GridFTP-TCP  34.6    GridFTP-UDT  -

From Table 2, we see that UDT does not affect the throughput of the concurrent bulk TCP flows on the Japan-ORNL testbed, whereas on the BMI-Japan testbed UDT does have some impact on the bulk TCP flows. We also note from Table 3 that UDT reduces its rate to accommodate the TCP flows.

We also ran tests to compare the system resources utilized by GridFTP-TCP and GridFTP-UDT transfers. For these tests, we used the TeraGrid [26] network between ANL and ORNL, as the performance of TCP and UDT is comparable on this testbed; both achieved a throughput of around 700 Mbit/s.
The CPU utilization for TCP transfers was in the range of 30-50%, whereas for UDT transfers it was around 80%. The memory consumption was around 0.2% for TCP and 1% for UDT.

6. Summary

We described how we enhanced the Globus GridFTP server and framework to use UDT as an alternative to TCP as the transport-level protocol. We discussed the wrapblock feature of Globus XIO, which aided in the creation of the UDT driver. We presented experimental studies comparing the performance of GridFTP over UDT with that of GridFTP over TCP, showing that UDT outperformed single-stream TCP on all four testbeds and multistream TCP on three testbeds. We also reported on the impact of UDT on bulk TCP flows and on the system resources utilized by UDT and TCP transfers. Overall, although UDT uses more system resources than TCP, it achieves significantly higher throughput and reduces its rate to accommodate competing TCP flows.

Acknowledgments

This work was supported in part by the Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy, under Contract DE-AC02-06CH11357, and in part by the National Science Foundation's CDIGS project.

References

[1] W. Allcock, GridFTP: Protocol extensions to
FTP for the Grid, Global Grid Forum GFD-R-P.020.
[2] W. Allcock, J. Bresnahan, R. Kettimuthu, M. Link, C. Dumitrescu, I. Raicu, and I. Foster, The Globus striped GridFTP framework and server, SC'05, ACM Press, 2005.
[3] D. Katabi, M. Handley, and C. Rohrs, Internet congestion control for future high bandwidth-delay product environments, ACM SIGCOMM, 2002.
[4] S. Floyd, High-speed TCP for large congestion windows, RFC 3649, Experimental, December 2003.
[5] C. Jin, D. X. Wei, and S. H. Low, FAST TCP: Motivation, architecture, algorithms, performance, in Proceedings of IEEE Infocom, March 2004.
[6] Y. Gu and R. L. Grossman, UDT: UDP-based data transfer for high-speed wide area networks, Computer Networks 51(7), May 2007.
[7] M. Allman, V. Paxson, and W. Stevens, TCP congestion control, IETF RFC 2581, April 1999.
[8] S. Floyd, HighSpeed TCP for large congestion windows, IETF RFC 3649, December 2003.
[9] C. Jin, D. X. Wei, and S. H. Low, FAST TCP: Motivation, architecture, algorithms, performance, IEEE Infocom, 2004.
[10] T. Kelly, Scalable TCP: Improving performance in high-speed wide area networks, First International Workshop on Protocols for Fast Long-Distance Networks, 2003.
[11] S. Ha, I. Rhee, and L. Xu, CUBIC: A new TCP-friendly high-speed TCP variant, SIGOPS Operating Systems Review 42(5), July 2008.
[12] R. Shorten and D. Leith, H-TCP: TCP for high-speed and long-distance networks, Second International Workshop on Protocols for Fast Long-Distance Networks, February 16-17, 2004, Argonne, Illinois, USA.
[13] T. Hatano, M. Fukuhara, H. Shigeno, and K. Okada, TCP-friendly SQRT TCP for high speed networks, in Proceedings of APSITT 2003, November 2003.
[14] L. Xu, K. Harfoush, and I. Rhee, Binary increase congestion control (BIC) for fast long-distance networks, INFOCOM 2004, March 2004.
[15] D. Katabi, M. Handley, and C. Rohrs, Congestion control for high bandwidth-delay product networks, SIGCOMM, 2002.
[16] W. T. Strayer, M. J. Lewis, and R. E. Cline Jr., XTP as transport protocol for distributed parallel processing,
USENIX Symposium on High-Speed Networking.
[17] Tsunami network protocol implementation.
[18] A. Chien, T. Faber, A. Falk, J. Bannister, R. Grossman, and J. Leigh, Transport protocols for high performance: Whither TCP?, Communications of the ACM 46(11), 2003.
[19] D. Clark, M. Lambert, and L. Zhang, NETBLT: A bulk data transfer protocol, IETF RFC 998, 1987.
[20] Y. Gu and R. L. Grossman, UDT: An application level transport protocol for Grid computing, Second International Workshop on Protocols for Fast Long-Distance Networks, 2004.
[21] E. He, J. Leigh, O. Yu, and T. A. DeFanti, Reliable Blast UDP: Predictable high performance bulk data transfer, IEEE Cluster Computing.
[22] H. Sivakumar, R. L. Grossman, M. Mazzucco, Y. Pan, and Q. Zhang, Simple available bandwidth utilization library for high-speed wide area networks, Journal of Supercomputing.
[23] Q. Wu and N. S. V. Rao, Protocol for high-speed data transport over dedicated channels, Third International Workshop on Protocols for Fast Long-Distance Networks (PFLDnet 2005), Lyon, France, February 2005.
[24] W. Allcock, J. Bresnahan, R. Kettimuthu, and J. Link, The Globus eXtensible Input/Output System (XIO): A protocol-independent I/O system for the Grid, in Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05), Workshop 4, April 4-8, 2005, IEEE Computer Society, Washington, DC.
[25] K. Jeacle and J. Crowcroft, A multicast transport driver for Globus XIO, in WETICE'05: Proceedings of the 14th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprise, IEEE Computer Society, 2005.
[26] TeraGrid.
Concepts and Architecture of the Grid. Summary of Grid 2, Chapter 4
Concepts and Architecture of the Grid Summary of Grid 2, Chapter 4 Concepts of Grid Mantra: Coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations Allows
1000Mbps Ethernet Performance Test Report 2014.4
1000Mbps Ethernet Performance Test Report 2014.4 Test Setup: Test Equipment Used: Lenovo ThinkPad T420 Laptop Intel Core i5-2540m CPU - 2.60 GHz 4GB DDR3 Memory Intel 82579LM Gigabit Ethernet Adapter CentOS
File Transfer Protocol Performance Study for EUMETSAT Meteorological Data Distribution
Scientific Papers, University of Latvia, 2011. Vol. 770 Computer Science and Information Technologies 56 67 P. File Transfer Protocol Performance Study for EUMETSAT Meteorological Data Distribution Leo
Low-rate TCP-targeted Denial of Service Attack Defense
Low-rate TCP-targeted Denial of Service Attack Defense Johnny Tsao Petros Efstathopoulos University of California, Los Angeles, Computer Science Department Los Angeles, CA E-mail: {johnny5t, pefstath}@cs.ucla.edu
Sector vs. Hadoop. A Brief Comparison Between the Two Systems
Sector vs. Hadoop A Brief Comparison Between the Two Systems Background Sector is a relatively new system that is broadly comparable to Hadoop, and people want to know what are the differences. Is Sector
Implementing Network Attached Storage. Ken Fallon Bill Bullers Impactdata
Implementing Network Attached Storage Ken Fallon Bill Bullers Impactdata Abstract The Network Peripheral Adapter (NPA) is an intelligent controller and optimized file server that enables network-attached
TCP tuning guide for distributed application on wide area networks 1.0 Introduction
TCP tuning guide for distributed application on wide area networks 1.0 Introduction Obtaining good TCP throughput across a wide area network usually requires some tuning. This is especially true in high-speed
The Problem with TCP. Overcoming TCP s Drawbacks
White Paper on managed file transfers How to Optimize File Transfers Increase file transfer speeds in poor performing networks FileCatalyst Page 1 of 6 Introduction With the proliferation of the Internet,
Packet-based Network Traffic Monitoring and Analysis with GPUs
Packet-based Network Traffic Monitoring and Analysis with GPUs Wenji Wu, Phil DeMar [email protected], [email protected] GPU Technology Conference 2014 March 24-27, 2014 SAN JOSE, CALIFORNIA Background Main
Monitoring Clusters and Grids
JENNIFER M. SCHOPF AND BEN CLIFFORD Monitoring Clusters and Grids One of the first questions anyone asks when setting up a cluster or a Grid is, How is it running? is inquiry is usually followed by the
What is Analytic Infrastructure and Why Should You Care?
What is Analytic Infrastructure and Why Should You Care? Robert L Grossman University of Illinois at Chicago and Open Data Group [email protected] ABSTRACT We define analytic infrastructure to be the services,
Grid Scheduling Dictionary of Terms and Keywords
Grid Scheduling Dictionary Working Group M. Roehrig, Sandia National Laboratories W. Ziegler, Fraunhofer-Institute for Algorithms and Scientific Computing Document: Category: Informational June 2002 Status
