Investigations on InfiniBand: Efficient Network Buffer Utilization at Scale



Galen M. Shipman (1), Ron Brightwell (2), Brian Barrett (1), Jeffrey M. Squyres (3), and Gil Bloch (4)

(1) Los Alamos National Laboratory, Los Alamos, NM USA, {gshipman,bbarrett}@lanl.gov (LA-UR-07-3198)
(2) Sandia National Laboratories, Albuquerque, NM USA, rbbrigh@sandia.gov
(3) Cisco, Inc., San Jose, CA USA, jsquyres@cisco.com
(4) Mellanox Technologies, Santa Clara, CA USA, gil@mellanox.com

Los Alamos National Laboratory is operated by Los Alamos National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy under contract DE-AC52-06NA25396. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

Abstract. The default messaging model for the OpenFabrics Verbs API is to consume receive buffers in order, regardless of the actual incoming message size, leading to inefficient use of registered memory: many small messages can consume large amounts of registered memory. This paper introduces a new transport protocol in Open MPI, implemented using the existing OpenFabrics Verbs API, that exhibits efficient registered memory utilization. Several real-world applications were run at scale with the new protocol; results show that global network resource utilization becomes more efficient, allowing increased scalability and larger problem sizes on clusters, and in some cases increasing application performance.

1 Introduction

The recent emergence of near-commodity clusters with thousands of nodes connected with InfiniBand (IB) has increased the need to examine scalability issues in MPI implementations for IB. Several of these issues were originally discussed in detail for the predecessor to IB [1], and several possible approaches to overcoming some of the more obvious scalability limitations were proposed. This study examines the scalability, performance, and complexity issues of message buffering for MPI implementations over IB.

The semantics of IB Verbs place a number of constraints on receive buffers. Receive buffers are consumed in FIFO order, and the buffer at the head of the queue must be large enough to hold the next incoming message. If there is no receive buffer at the head of the queue, or if that buffer is not large enough, a network-level protocol is triggered that can significantly degrade communication performance. Because of these constraints, MPI implementations must be careful to ensure that a sufficient number of receive buffers of sufficient size is always available to match incoming messages. Several other details complicate the issue. Most operating systems require message buffers to be registered so that they can be mapped to physical pages. Since registration is time consuming, it is desirable to register memory only when absolutely needed and to use registered memory as efficiently as possible. MPI implementations that are single-threaded may only be able to replenish message buffers when the application makes an MPI library call. Finally, the use of receive buffers is highly dependent on application message passing patterns.

This paper describes a new protocol that uses receive buffers more efficiently and can potentially avoid some of the flow control protocols needed to ensure that receive buffers of the appropriate size are always available. We begin by measuring the efficiency of receive buffer utilization for the current Open MPI implementation for IB. We then propose a new strategy that can significantly increase the efficiency of receive buffer utilization, make more efficient use of registered memory, and potentially reduce the need for MPI-level flow control messages.

The rest of this paper is organized as follows. The next section provides a brief discussion of previous work in this area. Section 3 describes the new protocol, while Section 4 details the test platform and analyzes results for several benchmarks and applications. We conclude in Section 5 with a summary of relevant results and offer some possible avenues of future work in Section 5.1.

2 Background

The complexity of managing multiple sets of buffers across multiple connections was discussed in [1], and a mechanism for sharing a single pool of buffers across multiple connections was proposed. In the absence of such a shared buffer pool for IB, MPI implementations were left to develop user-level flow control strategies to ensure that message buffer space was not exhausted [2]. Eventually, a shared buffer pool mechanism was implemented for IB in the form of the shared receive queue (SRQ), which has been shown to be effective in reducing memory usage [3, 4]. However, the IB SRQ still does not eliminate the need for the user-level and network-level flow control required to ensure the shared buffer pool is not depleted [5]. The shared buffer pool approach was also implemented for MPI on the Portals data movement layer, but using fixed-size buffers was shown to have poor memory efficiency in practice, so an alternative strategy was developed [6]. In Section 3, we propose a similar strategy that can be used to improve memory efficiency for MPI over IB as well.

3 Protocol Description

IB does not natively support receive buffer pools similar to [6], but it is possible to emulate the behavior with buckets of receive buffers of different sizes, each bucket using a single shared receive queue (SRQ). We call this emulation the Bucket SRQ, or B-SRQ.

B-SRQ begins by allocating a set of per-peer receive buffers. These per-peer buffers are for tiny messages (128 bytes plus space for header information) and are regulated by a simple flow control protocol to ensure that tiny messages always have a receive buffer available.^5 The tiny size of 128 bytes was chosen as an optimization to ensure that global operations on a single MPI_DOUBLE element always fall into the per-peer buffer path. The 128-byte packet size also ensures that other control messages (such as rendezvous protocol acknowledgments) use the per-peer allocated resources path as well.

B-SRQ then allocates a large slab of registered memory for receive buffers. The slab is divided into N buckets; bucket B_i is S_i bytes long and contains a set of individual receive buffers, each of size R_i bytes (where R_i != R_j for i, j in [0, N-1] and i != j). Bucket B_n therefore contains S_n / R_n buffers. Each bucket is associated with a unique queue pair (QP) and a unique SRQ. This design point is a result of current IB network adapter limitations: only a single receive queue can be associated with a QP. The QP effectively acts as the sender-side addressing mechanism for the corresponding SRQ bucket on the receiver.^6 Other networks such as Myrinet GM [7] allow the sender to specify a particular receive buffer through a single logical connection, and as such would allow a protocol similar to B-SRQ. Quadrics Elan [8] uses multiple pools of buffers to handle unexpected messages, although the specific details of this protocol have never been published.

In our prototype implementation, S_i = S_j for i != j, and R_i = 2^(8+i) for i in [0, 7]. That is, the slab was divided equally among eight buckets, and individual receive buffer sizes were powers of two ranging from 2^8 to 2^15 bytes. Fig. 1 illustrates the receive buffer allocation strategy. Send buffers are allocated using a similar technique, except that free lists are used, which grow on demand (up to a configurable limit). When an MPI message is scheduled for delivery, a send buffer is dequeued from the free list; the smallest available send buffer that is large enough to hold the message is returned (this bucket geometry and smallest-sufficient-buffer rule are sketched below).

^5 The flow control protocol, while outside the scope of this paper, ensures that the receiver never issues a receiver-not-ready (RNR-NAK) error, which can degrade application performance.
^6 Open MPI uses different protocols for long messages. Therefore, the maximum B-SRQ bucket size is effectively the largest short message that Open MPI will handle.
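To make the bucket geometry concrete, the following C sketch computes the per-bucket buffer sizes and counts and selects the smallest bucket able to hold a given message. All identifiers and the slab size are hypothetical illustrations of the layout described above, not the actual Open MPI code, which additionally attaches an SRQ and QP to each bucket.

    #include <stddef.h>
    #include <stdio.h>

    #define BSRQ_NUM_BUCKETS 8
    #define BSRQ_MIN_SIZE    256u                 /* 2^8 bytes, smallest bucketed buffer */
    #define BSRQ_SLAB_BYTES  (24u * 1024 * 1024)  /* assumed slab size (~24 MB total)    */

    struct bsrq_bucket {
        size_t recv_size;    /* R_i: size of each receive buffer in this bucket */
        size_t num_buffers;  /* S_i / R_i: buffers this bucket contributes      */
    };

    /* Fill in the bucket geometry: equal slab split, power-of-two buffer sizes. */
    static void bsrq_init(struct bsrq_bucket b[BSRQ_NUM_BUCKETS])
    {
        size_t per_bucket = BSRQ_SLAB_BYTES / BSRQ_NUM_BUCKETS;   /* S_i */
        for (int i = 0; i < BSRQ_NUM_BUCKETS; i++) {
            b[i].recv_size   = (size_t)BSRQ_MIN_SIZE << i;        /* 2^(8+i) */
            b[i].num_buffers = per_bucket / b[i].recv_size;
        }
    }

    /* Index of the smallest bucket whose buffers can hold len bytes, or -1 if
     * the message exceeds 2^15 bytes and belongs to the long-message/RDMA path. */
    static int bsrq_select_bucket(size_t len)
    {
        for (int i = 0; i < BSRQ_NUM_BUCKETS; i++)
            if (len <= ((size_t)BSRQ_MIN_SIZE << i))
                return i;
        return -1;
    }

    int main(void)
    {
        struct bsrq_bucket buckets[BSRQ_NUM_BUCKETS];
        bsrq_init(buckets);
        for (int i = 0; i < BSRQ_NUM_BUCKETS; i++)
            printf("bucket %d: %zu-byte buffers x %zu\n",
                   i, buckets[i].recv_size, buckets[i].num_buffers);
        printf("a 300-byte payload maps to bucket %d\n", bsrq_select_bucket(300));
        printf("a 5000-byte payload maps to bucket %d\n", bsrq_select_bucket(5000));
        return 0;
    }

In this sketch a 300-byte payload lands in the 512-byte bucket and a 5000-byte payload in the 8192-byte bucket, which is the same smallest-sufficient-buffer policy used for send buffer selection in the prototype.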

[Fig. 1. Open MPI's B-SRQ receive resources: per-peer receive resources (fixed regardless of the number of peers) alongside the shared receive resources, where SRQ slab size = total SRQ resources, SRQ bucket size = SRQ slab size / number of buckets, and receive descriptors per bucket = SRQ bucket size / receive descriptor size.]

4 Results

Two protocols are analyzed and compared: Open MPI v1.2's default protocol for short messages, and Open MPI v1.2's default protocol modified to use B-SRQ for short messages (denoted "v1.2bsrq").

The default protocol for short messages in Open MPI v1.2 uses a credit-based flow-control algorithm to send messages to fixed-size buffers on the receiver. When the sender exhausts its credits, messages are queued until the receiver returns credits (via an ACK message) to the sender; the sender is then free to resume sending (a rough sketch of this mechanism appears after the list below). By default, SRQ is not used in Open MPI v1.2 because of a slight latency penalty: SRQ may be more efficient in terms of resource utilization, but it can be slightly slower than normal receive queues on some network adapters [3, 4].

Open MPI v1.2bsrq implements the protocol described in Section 3. It has the following advantages over Open MPI v1.2's default protocol:

- Receive buffer utilization is more efficient.
- More receive buffers can be posted in the same amount of memory.
- There is no flow control protocol overhead for messages using the SRQ QPs.^7

^7 Without pre-negotiating a fixed number of SRQ receive buffers for each peer, there is no way for senders to know when receive buffers have been exhausted in an SRQ [3].
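For illustration only, the following C sketch shows the shape of such a credit-based sender path: a send consumes a credit when one is available and is otherwise queued until the receiver's ACK returns credits. The names, the record layout, and the post_send stand-in are assumptions; this is not the Open MPI source.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* One queued message awaiting credits. */
    struct pending_msg {
        const void         *payload;
        size_t              len;
        struct pending_msg *next;
    };

    /* Per-peer sender-side state for the credit-based short-message path. */
    struct peer_flow_ctl {
        int                 credits; /* remote receive buffers we may still consume */
        struct pending_msg *head;    /* FIFO of messages waiting for credits        */
        struct pending_msg *tail;
    };

    /* Stand-in for the actual verbs post-send call (an assumption, not a real API). */
    static void post_send(const void *payload, size_t len)
    {
        (void)payload;
        printf("posted send of %zu bytes\n", len);
    }

    /* Send immediately if a credit is available; otherwise queue the message. */
    static bool flow_ctl_send(struct peer_flow_ctl *p, const void *payload, size_t len)
    {
        if (p->credits > 0) {
            p->credits--;
            post_send(payload, len);
            return true;
        }
        struct pending_msg *m = malloc(sizeof *m);
        m->payload = payload;
        m->len = len;
        m->next = NULL;
        if (p->tail) p->tail->next = m; else p->head = m;
        p->tail = m;
        return false;
    }

    /* The receiver's ACK returned n credits: drain as much of the queue as possible. */
    static void flow_ctl_credits_returned(struct peer_flow_ctl *p, int n)
    {
        p->credits += n;
        while (p->credits > 0 && p->head) {
            struct pending_msg *m = p->head;
            p->head = m->next;
            if (!p->head) p->tail = NULL;
            p->credits--;
            post_send(m->payload, m->len);
            free(m);
        }
    }

    int main(void)
    {
        struct peer_flow_ctl peer = { .credits = 2, .head = NULL, .tail = NULL };
        char buf[64] = {0};
        flow_ctl_send(&peer, buf, 64);        /* sent immediately              */
        flow_ctl_send(&peer, buf, 64);        /* sent immediately, credits = 0 */
        flow_ctl_send(&peer, buf, 64);        /* queued until credits return   */
        flow_ctl_credits_returned(&peer, 1);  /* ACK arrives; queued send goes */
        return 0;
    }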

4.1 Experimental Setup

The Coyote cluster at Los Alamos National Laboratory was used to test the B-SRQ protocol. Coyote is a 1,290-node AMD Opteron cluster divided into 258-node sub-clusters. Each sub-cluster is an island of IB connectivity: nodes are fully connected via IB within a sub-cluster but are physically disjoint from the IB fabric of other sub-clusters. Each node has two single-core 2.6 GHz AMD Opteron processors, 8 GB of RAM, and a single-port Mellanox Sinai/InfiniHost III SDR IB adapter (firmware revision 1.0.800). The largest MPI job that can be run over the IB network is therefore 516 processors.

Both versions of Open MPI (v1.2 and v1.2bsrq) were used to run the NAS Parallel Benchmarks (NPBs) and two Los Alamos-developed applications. Wall-clock execution time was measured to compare overall application performance with and without B-SRQ. Instrumentation was added to both Open MPI versions to measure the effectiveness and efficiency of B-SRQ by logging send and receive buffer utilization, defined as message size / buffer size. Performance results used the non-instrumented versions. Buffer utilization of 100% is ideal, meaning that the buffer is exactly the same size as the data it contains. Since IB is a resource-constrained network, the buffer utilization of an application can have a direct impact on its overall performance. For example, if receive buffer utilization is poor and the incoming message rate is high, available receive buffers can be exhausted, resulting in an RNR-NAK (and therefore performance degradation) [3, 4]. The frequency of message sizes received via IB was also recorded. Note that IB-level message sizes may differ from MPI-level message sizes, as Open MPI may segment an MPI-level message, use RDMA for some messages, and send control messages over IB. This data is presented in percentile graphs, showing the percentage of receives at or below a given size (in bytes).
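As a concrete reading of these two metrics, the short C sketch below computes mean buffer utilization and one point of such a percentile curve from logged receive sizes. The record layout and sample values are hypothetical; this is not the actual instrumentation code.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical instrumentation record: one logged IB-level receive. */
    struct recv_log {
        size_t msg_size;  /* bytes actually received           */
        size_t buf_size;  /* size of the buffer that held them */
    };

    /* Mean buffer utilization, defined as message size / buffer size. */
    static double mean_utilization(const struct recv_log *log, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += (double)log[i].msg_size / (double)log[i].buf_size;
        return n ? sum / (double)n : 0.0;
    }

    /* Percentage of receives at or below size bytes (one point on a percentile curve). */
    static double pct_at_or_below(const struct recv_log *log, size_t n, size_t size)
    {
        size_t count = 0;
        for (size_t i = 0; i < n; i++)
            if (log[i].msg_size <= size)
                count++;
        return n ? 100.0 * (double)count / (double)n : 0.0;
    }

    int main(void)
    {
        struct recv_log log[] = { {100, 256}, {400, 512}, {900, 1024}, {3000, 4096} };
        size_t n = sizeof log / sizeof log[0];
        printf("mean utilization: %.1f%%\n", 100.0 * mean_utilization(log, n));
        printf("receives <= 1024 bytes: %.1f%%\n", pct_at_or_below(log, n, 1024));
        return 0;
    }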

[Fig. 2. NAS Parallel Benchmark Class D results on 256 nodes for (a) message size (percentile of InfiniBand receive size in bytes), (b) receive buffer efficiency (min/avg/max rank, default vs. B-SRQ), and (c) wall-clock execution time (default vs. B-SRQ).]

4.2 NAS Parallel Benchmarks

Results using class D problems with the NPBs are shown in Figure 2(a). The benchmarks use a high percentage of small messages at the IB level, with the notable exception of MG, which uses some medium-sized messages at the IB level. Some of these benchmarks also send larger messages, triggering a different message passing protocol in Open MPI that utilizes both rendezvous techniques and RDMA [9]. Long messages effectively avoid the use of dedicated receive buffers by delivering data directly into the application's receive buffer. Figure 2(b) shows that Open MPI v1.2's default protocol exhibits poor receive buffer utilization because receive buffers come in only a small number of fixed sizes; these buffer sizes provide good performance but poor buffer utilization. The B-SRQ protocol in v1.2bsrq provides good receive buffer utilization for the NPB benchmarks, with increases in efficiency of over 50% for the MG benchmark. Overall B-SRQ performance decreases slightly, as shown in Figure 2(c). Measurements of class B and C benchmarks at 256 processes do not exhibit this performance degradation, which is currently being investigated in conjunction with several IB vendors.

[Fig. 3. SAGE results for (a) message size at 256 processes, (b) buffer efficiency at varying process counts, and (c) wall-clock execution time at varying process counts.]

4.3 Los Alamos Applications

Two Los Alamos National Laboratory applications were used to evaluate B-SRQ. The first, SAGE, is an adaptive-grid Eulerian hydrocode that uses adaptive mesh refinement. SAGE is typically run on thousands of processors and has weak scaling characteristics. Message sizes vary, typically in the tens to hundreds of kilobytes. Figure 3(a) shows that most IB-level messages received were less than 128 bytes, with larger messages using the RDMA protocol. Figure 3(b) illustrates poor memory utilization in Open MPI v1.2 when run at 256 processes. The new protocol exhibits much better receive buffer utilization, as smaller buffers are used to handle the data received.

[Fig. 4. SWEEP results for (a) message size at 256 processes, (b) buffer efficiency at varying process counts, and (c) wall-clock execution time at varying process counts.]

As with the NPBs, there is a slight performance impact, but it is minimal at 256 processes, as illustrated in Figure 3(c).

The second application evaluated was SWEEP. SWEEP uses a pipelined wavefront communication pattern. Messages are small, and communication in the wavefront is limited to spatial neighbors on a cube. The wavefront direction and starting point are not fixed; they can change during the run to start from any corner of the cube. Received message sizes are highly regular, as shown in Figure 4(a). Figure 4(b) shows that receive buffer efficiency is improved by the B-SRQ algorithm, although to a lesser extent than in the other applications. Figure 4(c) shows that performance was dramatically increased by the use of the B-SRQ protocol. This increase comes from B-SRQ providing a much larger number of receive resources to any one peer (receive resources are shared), whereas the v1.2 protocol provides a much smaller number of receive resources per peer (defaulting to 8). This small number of receive resources prevents SWEEP's high message rate from keeping the IB network full. The B-SRQ protocol allows SWEEP to have many more outstanding sends and receives, as a larger number of receive resources is available to a rank's 3-D neighbors, thereby increasing performance. The same improvement could also be realized with v1.2's per-peer allocation policy, but it would require allocating a larger number of per-peer resources to each process. B-SRQ provides good performance without such application-specific tuning while conserving receive resources.

5 Conclusions

Evaluation of application communication over Open MPI revealed that the size of data received at the IB level, while often quite small, can vary substantially from application to application. Using a single pool of fixed-size receive buffers may therefore result in inefficient receive buffer utilization. Receive buffer depletion negatively impacts performance, particularly at large scale. Earlier work focused on complex receiver-side depletion detection and recovery, or on avoiding depletion altogether by tuning receive buffer sizes and counts on a per-application (and sometimes per-run) basis. The B-SRQ protocol focuses on avoiding resource depletion through more efficient receive buffer utilization. Experimental results are promising: by better matching receive buffer sizes to the size of received data, application runs were constrained to an overall receive buffer memory footprint of less than 25 MB. Additionally, receive queues were never depleted, and no application-specific tuning of receive buffer resources was required.

5.1 Future Work

While the B-SRQ protocol results in better receive buffer utilization, other receive buffer allocation methods are possible; for example, buffer sizes in increasing powers of two may not be optimal. Through measurements collected via Open MPI v1.2bsrq, we will explore receive buffer allocation policies and their potential impact on overall receive buffer utilization.

Recent announcements by IB vendors indicate that network adapters will soon support associating a single QP with multiple SRQs by specifying the destination SRQ on the send side. The ConnectX IB HCA (an InfiniBand Host Channel Adapter recently announced by Mellanox) extends the InfiniBand specification and defines a new transport service called SRC (Scalable Reliable Connection). The SRC transport service separates the transport context from the receive work queue and allows multiple receive queues to be associated with a single connection. Using the SRC transport service, the sender indicates the destination receive queue when posting a send request, allowing de-multiplexing at the remote receiver. With this new paradigm, an efficient B-SRQ can be implemented using a single connection context and a single send queue to send data to different remote SRQs: when posting a work request, the sender indicates the appropriate destination SRQ according to message size (and priority).

References

[1] Brightwell, R., Maccabe, A.B.: Scalability limitations of VIA-based technologies in supporting MPI. In: Proceedings of the Fourth MPI Developers and Users Conference (2000)
[2] Liu, J., Panda, D.K.: Implementing efficient and scalable flow control schemes in MPI over InfiniBand. In: Workshop on Communication Architecture for Clusters (CAC '04) (2004)
[3] Shipman, G.M., Woodall, T.S., Graham, R.L., Maccabe, A.B., Bridges, P.G.: InfiniBand scalability in Open MPI. In: International Parallel and Distributed Processing Symposium (IPDPS '06) (2006)

[4] Sur, S., Chai, L., Jin, H.W., Panda, D.K.: Shared receive queue based scalable MPI design for InfiniBand clusters. In: International Parallel and Distributed Processing Symposium (IPDPS '06) (2006)
[5] Sur, S., Koop, M.J., Panda, D.K.: High-performance and scalable MPI over InfiniBand with reduced memory usage: An in-depth performance analysis. In: ACM/IEEE International Conference on High-Performance Computing, Networking, Storage, and Analysis (SC '06) (2006)
[6] Brightwell, R., Maccabe, A.B., Riesen, R.: Design, implementation, and performance of MPI on Portals 3.0. International Journal of High Performance Computing Applications 17(1) (2003)
[7] Myrinet: Myrinet GM documentation. http://www.myri.com/scs/documentation.html
[8] Quadrics: Quadrics Elan Programming Manual v5.2. http://www.quadrics.com/ (2005)
[9] Shipman, G.M., Woodall, T.S., Bosilca, G., Graham, R.L., Maccabe, A.B.: High performance RDMA protocols in HPC. In: Proceedings, 13th European PVM/MPI Users' Group Meeting, Lecture Notes in Computer Science, Bonn, Germany. Springer-Verlag (2006)