10GbE vs Infiniband 4x Performance Tests
Deep Computing & Network Transformation Center
ATS-PSSC Benchmark Report
Last update: 24-Jul-07

Authors and roles:
Erwan Auffret       IBM IT Specialist, Network Transformation Center
François Corradino  IBM IT Specialist, Deep Computing Benchmark Center
Ludovic Enault      IBM IT Specialist, Deep Computing Benchmark Center
Charles Ferland     BladeNetwork, Director of Sales, EMEA (cferland@bladenetwork.net, +49 151 1265 0830)

Copyright IBM Corporation, All Rights Reserved 2005

Table of Contents
1. Introduction
2. Objectives
3. Benchmark Infrastructure
   3.1 Hardware List
4. TCP/IP NetPerf testing
   4.1 NetPerf 2.4.3
   4.2 Results
5. HPC testing
   5.1 Intel MPI Benchmark
       5.1.1 Results
   5.2 High Performance Computing Challenge
       5.2.1 HPL
       5.2.2 PTRANS
       5.2.3 Latency comparison
       5.2.4 Network bandwidth comparison
   5.3 VASP (Vienna Ab-initio Simulation Package)
       5.3.1 TEST 1
       5.3.2 TEST 2
6. Conclusions
7. Contacts

1. Introduction

With the announcement of the new BladeNetwork 10GbE switch for IBM BladeCenter H, it was decided to test 10GbE adapters in High Performance Computing (HPC) and Next Generation Networks (NGN) environments. The adapters compared were the NetXen 10GbE adapter, the Topspin InfiniBand adapter, and a low-latency 10GbE adapter for blades manufactured by Myricom. A set of standard HPC benchmarks and TCP benchmarks was run on the adapters, and a real HPC application was tested as well. The results were logged and are described in this document.

2. Objectives

The objective of these tests is to characterize the behavior of several network adapters with the new BladeCenter H and HT high-performance Nortel switch. A first set of tests compared InfiniBand 4x and 10Gb Ethernet. A second, standalone test was performed to obtain baseline performance metrics for the TCP/IP protocols.

3. Benchmark Infrastructure

3.1 Hardware List

2x IBM BladeCenter HS21 XM (7995-E2X), each with:
- 2x Intel Xeon E5345 (2.33 GHz) quad-core, 8 MB L2 cache
- 16 GB (8x 2 GB) 667 MHz FBD RAM
- 1x SFF SAS HDD
- Integrated dual Gigabit Broadcom 5708S Ethernet controller

Figure 1: IBM BladeCenter HS21 XM internal layout (HDD, DIMMs, CPUs, daughter card)

Please check the following web site for information on System x and Blades updates: http://www-03.ibm.com/servers/eserver/xseries/

Several adapters were then added and tested on the PCI Express connector for I/O daughter cards. Different form factors are available; the following were used.

TopSpin Infiniband Expansion Card for IBM BladeCenter (PN: xxxxxx)
Figure 2: HSFF (High Speed Form Factor) daughter card for IBM BladeCenter H

NetXen 10Gb Ethernet Expansion Card for IBM BladeCenter (PN: 39Y9271)
Figure 3: CFF-H (Combination Form Factor for Horizontal switches) daughter card for IBM BladeCenter H

Myricom Low Latency 10Gb Ethernet Expansion Card for IBM BladeCenter (no IBM PN)
Daughter card (HSDC) for IBM BladeCenter H (10G-PCIE-8AI-C+MX1)
http://www.myri.com/myri-10g/product_list.html
Figure 4: CFF-H (Combination Form Factor for Horizontal switches) daughter card for IBM BladeCenter H

The IBM BladeCenter I/O adapters need a switch module to connect to. The Ethernet adapters (NetXen and Myricom) connect to a BladeNetwork 10GbE switch module, while the IB adapters connect to a Cisco Systems 4x InfiniBand switch module.

Nortel 10Gb Ethernet Switch Module for IBM BladeCenter (PN: 39Y9267)
Figure 5: BladeNetwork 10GbE switch module

BIOS, firmware & OS configuration

Part name                          | BIOS/OS/Firmware version
HS21 XM BIOS                       | 1.02
HS21 XM OS (1)                     | RedHat Enterprise Linux 5 AS for x86_64, kernel 2.6.18-8.el5
HS21 XM OS (2)                     | RedHat Enterprise Linux 4 u4 AS for x86_64, kernel 2.6.9-42
Topspin IB adapter driver/firmware | -
Myricom driver/firmware            | Myri-10G 1.3.0 (for NetPerf tests); MXoE 1.2.1 rc17 (for HPC tests)
NetXen 10GbE driver/firmware       | 3.4.106

4. TCP/IP NetPerf testing

4.1 NetPerf 2.4.3

NetPerf is a benchmark that can be used to measure various aspects of networking performance. Its primary focus is on bulk data transfer and request/response performance using either TCP or UDP; this is also referred to as stream or unidirectional stream performance. Basically, these tests measure how fast one system can send data to another and/or how fast that other system can receive it. The TCP_STREAM test is the default test and gives the stream performance that is closest to an IPTV workload.

NGN applications, and particularly IPTV solutions, are typical streaming applications. However, a NetPerf test does not simulate an IPTV workload: this type of application requires the data transfer to be very regular (with absolutely no packet loss), at a constant bit rate (which depends on the quality of the content being streamed), and with varied content read accesses (different files). Moreover, other features such as time shifting (fast forward/rewind, pause) have to be managed by the IPTV application and can generate network workloads that are not comparable with NetPerf results.

A series of tests was performed on the two 10Gb Ethernet adapters (NetXen and Myricom) between the two blades. The tests on the NetXen adapter were performed with the default MTU of 1500 and then with an MTU of 8000 (on both servers). On the Myricom adapter, two different drivers were used: one tuned for HPC applications that require better response time, the other tuned to deliver better bandwidth, which is the main need for IPTV applications. All NetPerf tests were performed on RedHat Enterprise Linux 5.
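To illustrate what TCP_STREAM measures, the sketch below implements a minimal bulk-transfer throughput test with plain Python sockets. It is not NetPerf itself; the port number, chunk size and duration are arbitrary illustrative choices.

# Minimal sketch of a TCP bulk-transfer ("stream") measurement between two
# hosts. Not NetPerf - only an illustration of what TCP_STREAM measures:
# how fast one side can push data and the other side can receive it.
import socket, sys, time

PORT = 12866          # arbitrary port, assumed free on the receiver
CHUNK = 64 * 1024     # bytes handed to the socket per call
DURATION = 10         # seconds of streaming

def receiver():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print(f"received {total / 1e6:.1f} MB in {elapsed:.1f} s "
          f"= {total * 8 / elapsed / 1e6:.0f} Mbps")

def sender(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\0" * CHUNK
    deadline = time.time() + DURATION
    while time.time() < deadline:
        sock.sendall(payload)
    sock.close()

if __name__ == "__main__":
    # usage: python stream.py recv            (on the receiving blade)
    #        python stream.py send <recv_ip>  (on the sending blade)
    if sys.argv[1] == "recv":
        receiver()
    else:
        sender(sys.argv[2])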

4.2 Results

The TCP_STREAM tests provide bandwidth results as well as CPU usage information. The maximum bandwidth was reached with the NGN driver on the Myricom adapter: almost 70% of the theoretical 10Gb bandwidth. As a comparison, the same test performed on the integrated Broadcom Gigabit Ethernet adapter reached 94% of the 1Gb bandwidth.

[Chart: NetPerf TCP_STREAM bandwidth (Mbps) for the NetXen adapter (MTU = 1500 and MTU = 8000) and the Myricom adapter ("HPC" driver, better latency; "NGN" driver, better bandwidth)]
Table 1: NetPerf TCP_STREAM bandwidth results

CPU usage is very important for server tuning and for application providers: stressing the network can consume a lot of CPU time that is then no longer available for application processing. TOE (TCP Offload Engine) can be used on adapters that support it; basic network processing is then handled by the adapter itself instead of the CPUs, which speeds up some applications. We decided not to use TOE since it is not yet supported on the Myricom adapter, although it would reduce CPU utilization. The average utilization measured here is around 15%; with TOE enabled, 5% to 10% utilization could be expected.

[Chart: NetPerf TCP_STREAM local (send) and remote (receive) CPU utilization (%) for the NetXen adapter (MTU = 1500 and MTU = 8000) and the Myricom adapter ("HPC" and "NGN" drivers)]
Table 2: NetPerf TCP_STREAM CPU utilization results

5. HPC testing

For the following section, all tests were performed with the Myricom low-latency 10G network adapter and the InfiniBand 4x SDR network card.

5.1 Intel MPI Benchmark

This is a reference test, since it gives the raw performance of the network. The idea of IMB is to provide a concise set of elementary MPI benchmark kernels. The Intel (formerly Pallas) MPI Benchmark Suite was used to study communications. Point-to-point communication was studied with the PingPong and SendRecv benchmarks.

PingPong

PingPong is the classical pattern used for measuring the startup time and throughput of a single message sent between two processes. The plot below shows the PingPong pattern.

Figure 6: PingPong pattern

The latency reported by the PingPong test is the time to send a message of size 0, i.e. half of the measured round-trip time. The network bandwidth, in MBytes/sec, is the message size X divided by that half-round-trip time t (µsec).
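As an illustration of the pattern (not the IMB code itself), the following mpi4py sketch measures one-way latency and bandwidth between two ranks. It assumes mpi4py and NumPy are installed and that the script is launched on exactly two ranks, e.g. mpirun -np 2 python pingpong.py; message sizes and repetition count are illustrative.

# Minimal PingPong sketch with mpi4py: rank 0 sends to rank 1 and waits for
# the echo; the one-way time is half the averaged round-trip time.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
assert comm.Get_size() == 2, "run this sketch on exactly 2 ranks"
REPS = 1000

for size in (1, 1024, 64 * 1024, 4 * 1024 * 1024):
    buf = np.zeros(size, dtype='b')          # message payload of 'size' bytes
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(REPS):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        else:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    t = (MPI.Wtime() - t0) / (2 * REPS)      # one-way time = half round trip
    if rank == 0:
        print(f"{size:9d} bytes  {t * 1e6:9.2f} usec  {size / t / 1e6:8.1f} MB/s")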

SendRecv

This test is based on MPI_Sendrecv: the processes form a periodic communication chain in which each process sends to its right neighbour and receives from its left neighbour. The SendRecv pattern is shown below.

Figure 7: SendRecv pattern

The reported throughput is 2X bytes divided by the time t (µsec). Since only two processes are used here, the test reports the bi-directional bandwidth of the system.
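A corresponding minimal sketch of the SendRecv pattern, under the same assumptions as above (mpi4py, NumPy, two ranks, illustrative message size and repetition count); with two ranks the left and right neighbours are the same process, so the result is the bi-directional bandwidth.

# Minimal SendRecv ring sketch with mpi4py.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
right, left = (rank + 1) % nprocs, (rank - 1) % nprocs

SIZE, REPS = 4 * 1024 * 1024, 100
sendbuf = np.zeros(SIZE, dtype='b')
recvbuf = np.empty(SIZE, dtype='b')

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(REPS):
    # each process sends to its right neighbour and receives from its left one
    comm.Sendrecv(sendbuf, dest=right, sendtag=0,
                  recvbuf=recvbuf, source=left, recvtag=0)
t = (MPI.Wtime() - t0) / REPS
if rank == 0:
    # 2 x SIZE bytes move per iteration (one message out, one message in)
    print(f"{2 * SIZE / t / 1e6:.1f} MB/s per process")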

5.1.1 Results

5.1.1.1 Latency results

[Chart: Intel MPI Benchmark latency (µsec) for the 10G network adapter and IB 4x, with the percentage difference (about 13% for both PingPong and SendRecv); lower is better]
Table 3: Intel MPI Benchmark latency

The 10G network adapter shows a 13% lower PingPong latency than InfiniBand 4x, which is very interesting for HPC purposes. Latency is an important factor since it represents the time to open a communication between two processes, and a 13% difference in latency can translate into a significant difference in overall performance. The SendRecv latency shows about the same difference. In the following sections it will be interesting to see the overall performance impact of this latency gap.

5.1.1.2 Network bandwidth performance

The following plots show the PingPong and SendRecv results:

[Chart: PingPong bandwidth (MB/s) vs. message size (1 byte to 4 MB) for IB 4x and the 10G adapter; higher is better]
Table 4: PingPong bandwidth with respect to message size

The 10G network adapter clearly offers better bandwidth (around 10%), with a peak at 950 MB/s, whereas IB 4x peaks at 854.29 MB/s. For the 10G adapter, performance ramps up sharply for message sizes between 4 KB and 32 KB, where the difference reaches about 28%.

5.2 High Performance Computing Challenge

The HPC Challenge benchmark consists of seven tests, not all of which are relevant for this work. Among the seven we consider only:

HPL - the Linpack TPP benchmark, which measures the floating-point rate of execution for solving a linear system of equations.

PTRANS (parallel matrix transpose) - exercises communications where pairs of processors communicate with each other simultaneously. It is a useful test of the total communication capacity of the network.

Communication bandwidth and latency - a set of tests measuring the latency and bandwidth of a number of simultaneous communication patterns. Latency/Bandwidth measures the latency (time required to send an 8-byte message from one node to another) and the bandwidth (message size divided by the time it takes to transmit a 2,000,000-byte message) of network communication using basic MPI routines. The measurement is done during non-simultaneous communication (ping-pong benchmark) and simultaneous communication (random and natural ring patterns), so it covers two extreme levels of contention that might occur in real applications: no contention, and the contention caused by each process communicating with a randomly chosen neighbour in parallel. For measuring the latency and bandwidth of parallel communication, all processes are arranged in a ring topology and each process sends to and receives a message from its left and right neighbours in parallel. Two types of rings are reported: a naturally ordered ring (i.e., ordered by the process ranks in MPI_COMM_WORLD) and the geometric mean over ten different randomly chosen process orderings in the ring. The communication is implemented (a) with MPI standard non-blocking receive and send, and (b) with two calls to MPI_Sendrecv, one for each direction in the ring. With this type of parallel communication, the bandwidth per process is defined as the total amount of message data divided by the number of processes and by the maximal time needed across all processes.

5.2.1 HPL

As mentioned above, the HPL benchmark reveals the sustainable peak performance that the system can achieve. The algorithm is a parallel LU decomposition relying mainly on a BLAS 3 routine (DGEMM) and a block-cyclic decomposition to exchange data between processors. In our experience the network has a real impact: with a low-performance network the sustainable peak stays low (less than 60% of the theoretical peak), because efficiency is lost while exchanging data between processors. For consistency, two matrix sizes are considered, N=32000 and N=58000. The sustainable peak is shown below.

[Chart: HPL performance (GFlops/s) for the 10G network adapter and IB 4x at matrix sizes N=58000 (109.31 and 109.139 GFlops) and N=32000 (99.096 and 101.168 GFlops); higher is better]
Table 5: HPL performance for two matrix sizes

The results are very close: both networks are fast, so the HPL performance does not differ much. For matrix size 58000, 73% of the theoretical peak is reached, which is a rather good number. Of course, the influence of the network increases with the number of nodes used, and two nodes are not really enough to show it. The good news is that both networks deliver good performance for the HPL benchmark.
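As a back-of-the-envelope check of the 73% figure, the theoretical peak of the two-blade configuration can be estimated as below; the value of 4 double-precision floating-point operations per clock per core for the Xeon E5345 is an assumption not stated in the report.

# Rough estimate of theoretical peak and HPL efficiency for the 2-blade setup
nodes, sockets_per_node, cores_per_socket = 2, 2, 4
clock_ghz, flops_per_cycle = 2.33, 4          # assumed 4 DP flops/cycle/core
peak_gflops = nodes * sockets_per_node * cores_per_socket * clock_ghz * flops_per_cycle
measured_gflops = 109.31                      # HPL result at N=58000
print(f"theoretical peak: {peak_gflops:.1f} GFlops")   # ~149 GFlops
print(f"HPL efficiency:   {measured_gflops / peak_gflops:.1%}")   # ~73%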

5.2.2 PTRANS

As noted above, the PTRANS benchmark is useful for testing the total communication capacity of the network. It performs a matrix transpose in parallel. The graph below shows the capacity of the network for two matrix sizes.

[Chart: PTRANS performance (GB/s) for the 10G network adapter and IB 4x at N=58000 (1.17491 and 1.20881 GB/s) and N=32000 (1.5821 and 1.26888 GB/s); higher is better]
Table 6: PTRANS performance

The PTRANS results for matrix size 58000 are very close (about 3% apart), whereas for the smaller matrix size (N=32000) the difference is much bigger, around 20%. Note that PTRANS performance depends on the matrix size, and the matrix used for this benchmark is the same as the HPL one: on the one hand the matrix should be as big as possible for HPL, on the other hand it should not be too big, otherwise the PTRANS performance decreases.

5.2.3 Latency comparison

[Chart: HPCC latency comparison (µsec) for the 10G network adapter and IB 4x: MaxPingPongLatency (9.13% difference), RandomlyOrderedRingLatency (24.69%), NaturallyOrderedRingLatency (31.51%); lower is better]
Table 7: MaxPingPong, randomly ordered and naturally ordered ring latencies

The max PingPong latency is around 4 µsec for both networks, meaning both perform well; nevertheless, the 10G network adapter is about 9% better in terms of latency, consistent with the IMB results above. For the randomly ordered ring latency the difference grows to 24.7%, and it reaches 31.5% for the naturally ordered ring latency. In other words, the 10G adapter performs better when simultaneous communications occur, which is relevant since simultaneous communications are very frequent in real HPC applications.

5.2.4 Network bandwidth comparison

[Chart: HPCC network bandwidth comparison (GBytes/s) for the 10G network adapter and IB 4x: MinPingPongBandwidth (24.7% difference), NaturallyOrderedRingBandwidth (43.9%), RandomlyOrderedRingBandwidth (2.2%); higher is better]
Table 8: MinPingPong, naturally ordered and randomly ordered ring bandwidths

The difference for the min PingPong bandwidth is around 25%. For the naturally ordered ring bandwidth the performance differs by 44%, whereas it is about the same for the randomly ordered ring bandwidth. Now that the two network adapters have been compared on simple kernel benchmarks, it is interesting to see how this translates into performance on a real application.

5.3 VASP (Vienna Ab-initio Simulation Package)

VASP is a package for performing ab-initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane-wave basis set. This application was chosen because it represents a large HPC segment, namely Life Science. Unlike the two previous tests, it is a real application and not a kernel benchmark. Communication is an important factor in VASP performance, which makes it a good candidate for this study. Two input test cases were selected; the first is the following:

5.3.1 TEST 1

SYSTEM = Co rods 1x1x1

Startparameter for this run:
 PREC   = High        medium, high, low
 ISTART = 0           job: 0-new 1-cont 2-samecut
 ISPIN  = 2           spin polarized calculation?

Electronic Relaxation 1
 ENCUT  = 400.0 eV
 NELM   = 120; NELMIN = 2; NELMDL = -5    # of ELM steps
 EDIFF  = 0.1E-03     stopping-criterion for ELM
 ISMEAR = 1; SIGMA = 0.1

Ionic relaxation
 EDIFFG = -0.02       stopping-criterion for IOM
 NSW    = 45          number of steps for IOM
 IBRION = 2           ionic relax: 0-MD 1-quasi-New 2-CG
 ISIF   = 2           stress and relaxation
 NBANDS = 104
 MAGMOM = 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

Electronic relaxation 2 (details)
 IALGO  = 38          algorithm

5.3.2 TEST 2

SYSTEM = SWCNT
 ISTART = 0
 ISMEAR = -5      ! Small K-Points Only
 NELM   = 15
 EDIFF  = 0

# Parallelisation switches - NPAR = no proc
 LPLANE = .TRUE.
 NPAR   = 4
 NSIM   = 8
 LCHARG = .FALSE.
 LWAVE  = .FALSE.

The following plot shows the performance on both systems:

[Chart: VASP elapsed time (sec) for the 10G network adapter and IB 4x on test 1 (17% difference) and test 2 (33% difference); lower is better]
Table 9: Elapsed time for VASP execution

Performance is clearly better with the 10G network adapter: the difference is 33% for test case 2 and 17% for test case 1. The difference in gain can be attributed to the test cases themselves, test case 2 stressing the network more than test case 1.

6. Conclusions

Under certain conditions (RedHat 5, the right drivers), the NetXen 10GbE adapter can be a good alternative to ten 1GbE adapters in terms of simplicity. From a pure performance standpoint it is not yet an interesting solution: a single NetXen 10GbE adapter performs like five or six 1GbE adapters, not like ten. The Myricom adapter performs much better, though it remains less efficient in terms of bandwidth than ten 1GbE adapters; blade servers would nevertheless definitely benefit from that solution.

From the HPC point of view, this study was very interesting because it shows that the Myricom low-latency 10G network adapter gives better performance than an IB 4x card. On the IMB kernel benchmark a difference of about 10% in network latency and network bandwidth was observed, while with HPCC the latency difference grows to around 25%. The most interesting result is the one obtained with VASP, a real application that is representative of a large number of life science codes in terms of communication requirements: for the two test cases, the 10G network adapter outperforms the IB adapter by 17% and 33% respectively. Further testing is of course needed, but the results are very promising. The follow-on will be to test the 10G adapter with other real applications from other HPC sectors.

7. Contacts

IBM Products and Solutions Support Center (Montpellier)

Erwan Auffret
IBM Sales & Distribution, IT Specialist - Network Transformation Center
Phone: +33 4 6734 6077
E-mail: erwan.auffret@fr.ibm.com

François-Romain Corradino
IBM Sales & Distribution, IT Specialist - Deep Computing
Phone: +33 4 6734 4836
E-mail: francois.corradino@fr.ibm.com

Ludovic Enault
IBM Sales & Distribution, IT Specialist - Deep Computing
Phone: +33 4 6734 4706
E-mail: ludovic.enault@fr.ibm.com