Choosing the Best Network Interface Card: Mellanox ConnectX-3 Pro EN vs. Intel X520

COMPETITIVE BRIEF
August 2014

Contents
Introduction: How to Choose a Network Interface Card
Comparison: Mellanox ConnectX-3 Pro EN vs. Intel X520
  Technology
  Performance
  Acceleration for the Cloud
  Power Consumption
  Return on Investment
Bottom Line

Introduction: How to Choose a Network Interface Card

High-performance connectivity is required by everyone these days, whether in enterprise data centers, cloud computing environments, or Web 2.0 installations. The difference between a successful deployment and one plagued with poor performance is often a matter of the underlying interconnect technology. The Network Interface Card (NIC) is therefore a crucial piece of the puzzle when building a high-performance data center. In choosing one NIC over another, there are several factors to consider.

Does the NIC address the needs of your application and market? It is important to make sure that your NIC has a wide enough range of features to accelerate your application and offload your CPU, leaving more room for compute and virtualization.

What is the technology embedded in the NIC, and what advantages does it provide? The available bandwidth, the ability to support both bare-metal and virtual workloads, and specific application offloads are all important aspects of the NIC technology. A scalable, high-performing NIC will have a wider range of application benefits and a longer life span, because it can address the changing needs of a data center.

Which NIC provides superior performance? Ultimately, a high-performing data center network depends largely on a high-performing NIC. Raw bandwidth, low latency, and application-specific performance are some of the parameters to compare.

Power consumption is another factor. Power is a major expense in the data center today; in a large facility with hundreds or thousands of servers, it can become a major drain on profitability, so it is important to consider a NIC that reduces this expense.

Finally, the overall return on investment (ROI) should be calculated. Cost and performance should be weighed against one another to determine the actual Total Cost of Ownership (TCO).

Comparison: Mellanox ConnectX-3 Pro EN vs. Intel X520

Given the factors outlined above, we compared two of the leading Ethernet NICs on the market, Mellanox's ConnectX-3 Pro EN and Intel's X520, to see which addresses these concerns better.

Technology

When it comes to bandwidth, ConnectX-3 Pro EN offers 10, 40, and 56 Gb/s Ethernet, while the X520 only reaches the 10GbE level. This is a significant difference in capability. Furthermore, because ConnectX-3 Pro EN uses a PCIe Generation 3.0 bus, it can handle as many as 8 GT/s (gigatransfers per second), compared to only 5 GT/s with the X520's PCIe Gen 2.0 bus.

Mellanox's ConnectX family of adapters also supports RDMA over Converged Ethernet (RoCE), which enables zero-copy data transfers and dramatically reduces CPU overhead. Moreover, ConnectX-3 Pro EN provides hardware offloading via TCP stateless offloads even for overlay networks such as NVGRE and VXLAN, further freeing the CPU for other activities. Neither RoCE nor hardware offloading for overlay networks is available in the X520.

ConnectX-3 Pro EN also includes congestion control features (QCN and ECN), which ensure that the maximum bandwidth can be passed efficiently during congestion events. Again, this is lacking in the X520.

Finally, for customers seeking the ultimate in data center simplicity and peace of mind, the ConnectX family is but one piece of Mellanox's end-to-end suite of interconnect products, including switches, cables, and management software. Mellanox even offers a version of ConnectX-3 Pro EN with Virtual Protocol Interconnect (VPI), a proprietary Mellanox technology that facilitates converting a device from Ethernet to InfiniBand or vice versa. This guarantees that the customer can grow the network based on the latest requirements instead of on legacy equipment. The X520, on the other hand, is a standalone offering with little flexibility for current data center integration or for future growth.

Table 1. Technological Features

Infrastructure        Mellanox ConnectX-3 Pro EN               Intel X520 (82599 controller)
Network Ports         10/40/56GbE; Ethernet and VPI options    1/10GbE
Host Interface        PCIe 3.0 x8, 8 GT/s                      PCIe 2.0 x8, 5 GT/s
Power Consumption     3.8W                                     6.5W (IC on 10GbE board)
RDMA                  RoCE                                     No
Latency               0.8us (RDMA), 6us (TCP)                  12us (TCP)
Overlay Networks      VXLAN and NVGRE hardware offloads        No
Congestion Control    QCN (L2), ECN (L3)                       No

Conclusion: From a technological standpoint, ConnectX-3 Pro EN is at least one generation ahead of the X520, and it can therefore better address the specific needs of applications and markets that require high-performance interconnect.

Performance

Naturally, with higher bandwidth and a more advanced bus comes higher throughput. ConnectX-3 Pro EN reaches a maximum throughput of 38 Gb/s, while the X520 can only achieve 9.5 Gb/s.
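To put the host-interface comparison in context, the following is a minimal back-of-the-envelope sketch (not from the brief) of usable PCIe payload bandwidth. It assumes the standard line encodings of the two generations (8b/10b for PCIe 2.0, 128b/130b for PCIe 3.0) and ignores protocol overhead, so real-world numbers are somewhat lower.

```python
# Rough one-direction PCIe payload bandwidth, ignoring TLP/flow-control overhead.
def pcie_usable_gbps(gt_per_s: float, payload_bits: int, line_bits: int, lanes: int) -> float:
    """Usable bandwidth in Gb/s after line-encoding overhead."""
    return gt_per_s * payload_bits / line_bits * lanes

gen2_x8 = pcie_usable_gbps(5.0, 8, 10, lanes=8)      # Intel X520: PCIe 2.0 x8
gen3_x8 = pcie_usable_gbps(8.0, 128, 130, lanes=8)   # ConnectX-3 Pro EN: PCIe 3.0 x8

print(f"PCIe 2.0 x8: ~{gen2_x8:.0f} Gb/s usable")    # ~32 Gb/s
print(f"PCIe 3.0 x8: ~{gen3_x8:.0f} Gb/s usable")    # ~63 Gb/s
```

Under these assumptions, the 38 Gb/s that ConnectX-3 Pro EN reaches fits comfortably within a Gen3 x8 slot, while a Gen2 x8 slot tops out at roughly 32 Gb/s of payload regardless of the Ethernet port speed.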

Figure 1. Maximum bandwidth

Even comparing only at 10GbE speeds, however, ConnectX-3 Pro EN provides better results than the Intel X520 by way of lower latency. ConnectX-3 Pro EN shows 2X better latency than the X520 even before incorporating RoCE; with RoCE, ConnectX-3's latency numbers are 15 times better than the X520's.

Figure 2. Latency on 10GbE cards, with and without RDMA
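As a quick sanity check (not part of the brief), the 2X and 15X figures follow directly from the latency numbers in Table 1:

```python
# Latency figures from Table 1, in microseconds.
x520_tcp = 12.0   # Intel X520, TCP
cx3_tcp  = 6.0    # ConnectX-3 Pro EN, TCP
cx3_roce = 0.8    # ConnectX-3 Pro EN, RDMA (RoCE)

print(f"TCP vs. TCP:  {x520_tcp / cx3_tcp:.0f}x lower latency")    # 2x
print(f"RoCE vs. TCP: {x520_tcp / cx3_roce:.0f}x lower latency")   # 15x
```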

Acceleration for the Cloud

When overlay networks (NVGRE and VXLAN) are in use, there is added value in enabling hardware offloading for these protocols to improve CPU utilization. ConnectX-3 Pro EN supports this feature, but the X520 does not.

Figure 3. 10GbE throughput of VXLAN traffic

Figure 4. CPU utilization per 1Gb/s with and without NVGRE hardware offloading

Figure 5. CPU utilization per 1Gb/s with and without VXLAN hardware offloading
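As an illustration only (not part of the brief), on a Linux host one way to see whether a NIC driver exposes tunnel-related stateless offloads is to inspect its ethtool feature flags. The interface name eth0 is a placeholder, and the exact feature strings vary by kernel version and driver, so treat the names below as examples rather than a definitive check.

```python
# Parse `ethtool -k <interface>` output and report a few offload features.
# Requires ethtool to be installed; adjust the interface name for your host.
import subprocess

def offload_features(ifname: str) -> dict:
    """Return {feature_name: state} as reported by `ethtool -k`."""
    out = subprocess.run(["ethtool", "-k", ifname],
                         capture_output=True, text=True, check=True).stdout
    feats = {}
    for line in out.splitlines()[1:]:        # skip the "Features for ..." header
        if ":" in line:
            name, state = line.split(":", 1)
            feats[name.strip()] = state.strip()
    return feats

feats = offload_features("eth0")             # placeholder interface name
for name in ("tx-udp_tnl-segmentation",      # UDP-tunnel (e.g. VXLAN) segmentation offload
             "tcp-segmentation-offload",
             "rx-checksumming"):
    print(f"{name}: {feats.get(name, 'not reported')}")
```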

Conclusion: ConnectX-3 Pro EN offers a significant performance advantage over Intel's X520, and its additional features provide an even greater performance boost. Cloud and other environments that prefer overlay network protocols such as VXLAN and NVGRE will see much higher bandwidth per server. Offloading the CPU also gives the cloud administrator a much better ratio of VMs per server with Mellanox ConnectX-3 Pro EN than with the competition.

Power Consumption

Not only does ConnectX-3 Pro EN offer higher performance, it does so while consuming less power than the competition.

Table 2. Power Features

                          Mellanox ConnectX-3 Pro     Mellanox ConnectX-3 Pro      Intel X520
                          Dual-Port SFP+              Single-Port QSFP             Dual-Port SFP+
Speed (Gb/s)              20                          40                           20
Ports                     2                           1                            2
PCIe                      x8 Gen3                     x8 Gen3                      x8 Gen2
Power Consumption [1]     3.74W                       5.32W                        4.5W
Power per 1Gb (W/Gb)      0.187                       0.133                        0.225

[1] Data is collected from Intel and Mellanox specifications and datasheets. Power consumption is based only on the adapter; it does not include cooling, installation, and other tangential factors.

Figure 6. Watts per 1Gb/s of supported speed

Conclusion: When power savings are multiplied across hundreds or thousands of NICs, ConnectX-3 Pro EN becomes the clear choice thanks to its more efficient consumption for similar (or even greater) bandwidth.

Return on Investment

We have already shown ConnectX-3 Pro EN's superiority over the X520 in bandwidth and latency, but performance alone is not the whole picture; it is also worthwhile to compare the two cards with regard to how much performance they provide per dollar.

According to the US Energy Information Administration, the average retail electricity price for industrial consumers in April 2014 was 6.75 cents per kilowatt-hour. By extrapolating each adapter's average power consumption over the course of a year and multiplying by this rate, it is possible to see the ongoing cost of each adapter card. More important than the cost, however, is the return on investment (ROI): the NIC that provides the highest performance for your money is the card that provides the most value.
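To make the power and ROI arithmetic concrete, here is a minimal sketch (not from the brief) that reproduces the watts-per-gigabit row of Table 2 and estimates each adapter's annual energy cost at the 6.75 cents/kWh rate cited above. It assumes 24x7 operation and adapter-only power, per the footnote to Table 2; the brief does not publish the exact methodology behind Figure 7, so the throughput-per-power-dollar figure here is only illustrative.

```python
# Power efficiency and yearly energy cost per adapter, using the Table 2 data.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.0675   # USD, EIA industrial average, April 2014

adapters = {
    "Mellanox ConnectX-3 Pro dual-port SFP+":   {"watts": 3.74, "gbps": 20},
    "Mellanox ConnectX-3 Pro single-port QSFP": {"watts": 5.32, "gbps": 40},
    "Intel X520 dual-port SFP+":                {"watts": 4.50, "gbps": 20},
}

for name, a in adapters.items():
    w_per_gb = a["watts"] / a["gbps"]                                  # matches Table 2
    annual_cost = a["watts"] / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH   # USD per year
    gbps_per_dollar = a["gbps"] / annual_cost
    print(f"{name}: {w_per_gb:.3f} W/Gb, ${annual_cost:.2f}/yr, "
          f"{gbps_per_dollar:.1f} Gb/s per power-dollar")
```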

Figure 7. Throughput per dollar of power consumption

ConnectX-3 Pro EN has a clear advantage in terms of throughput per dollar. In fact, by choosing the Mellanox card over the Intel one, you not only receive vastly superior performance and save on power consumption; you also realize even greater savings in the long run. Because ConnectX-3 Pro EN is already at least a generation ahead in bandwidth, it is future-proofed: when the demands of the data center or cloud increase, there is no need to upgrade the interconnect.

Conclusion: ConnectX-3 Pro EN is not only the better NIC on performance; it is also the better investment financially.

Bottom Line

Mellanox ConnectX-3 Pro EN is a better NIC than Intel's X520 on all counts and for all the main use cases. Whether for HPC, cloud, Web 2.0, storage, or the enterprise data center, ConnectX-3 Pro EN is the leading choice for successful high-performance deployments. On technology, performance, power consumption, and return on investment alike, ConnectX-3 Pro EN is the clear leader across the board.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400  Fax: 408-970-3403
www.mellanox.com

Copyright 2014. Mellanox Technologies. All rights reserved. Mellanox, the Mellanox logo, ConnectX, and Virtual Protocol Interconnect are registered trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 15-4193CB Rev 1.0