Choosing the Best Network Interface Card for Cloud: Mellanox ConnectX-3 Pro EN vs. Intel XL710



COMPETITIVE BRIEF, April 2015

Contents
Introduction: How to Choose a Network Interface Card
Comparison: Mellanox ConnectX-3 Pro EN vs. Intel XL710
  Technology
  Performance
  Power Consumption
  Return on Investment
  Bottom Line

Introduction: How to Choose a Network Interface Card for a Cloud Environment

High performance connectivity is an essential component of cloud computing environments, enabling efficient data transfers and effective hardware utilization while maximizing virtualization and scalability. The difference between a successful deployment and one plagued by poor performance is often a matter of the underlying interconnect technology. The Network Interface Card (NIC) is therefore a crucial piece of the puzzle when building a high-performance cloud. In choosing one NIC over another, there are several factors to consider.

Does the NIC address the needs of your applications and market? It is important to make sure that your NIC has a wide enough range of features to accelerate your applications and offload your CPU, leaving more room for compute and virtualization.

What is the technology embedded in the NIC, and what advantages does it provide? The available bandwidth, the ability to support heavy virtual workloads, and specific application offloads are all important aspects of the NIC technology. A scalable, high-performing NIC benefits a wider range of applications and has a longer life span, because it can address the growing needs of your cloud.

Which NIC provides superior performance? Ultimately, a high-performance cloud's networking depends largely on a high-performing NIC. Bandwidth, low latency, and application-specific performance are some of the parameters to compare.

Power consumption is another factor. Power is a major expense in today's networks. In a large cloud infrastructure with thousands of servers, power consumption can become a major drain on profitability, so it is important to consider a NIC that can reduce this expense.

Finally, the overall return on investment (ROI) should be calculated. Cost and performance should be weighed against one another to determine the actual Total Cost of Ownership (TCO).

Comparison: Mellanox ConnectX-3 Pro EN vs. Intel XL710

Given the factors above, we compared two of the leading Ethernet NICs in the marketplace to see which addresses these concerns better: Mellanox's ConnectX-3 Pro EN and Intel's XL710.

Technology

When it comes to bandwidth, ConnectX-3 Pro EN offers 10, 40, and 56 Gb/s Ethernet, while the XL710 reaches the 10 and 40 GbE levels. The option of 56Gb/s is important in a cloud to optimize the user experience; applications that require higher bandwidth will suffer without it. Furthermore, a network that is bounded at 40Gb/s cannot easily be upgraded when demands require greater bandwidth in the future. The 56Gb/s option future-proofs the cloud.

Mellanox's ConnectX family of adapters supports RDMA over Converged Ethernet (RoCE), which enables zero-copy data transfers and dramatically reduces CPU overhead. Moreover, ConnectX-3 Pro EN provides hardware offloading via TCP stateless offloads even for overlay networks such as NVGRE and VXLAN, further freeing the CPU for other activities. RDMA is not available on the XL710, and its hardware offloading for overlay networks is only partially available, in transmit operations. ConnectX-3 Pro EN also includes congestion control features (QCN and ECN), which ensure that the maximum bandwidth can be passed efficiently during congestion events; again, this is lacking in the XL710.

Finally, for customers seeking the utmost cloud simplicity and peace of mind, the ConnectX family is one piece of Mellanox's end-to-end suite of interconnect products, including switches, cables, and management software. Mellanox even offers a version of ConnectX-3 Pro EN that uses Virtual Protocol Interconnect (VPI), a Mellanox technology that allows a device to be converted from Ethernet to InfiniBand or vice versa. This guarantees that the customer can grow the network based on the latest requirements rather than on legacy equipment. The XL710, on the other hand, is a standalone offering with little flexibility for current data center integration or for future growth.

Table 1. Technological Features

Feature              Mellanox ConnectX-3 Pro EN                      Intel XL710
Network Ports        10 / 40 / 56GbE, Ethernet and VPI options       10 / 40GbE
RDMA                 RoCE                                            No
Latency (TCP)        7.5us                                           7us
Overlay Networks     VXLAN and NVGRE hardware offloads on            VXLAN only, and only on Transmit
                     Transmit and Receive
Congestion Control   QCN (L2), ECN (L3)                              No

Conclusion: From a technological standpoint, ConnectX-3 Pro EN is a full generation ahead of the XL710, and it can therefore better address the specific needs of applications and markets that require a high-performance interconnect.

Performance

In virtualized environments, overlay networks for inter-VM communication are integral to the scalability and security of the cloud. Such overlay networks can reduce overall throughput unless hardware offloads are in place to improve utilization of the CPU and enable stateless offloads of the payloads. ConnectX-3 Pro EN supports both VXLAN and NVGRE overlay networks and provides hardware offloading to ensure near-line-rate throughput. The XL710, however, supports VXLAN only in transmit operations and fails to provide sufficient offloading of the CPU. The resultant throughput is at best half that of ConnectX-3 Pro EN.
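The figures below report host CPU load normalized per 10GbE of traffic. As a minimal sketch of that normalization, with entirely hypothetical measurement values (not the brief's results), the metric can be computed like this:

```python
# Hypothetical illustration of a "CPU utilization per 10GbE" metric:
# divide the host's CPU load by the achieved throughput and scale to a
# per-10-Gb/s figure. All numbers are made-up examples for illustration.

def cpu_per_10gbe(total_cpu_percent: float, throughput_gbps: float) -> float:
    """Return CPU percentage consumed per 10 Gb/s of traffic."""
    return total_cpu_percent / throughput_gbps * 10.0

# Example: a host at 55% CPU sustaining 36 Gb/s of VXLAN traffic versus
# a host at 45% CPU that only reaches 18 Gb/s.
with_offload = cpu_per_10gbe(total_cpu_percent=55.0, throughput_gbps=36.0)
without_offload = cpu_per_10gbe(total_cpu_percent=45.0, throughput_gbps=18.0)
print(f"with offload: {with_offload:.1f}% CPU per 10GbE, "
      f"without offload: {without_offload:.1f}% CPU per 10GbE")
```

A lower value means the adapter moves more traffic for each unit of CPU spent, which is the effect the hardware offloads are meant to produce.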

Figure 1. VXLAN maximum bandwidth: 40GbE VXLAN throughput (Red Hat 7), 8 and 16 VM pairs; series: Intel XL710, VXLAN HW offload.

Figure 2. CPU utilization per Gb/s with VXLAN hardware offloading: 40GbE CPU utilization per 10GbE on the receiving and transmitting hosts (Red Hat 7), 8 and 16 VM pairs; series: Intel XL710, VXLAN HW offload.

Furthermore, in a cloud that connects virtual machines from around the globe, response time is critical. ConnectX-3 Pro EN delivers its high throughput at much lower latency than the XL710, further widening the performance gap between the two NICs.

Figure 3. TCP latency: latency (μs) vs. message size (bytes), ConnectX-3 Pro vs. Intel XL710 at 40GbE.

Conclusion: ConnectX-3 Pro EN offers a significant performance advantage over Intel's XL710, along with additional features that provide an even greater performance boost. Cloud environments that use overlay network protocols will see much higher bandwidth per server with ConnectX-3 Pro EN. Reducing the CPU utilization wasted on data transfer also enables a much better ratio of VMs per server with Mellanox ConnectX-3 Pro EN than with the competition.

Power Consumption

ConnectX-3 Pro EN's performance advantage in virtualized environments is clear, but it would be less impressive if it came at the cost of significantly higher power consumption. That is not the case: Intel's advertised maximum power numbers for the XL710 are only marginally better than those of ConnectX-3 Pro EN. Moreover, when power is considered as a function of performance, the advantage swings dramatically in favor of ConnectX-3 Pro EN. The following table demonstrates that advantage, expressed as Watts of power consumed per Gb/s of VXLAN traffic. (Data is collected from Intel and Mellanox specifications and datasheets; power consumption covers only the adapter itself and does not include cooling, installation, or other tangential factors.)

Table 2. Power consumption

Metric                            Mellanox ConnectX-3 Pro    Intel XL710
Max Power                         6. W                       5.9 W
2x40GbE Throughput with VXLAN     36. Gb/s                   18.37 Gb/s
W per GbE of VXLAN                .6 W/GbE                   .3 W/GbE

ConnectX-3 Pro EN consumes roughly half the power for the same bandwidth compared to the XL710; transferring the same amount of data with the XL710 would require multiple ports, using considerably more power.

Conclusion: The XL710 has a slight advantage in maximum power consumption, but given the vast difference in performance, this advantage is essentially meaningless. The more important calculation is power as a function of performance, and ConnectX-3 Pro EN is far ahead on this count.
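As a minimal sketch of the power-per-throughput metric used in Table 2, assuming placeholder power and throughput values rather than the brief's figures, the calculation is simply maximum rated power divided by measured VXLAN throughput:

```python
# Sketch of the "Watts per Gb/s of VXLAN traffic" comparison: adapter maximum
# power divided by measured VXLAN throughput. The power and throughput values
# below are illustrative placeholders, not vendor-verified figures.

def watts_per_gbps(max_power_w: float, vxlan_throughput_gbps: float) -> float:
    """Return Watts consumed per Gb/s of sustained VXLAN traffic."""
    return max_power_w / vxlan_throughput_gbps

nic_a = watts_per_gbps(max_power_w=6.0, vxlan_throughput_gbps=36.0)   # ~0.17 W per Gb/s
nic_b = watts_per_gbps(max_power_w=5.9, vxlan_throughput_gbps=18.0)   # ~0.33 W per Gb/s
print(f"NIC A: {nic_a:.2f} W/Gbps, NIC B: {nic_b:.2f} W/Gbps")
```

The point of the metric is that a card with a slightly higher absolute power rating can still be far more power-efficient once its higher delivered throughput is taken into account.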

Return on Investment

We have already shown ConnectX-3 Pro EN's superiority over the XL710 in bandwidth and latency, but performance alone is not enough to consider. It is also worthwhile to compare the two cards with regard to how much performance they provide per dollar.

According to the US Energy Information Administration, the average retail electricity price for industrial consumers in April 2014 was 6.75 cents per kilowatt-hour. By extrapolating the average power consumption numbers over the course of a year and multiplying by this average retail price, it is possible to see the ongoing cost of each adapter card. However, much more important than the cost is the return on investment (ROI): the NIC that provides the highest performance for your money is the card that provides the most value. ConnectX-3 Pro EN offers nearly twice as much throughput per dollar as the XL710.

Figure 4. Annual Return on Investment (Gb/$), ConnectX-3 Pro EN vs. Intel XL710.

In fact, by choosing the Mellanox card over the Intel one, you not only receive vastly superior performance and save on power consumption; you also receive even greater savings in the long run. Because ConnectX-3 Pro EN is already a generation ahead of the XL710 technologically, it is already future-proofed; when the demands of the data center or cloud increase, there is no need to upgrade the interconnect.

Conclusion: ConnectX-3 Pro EN is not only the better NIC on performance; it is also the better investment financially.

Bottom Line

Mellanox ConnectX-3 Pro EN is a better NIC than Intel's XL710 on all counts and for all the main use cases. When choosing a NIC for virtualized environments, ConnectX-3 Pro EN is the leading choice to ensure successful high-performance deployments. Comparing on technology, performance, power consumption, and return on investment, ConnectX-3 Pro EN is the clear leader across the board.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085  Tel: 408-970-3400  Fax: 408-970-3403  www.mellanox.com
Copyright 2015. Mellanox Technologies. All rights reserved. Mellanox, Mellanox logo, ConnectX, and Virtual Protocol Interconnect are registered trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
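As a rough worked example of the energy-cost extrapolation described in the Return on Investment section above, a minimal sketch follows. It uses the cited 6.75 cents/kWh industrial rate; the card price and average power draw are hypothetical placeholders, not figures from the brief.

```python
# Sketch of the ROI arithmetic: convert average power draw to an annual
# electricity cost at 6.75 cents/kWh, then divide delivered throughput by the
# total yearly cost to get a Gb/s-per-dollar figure. Card price and power
# values below are hypothetical placeholders.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.0675  # USD per kWh, per the US EIA average industrial rate cited above

def annual_energy_cost(avg_power_w: float) -> float:
    """Yearly electricity cost in USD for a card drawing avg_power_w continuously."""
    return avg_power_w / 1000.0 * HOURS_PER_YEAR * PRICE_PER_KWH

def throughput_per_dollar(throughput_gbps: float, card_price: float, avg_power_w: float) -> float:
    """Gb/s delivered per dollar of first-year cost (card price plus energy)."""
    return throughput_gbps / (card_price + annual_energy_cost(avg_power_w))

print(f"annual energy cost at 6 W: ${annual_energy_cost(6.0):.2f}")
print(f"Gb/s per dollar: {throughput_per_dollar(36.0, card_price=500.0, avg_power_w=6.0):.3f}")
```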