Choosing the Best Network Interface Card: Mellanox ConnectX-3 Pro EN vs. Intel X520
COMPETITIVE BRIEF
August 2014

Contents
Introduction: How to Choose a Network Interface Card
Comparison: Mellanox ConnectX-3 Pro EN vs. Intel X520
  Technology
  Performance
  Acceleration for the Cloud
  Power Consumption
  Return on Investment
Bottom Line

Introduction: How to Choose a Network Interface Card

High-performance connectivity is required by everyone these days, whether in enterprise data centers, cloud computing environments, or Web 2.0 installations. The difference between a successful deployment and one plagued with poor performance is often a matter of the underlying interconnect technology. The Network Interface Card (NIC) is therefore a crucial piece of the puzzle when building a high-performance data center. In choosing one NIC over another, there are several factors to consider.

Does the NIC address the needs of your application and market? It is important to make sure that your NIC has a wide enough range of features to accelerate your application and offload your CPU, leaving more headroom for compute and virtualization.

What technology is embedded in the NIC, and what advantages does it provide? The available bandwidth, the ability to support both bare-metal and virtualized workloads, and application-specific offloads are all important aspects of the NIC's technology. A scalable, high-performing NIC will benefit a wider range of applications and have a longer life span, because it can address the changing needs of a data center.

Which NIC provides superior performance? Ultimately, a data center's networking performance depends largely on a high-performing NIC. Raw bandwidth, low latency, and application-specific performance are some of the parameters to compare.

Power consumption is another factor. Power is a major expense in the data center today. In a large data center with hundreds or thousands of servers, power consumption can become a major drain on profitability, so it is important to consider a NIC that can reduce this expense.

Finally, the overall return on investment (ROI) should be calculated. Cost and performance should be weighed against one another to determine the actual Total Cost of Ownership (TCO).
Comparison: Mellanox ConnectX-3 Pro EN vs. Intel X520

Given the aforementioned factors in choosing the best NIC, we compared two of the leading Ethernet NICs in the marketplace to see which addresses these concerns better: Mellanox's ConnectX-3 Pro EN and Intel's X520.

Technology

When it comes to bandwidth, ConnectX-3 Pro EN offers 10, 40, and 56Gb/s Ethernet, while the X520 only reaches the 10GbE level, a significant difference in capability. Furthermore, because ConnectX-3 Pro EN uses a PCIe Generation 3.0 bus, it can run at 8 GT/s (gigatransfers per second), compared to only 5 GT/s with the X520's PCIe Gen 2.0 bus.

Mellanox's ConnectX family of adapters also supports RDMA over Converged Ethernet (RoCE), which enables zero-copy data transfers and reduces CPU overhead tremendously (a short code sketch later in this brief illustrates the mechanism). Moreover, ConnectX-3 Pro EN enables hardware offloading via TCP stateless offloads even for overlay networks such as NVGRE and VXLAN, further freeing the CPU for other activities. Neither RoCE nor hardware offloading for overlay networks is available in the X520.

ConnectX-3 Pro EN also includes congestion control features (QCN and ECN), which ensure that the maximum bandwidth can be passed efficiently during congestion events. Again, these are lacking in the X520.

Finally, for customers seeking the ultimate in data center simplicity and peace of mind, the ConnectX family is but one piece of Mellanox's end-to-end suite of interconnect products, including switches, cables, and management software. Mellanox even offers a version of ConnectX-3 Pro EN with Virtual Protocol Interconnect (VPI), a Mellanox technology that allows a device to be converted from Ethernet to InfiniBand or vice versa. This guarantees that the customer can grow the network based on the latest requirements instead of on legacy equipment. The X520, on the other hand, is a standalone offering with little flexibility for current data center integration or future growth.

Table 1. Technological Features

Infrastructure     | Mellanox ConnectX-3 Pro EN                 | Intel X520
Network Ports      | 10 / 40 / 56GbE; Ethernet and VPI options  | 1 / 10GbE
Host Interface     | PCIe 3.0 x8, 8GT/s                         | PCIe 2.0 x8, 5GT/s
Power Consumption (IC on 10GbE board) | 3.8W                    | 6.5W
RDMA               | RoCE                                       | No
Latency            | 0.8µs (RDMA), 6µs (TCP)                    | 12µs (TCP)
Overlay Networks   | VXLAN and NVGRE hardware offloads          | No
Congestion Control | QCN (L2), ECN (L3)                         | No

Conclusion: From a technological standpoint, ConnectX-3 Pro EN is at least one generation ahead of the X520, and it can therefore better address the specific needs of the applications and markets that require high-performance interconnect.

Performance

Naturally, with higher bandwidth and a more advanced bus comes higher throughput. ConnectX-3 Pro EN reaches a maximum throughput of 38Gb/s, while the X520 can only achieve 9.5Gb/s.
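These ceilings are consistent with the host interfaces listed in Table 1. As a rough, illustrative check (not a figure from the brief), applying the standard PCIe line encodings (8b/10b for Gen 2.0, 128b/130b for Gen 3.0) gives the raw data rate each bus can carry:

\[ \text{PCIe 2.0 x8: } 5\,\mathrm{GT/s} \times 8\ \text{lanes} \times \tfrac{8}{10} = 32\,\mathrm{Gb/s} \]
\[ \text{PCIe 3.0 x8: } 8\,\mathrm{GT/s} \times 8\ \text{lanes} \times \tfrac{128}{130} \approx 63\,\mathrm{Gb/s} \]

After packet and protocol overhead (TLP headers, flow control), a Gen 2.0 x8 slot cannot sustain a 40 or 56GbE port, while Gen 3.0 x8 leaves headroom for 56GbE; the X520's 9.5Gb/s result simply reflects a 10GbE port running near line rate.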
Figure 1. Maximum bandwidth

Even when comparing at 10GbE speed alone, however, ConnectX-3 Pro EN provides better results than the Intel X520 by way of lower latency. ConnectX-3 Pro EN shows 2X better latency results than the X520 (6µs vs. 12µs over TCP, per Table 1), even before incorporating RoCE. With RoCE support, ConnectX-3's latency numbers are 15 times better than the X520's (0.8µs vs. 12µs).

Figure 2. Latency on 10GbE cards, with and without RDMA

Acceleration for the Cloud

When overlay networks (NVGRE and VXLAN) are in use, there is added value in enabling hardware offloading for these protocols to improve utilization of the CPU. ConnectX-3 Pro EN supports this feature, but the X520 does not.
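To make the offload concrete, the sketch below shows the VXLAN encapsulation (per RFC 7348) that a host must otherwise build and checksum in software for every tenant packet. It illustrates the on-the-wire format only; it is not code from either vendor.

```c
#include <stdint.h>

/* VXLAN header, RFC 7348: 8 bytes inserted after an outer
 * Ethernet + IP + UDP (destination port 4789) encapsulation. */
struct vxlan_hdr {
    uint8_t flags;          /* 0x08 => the VNI field is valid */
    uint8_t reserved1[3];
    uint8_t vni[3];         /* 24-bit VXLAN Network Identifier (tenant ID) */
    uint8_t reserved2;
};

/* Each tenant frame on the wire becomes:
 *   outer Ethernet + outer IP + outer UDP + vxlan_hdr + inner Ethernet frame
 * Without NIC support, the CPU must build the outer headers, compute the
 * outer checksums, and forgo segmentation offload on the inner flow.
 * Hardware VXLAN offload moves that per-packet work onto the adapter,
 * which is the CPU-utilization gap the figures below illustrate. */
```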
Figure 3. 10GbE throughput of VXLAN traffic

Figure 4. CPU utilization per 1Gb/s with and without NVGRE hardware offloading

Figure 5. CPU utilization per 1Gb/s with and without VXLAN hardware offloading
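The Technology section credits RoCE with zero-copy transfers and lower CPU overhead. A minimal sketch of the enabling step, registering application memory with the adapter through the standard libibverbs API, is shown below; a real RoCE application would additionally create queue pairs, exchange keys with the peer, and post work requests. Build command and error handling are simplified assumptions.

```c
/* Minimal sketch: registering a buffer for zero-copy RDMA via libibverbs.
 * Build (assumption): gcc rdma_sketch.c -o rdma_sketch -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) { fprintf(stderr, "device/PD setup failed\n"); return 1; }

    /* Register application memory with the NIC. After this, the adapter
     * can DMA directly to and from buf with no intermediate kernel copies,
     * which is where RoCE's CPU savings come from. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* A peer holding the rkey and buffer address could now issue RDMA
     * READ/WRITE operations against this memory without involving this
     * host's CPU in the data path. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    free(buf);
    ibv_free_device_list(devs);
    return 0;
}
```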
Conclusion: ConnectX-3 Pro EN offers a significant performance advantage over Intel's X520, and it offers additional features that provide an even greater performance boost. Cloud and other environments that rely on overlay network protocols such as VXLAN and NVGRE will receive much higher bandwidth per server. Offloading the CPU also gives the cloud administrator a much better ratio of VMs per server with Mellanox ConnectX-3 Pro EN than with the competition.

Power Consumption

Not only does ConnectX-3 Pro EN offer higher performance, but it provides it while also consuming less power than the competition.(1)

Table 2. Power Features

Feature              | Mellanox ConnectX-3 Pro Dual-Port SFP+ | Mellanox ConnectX-3 Pro Single-Port QSFP | Intel X520 Dual-Port SFP+
Speed                | 10GbE                                  | 40/56GbE                                 | 10GbE
Ports                | 2                                      | 1                                        | 2
PCIe                 | x8 Gen3                                | x8 Gen3                                  | x8 Gen2
Power Consumption    | …                                      | 5.32W                                    | 4.5W
Power per 1Gb (W/Gb) | …                                      | …                                        | …

Figure 6. Watts per 1Gb/s of supported speed

Conclusion: When power savings are multiplied across hundreds or thousands of NICs, ConnectX-3 Pro EN becomes the clear choice, thanks to its more efficient consumption for similar (or even greater) bandwidth.

(1) Data is collected from Intel and Mellanox specifications and datasheets. Power consumption is based only on the adapter; it does not include cooling, installation, and other tangential factors.

Return on Investment

We have already shown ConnectX-3 Pro EN's superiority over the X520 in bandwidth and latency, but performance alone is not enough to consider. It is also worthwhile to compare the two cards with regard to how much performance they provide per dollar.
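As a rough illustration (not a calculation from the brief), take the Table 1 IC power figures and the April 2014 industrial electricity price of 6.75 cents per kilowatt-hour cited below, over the 8,760 hours in a year:

\[ 3.8\,\mathrm{W} \times 8760\,\mathrm{h} \approx 33.3\,\mathrm{kWh} \;\Rightarrow\; 33.3 \times \$0.0675 \approx \$2.25\ \text{per NIC per year} \]
\[ 6.5\,\mathrm{W} \times 8760\,\mathrm{h} \approx 56.9\,\mathrm{kWh} \;\Rightarrow\; 56.9 \times \$0.0675 \approx \$3.84\ \text{per NIC per year} \]

The difference, roughly $1.60 per NIC per year on adapter power alone, scales to about $16,000 annually across 10,000 NICs, before counting the additional cooling that each watt of dissipation requires.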
According to the US Energy Information Administration, the average retail electricity price for industrial consumers in April 2014 was 6.75 cents per kilowatt-hour. By extrapolating the average power consumption numbers over the course of a year and multiplying by this average retail price, it is possible to see the ongoing cost of each adapter card. More important than the cost, however, is the return on investment (ROI): the NIC that provides the highest performance for your money is the card that provides the most value.

Figure 7. Throughput per dollar of power consumption

ConnectX-3 Pro EN has a clear advantage in terms of throughput per dollar. In fact, by choosing the Mellanox card over the Intel one, you not only receive vastly superior performance and save on power consumption; you also gain even greater savings in the long run. Because ConnectX-3 Pro EN is already at least a generation ahead in bandwidth, it is future-proofed: when the demands of the data center or cloud increase, there is no need to upgrade the interconnect.

Conclusion: ConnectX-3 Pro EN is not only the better NIC on performance; it is also the better investment financially.

Bottom Line

Mellanox ConnectX-3 Pro EN is a better NIC than Intel's X520 on all counts and for all the main use cases. Whether for HPC, cloud, Web 2.0, storage, or enterprise data center, ConnectX-3 Pro EN is the leading choice to ensure successful high-performance deployments. When comparing on technology, performance, power consumption, and return on investment, ConnectX-3 Pro EN is the clear leader across the board.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA
Copyright Mellanox Technologies. All rights reserved. Mellanox, the Mellanox logo, ConnectX, and Virtual Protocol Interconnect are registered trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. CB Rev 1.0