RoCE vs. iWARP Competitive Analysis
WHITE PAPER
August 2015

Contents
Executive Summary
RoCE's Advantages over iWARP
Performance and Benchmark Examples
Best Performance for Virtualization
Summary

Executive Summary

Remote Direct Memory Access (RDMA) provides direct access from the memory of one computer to the memory of another without involving either computer's operating system. This technology enables high-throughput, low-latency networking with low CPU utilization, which is especially useful in massively parallel compute clusters.

RDMA over Converged Ethernet (RoCE) is the most commonly used RDMA technology for Ethernet networks and is deployed at scale in some of the largest hyperscale data centers in the world. RoCE is the only industry-standard Ethernet-based RDMA solution with a multi-vendor ecosystem delivering network adapters, and it operates over standard Layer 2 and Layer 3 Ethernet switches. The RoCE technology is standardized within industry organizations including the IBTA, IEEE, and IETF. Mellanox Technologies was the first company to implement the new standard, and its ConnectX-3 Pro and ConnectX-4 product families implement a complete offload of the RoCE protocol. These solutions provide wire-speed throughput of up to 100Gb/s and market-leading latency, with the lowest possible CPU and memory utilization. As a result, the ConnectX family of adapters has been deployed in a variety of mission-critical, latency-sensitive data centers.

RoCE's Advantages over iWARP

iWARP is an alternative RDMA offering that is more complex and unable to achieve the same level of performance as RoCE-based solutions. iWARP uses a complex mix of layers, including DDP (Direct Data Placement), a tweak known as MPA (Marker PDU Aligned framing), and a separate RDMA protocol (RDMAP), to deliver RDMA services over TCP/IP. This convoluted architecture is an ill-conceived attempt to fit RDMA into existing software transport frameworks. Unfortunately, this compromise causes iWARP to fail to deliver on precisely the three key benefits that RoCE achieves: high throughput, low latency, and low CPU utilization.

In addition to the complexity and performance disadvantages, only a single vendor (Chelsio) supports iWARP on its current products, and the technology has not been well adopted by the market. Intel previously supported iWARP in its 10GbE NIC from 2009, but has not supported it in any of its newer NICs since then. No iWARP support is available at the latest Ethernet speeds of 25, 50, and 100Gb/s.
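As background for the comparison that follows, the sketch below (not part of the original paper) shows the first steps an application takes through the verbs interface: opening an RDMA device and registering a buffer that the adapter can then read and write directly, bypassing the operating system on the data path. It is a minimal illustration only; queue-pair setup, key exchange with the peer, and the RDMA operations themselves are omitted, and whichever device the system lists first is assumed to be usable.

```c
/* Minimal libibverbs sketch: open the first RDMA device and register a
 * buffer the adapter can access directly.  Illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx)
        return 1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);          /* protection domain */
    if (!pd)
        return 1;

    /* Register 4 KB of ordinary memory; the returned keys let the local
     * and remote adapters DMA into this buffer with zero copies and no
     * kernel involvement on the data path. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    if (!mr)
        return 1;

    printf("device %s: registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           ibv_get_device_name(devs[0]), len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```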
iWARP is designed to work over the existing TCP transport and is essentially an attempt to patch up existing LAN/WAN networks. The Ethernet data link delivers best-effort service, relying on the TCP layer to provide reliable delivery. The need to support existing IP networks, including wide area networks, requires covering a larger set of boundary conditions with respect to congestion handling, scaling, and error handling, which makes hardware offload of the RDMA and associated transport operations inefficient. RoCE, on the other hand, is a purpose-built RDMA transport protocol for Ethernet, not a patch layered on top of the existing TCP/IP stack.

Figure 1. iWARP's complex network layers vs. RoCE's simpler model: both expose the RDMA application, verbs interface, and OFA (OpenFabrics Alliance) stack, but iWARP layers RDMAP, DDP, and MPA over TCP, IP, and the Ethernet link layer, while RoCE runs the IBTA transport protocol over UDP, IP, and the Ethernet link layer.

Because TCP is connection-based, it must use reliable transport. iWARP therefore supports only the reliable connected transport service, which also means that it is not an appropriate platform for multicast. RoCE offers a variety of transport services, including reliable connected, unreliable datagram, and others, and enables user-level multicast capability.

iWARP traffic also cannot be easily managed and optimized in the fabric itself, leading to inefficient deployments. iWARP provides no way to detect RDMA traffic at or below the transport layer, for example within the fabric itself. Because iWARP shares TCP's port space, flow management is impossible: the port alone cannot identify whether a message carries RDMA or traditional TCP traffic. iWARP shares the protocol number space with legacy TCP traffic, so context (state) is required to determine that a packet is iWARP. This context typically does not fit in the NIC's on-chip memory, which adds considerable complexity and therefore lengthens traffic demultiplexing. The same problem occurs in the switches and routers of the fabric, where no such state is available at all. In contrast, a packet can be identified as RoCE simply by looking at its UDP destination port field: if the value matches the IANA-assigned port for RoCE, the packet is RoCE. This stateless traffic identification allows quick and early demultiplexing of traffic in a converged NIC implementation, and enables capabilities such as switch or fabric monitoring and access control lists (ACLs) for improved traffic flow analysis and management.

Similarly, because iWARP shares port space with the legacy TCP stack, it also faces challenges integrating with OS stacks, whereas RoCE offers full OS stack integration. These challenges limit the cost-effectiveness and deployability of iWARP products, especially in comparison to RoCE.

RoCE includes IP and UDP headers in the packet encapsulation, meaning that RoCE can be used across both L2 and L3 networks. This enables Layer 3 routing, which brings RDMA to networks with multiple subnets.

Finally, by deploying Soft-RoCE (Figure 2), the software implementation of RoCE, RoCE can be extended to devices that do not natively support RoCE in hardware. This enables greater flexibility in leveraging RoCE's benefits in the data center.
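To make the transport-service distinction above concrete, the following small sketch uses the standard verbs API: the queue pair type selects the transport service, so an iWARP device can offer only the reliable connected (RC) type, while a RoCE device can also create unreliable datagram (UD) queue pairs, which are the basis for user-level multicast. The helper function, its parameters, and the queue sizes are illustrative assumptions, not something taken from the paper.

```c
/* Sketch: choosing the verbs transport service via the QP type.
 * Assumes an already-opened device context and protection domain,
 * for example from the earlier sketch. */
#include <infiniband/verbs.h>

struct ibv_qp *make_qp(struct ibv_context *ctx, struct ibv_pd *pd, int use_ud)
{
    /* One small completion queue shared by send and receive. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    if (!cq)
        return NULL;

    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap     = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
        /* RC = reliable connected (the only service iWARP offers);
         * UD = unreliable datagram (available with RoCE, and what
         * user-level multicast is built on). */
        .qp_type = use_ud ? IBV_QPT_UD : IBV_QPT_RC,
    };
    return ibv_create_qp(pd, &attr);
}
```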
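The stateless identification described above can also be sketched in a few lines. A RoCEv2 packet is carried in UDP with the IANA-assigned destination port 4791, so a NIC, switch, or monitoring tool can classify a frame without any per-connection state. The parsing below is a simplified illustration assuming untagged Ethernet and plain IPv4, not production classification code; the equivalent check is impossible for iWARP, because an iWARP segment cannot be told apart from ordinary TCP without connection state.

```c
/* Sketch: stateless RoCEv2 classification.  A frame is treated as RoCE if
 * it is IPv4/UDP and the UDP destination port is the IANA-assigned RoCEv2
 * port (4791).  Simplified: untagged Ethernet, IPv4 without tunneling. */
#include <stdint.h>
#include <stddef.h>

#define ETHERTYPE_IPV4   0x0800
#define IPPROTO_UDP_NUM  17
#define ROCEV2_UDP_PORT  4791

int is_rocev2_frame(const uint8_t *frame, size_t len)
{
    if (len < 14 + 20 + 8)                          /* Eth + min IPv4 + UDP */
        return 0;

    uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);
    if (ethertype != ETHERTYPE_IPV4)
        return 0;

    const uint8_t *ip = frame + 14;
    size_t ihl = (size_t)(ip[0] & 0x0f) * 4;        /* IPv4 header length */
    if (ip[9] != IPPROTO_UDP_NUM || len < 14 + ihl + 8)
        return 0;

    const uint8_t *udp = ip + ihl;
    uint16_t dport = (uint16_t)((udp[2] << 8) | udp[3]);
    return dport == ROCEV2_UDP_PORT;                /* stateless decision */
}
```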
Figure 2. Soft-RoCE Architecture: in user space, the application runs over either the Soft-RoCE user driver or the HW-RoCE user driver, alongside the sockets interface; in the kernel, the Soft-RoCE driver and the TCP/IP stack sit above the standard Ethernet NIC driver, while the HW-RoCE kernel driver drives a RoCE HCA directly.

Performance and Benchmark Examples

Enterprise data center (EDC) latency-sensitive applications such as Hadoop for real-time data analysis are cornerstones of competitiveness for Web 2.0 and Big Data providers. Such platforms can benefit from Mellanox's ConnectX-3 Pro, as its RoCE solution delivers extremely low latencies on Ethernet while scaling to handle millions of messages per second. A benchmark comparing a messaging application running over 40Gb Ethernet iWARP on Chelsio's T5 against the same application running over RoCE on ConnectX-3 Pro shows that RoCE consistently delivered messages significantly faster than iWARP (Figure 3).

Figure 3. 40Gb Ethernet Latency Benchmark (latency in μs, Chelsio iWARP vs. ConnectX-3 Pro RoCE)

When measuring ConnectX-3's RoCE latency against Intel's NetEffect NE020 iWARP, the results are even more impressive: at 10Gb/s, RoCE delivered substantially lower latency at both message sizes tested, with a further advantage at 40Gb/s (Figure 4).

Message Size (Bytes)   NetEffect NE020 iWARP   ConnectX-3 10GbE RoCE   ConnectX-3 40GbE RoCE
                       7.22 μs                 1 μs                    .7 μs
                       1. μs                   3.79 μs                 2.13 μs

Figure 4. Mellanox RoCE and Intel iWARP Latency Benchmark
Figure 5. 40Gb Ethernet Throughput Benchmark (RDMA Write bandwidth in Gb/s, Chelsio iWARP vs. ConnectX-3 Pro RoCE)

Meanwhile, throughput when using RoCE at 40Gb/s on ConnectX-3 Pro is over 2X higher than when using iWARP on the Chelsio T5 (Figure 5), and 5X higher than when using iWARP at 10Gb/s with Intel (Figure 6).

Figure 6. 10Gb Ethernet Throughput Benchmark (RDMA Write bandwidth in Gb/s, Intel iWARP vs. ConnectX-3 RoCE)

Best Performance for Virtualization

A further advantage of RoCE is its ability to run over SR-IOV, delivering RoCE's superior performance of lowest latency, lowest CPU utilization, and maximum throughput in a virtualized environment. RoCE has proven it can provide less than 1 μs latency between virtual machines while maintaining consistent throughput as the virtual environment scales. Chelsio's iWARP does not run over multiple VMs in SR-IOV, relying instead on TCP for VM-to-VM communication. The difference in latency is astounding (Figure 7).

Figure 7. 40Gb Ethernet Latency from VM to VM (VM-to-VM latency in μs, RoCE vs. TCP)
Summary

RoCE simplifies the transport protocol: it bypasses the TCP stack to enable true and scalable RDMA operations, resulting in higher ROI. RoCE is a standard protocol built specifically with data center traffic in mind, with careful consideration paid to latency, performance, and CPU utilization, and it performs especially well in virtualized environments. When a network runs over Ethernet, RoCE provides a superior solution compared to iWARP. For the enterprise data center seeking the ultimate in performance, RoCE is clearly the choice, especially when latency-sensitive applications are involved. Moreover, RoCE is currently deployed in dozens of data centers with up to hundreds of thousands of nodes, while iWARP is virtually non-existent in the field. Simply put, RoCE is the obvious way to deploy RDMA over Ethernet.

Copyright 2015. Mellanox Technologies. All rights reserved. Mellanox, the Mellanox logo, and ConnectX are registered trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.