White Paper Solarflare High-Performance Computing (HPC) Applications
- Blaise Newton
10G Ethernet: Now Ready for Low-Latency HPC Applications

Solarflare extends the benefits of its low-latency, high-bandwidth 10GbE server adapters to High-Performance Computing (HPC) applications.

INTRODUCTION

High-performance computing clusters rely on predictable low-latency networking to tune inter-process communications, as well as sufficient network bandwidth to support scalable clusters, storage access and management. HPC clusters require high-capacity, low-power, non-blocking fabrics that are easy to deploy, easy to manage, and cost-effective. Their deployment requires a flat switching architecture implemented with high-speed top-of-rack switches and low-latency network adapters to transfer large amounts of data between the large server memories in each compute node.

Until now, HPC deployments generally had to choose between high performance (Infiniband) and low price (Gigabit Ethernet - GbE), as the bandwidth and latency gaps between Ethernet and Infiniband have been significant. However, the emergence of 10 Gigabit Ethernet (10GbE) closes this performance gap and provides a compelling alternative. 10GbE leverages the highest-volume, lowest-cost networking technology, and is starting to ramp in volume and drive down the cost curve. Already cheaper than Infiniband in absolute terms and more cost-effective than GbE in price/performance terms, 10 GbE offers other attractive benefits, including:

Network flexibility. Because 10 GbE is well on its way to becoming the network of choice in the data center, it can be used not only for inter-processor communications, but also for general networking and storage. In an HPC network, 10 GbE can be used for small and large clusters alike. As a result, network administrators have the flexibility to dynamically configure compute clusters according to the needs of a particular application, without being constrained by pre-configured Infiniband and GbE compute nodes.
Backwards compatibility. 10 GbE is fully backwards compatible with GbE, so it can fully leverage the existing networking infrastructure, while allowing compute nodes and new switches to provide much higher performance.

Easy to deploy and manage. 10 GbE is not only compatible with existing network infrastructure, it is also fully compatible with existing applications, so it can be deployed seamlessly with minimal effort and cost.

A key driver in the transition to 10 GbE is the significant increase in the capabilities of the latest generation of servers. These new servers, powered by the latest processors with 8 or more cores each, provide a dramatic increase in compute capability. This in turn drives the need for increased network I/O capability to keep pace with improvements in
application processing. 10 GbE not only provides a much higher-bandwidth pipe, but has also proven to be extremely efficient at network I/O, delivering high performance while minimizing resource utilization. This performance capability, combined with an aggressive cost curve that continues to push pricing down, illustrates why 10 GbE will become a standard feature on server motherboards in the coming years. The bottom line is that GbE is no longer sufficient, nor the most cost-effective solution, for HPC clustering. This paper illustrates how 10 GbE closes the Infiniband performance gap and offers a significant price/performance value proposition over GbE.

THE CASE FOR 10 GIGABIT ETHERNET IN HPC APPLICATIONS

When compared to legacy GbE, Infiniband can provide 10x performance benefits in terms of both bandwidth and latency. However, these performance benefits come with increases in both capital and operational costs. Infiniband equipment is procured at roughly 5x the cost per port of GbE, while the additional cost of learning and installing different networking hardware and of modifying IP-based applications further increases the IT budget. To manage costs, Infiniband networks are used judiciously, and only where the highest-performance HPC clusters are required, leaving little or no headroom in the infrastructure. As a result, Infiniband-based clusters are a relatively fixed resource that cannot be easily expanded, so configurations are relatively inflexible.

Where low cost is needed, GbE cluster networks are used. These clusters typically use the GbE network port available on the server and a cost-effective Ethernet switch. Installing, configuring, and supporting these cluster networks is relatively easy and low-cost, since it leverages general networking expertise already available in most organizations and does not involve installing and configuring special transport protocols.
In addition, these networks provide the most flexibility, as cluster size can be easily adapted to the size required. The trade-off of using GbE is performance: with one tenth the bandwidth and 10 times the latency of Infiniband, this is a significant compromise.

The emergence of 10 GbE has provided a viable alternative for HPC cluster networks, providing users with significant advantages, including:

- Ease of installation and management
- Raw high performance: high bandwidth / low latency
- Exceptional price/performance
- Scalability
- Low power consumption and performance/watt

Ease of Installation and Management

Based on the ubiquitous Ethernet standard, 10 GbE offers the same interoperability and compatibility as GbE, and leverages the same management techniques, so IT personnel can readily upgrade their physical networks to the higher-speed version with little or no impact.
Another aspect of ease of installation is cabling choice. 10 GbE offers a variety of connector and cabling options. A popular option today is SFP+, which uses a small form-factor module to connect to a cable. The most popular module types are:

- an optical transceiver that connects to fiber cabling;
- direct-attach copper, an active cable with an SFP+ connector that plugs directly into the SFP+ cage;
- a 1000BASE-T SFP module, which includes a physical-layer transceiver compatible with the 1000BASE-T standard and adapts RJ45 connectors to a standard SFP+ cage.

In addition to SFP+, 10 GbE server adapters are also available with native 10GBASE-T, which leverages existing Ethernet cabling and RJ45 connectors. 10GBASE-T provides backwards compatibility and automatic speed negotiation, which enables HPC cluster nodes to utilize existing GbE switching infrastructure or upgrade to 10 GbE switches. This ability to upgrade servers and server adapters independently of the network infrastructure provides important flexibility in managing purchase costs and upgrade cycles.

High Performance, High Value, and Low Latency

Table 1 compares the performance and cost of the GbE and 10 GbE alternatives for the HPC cluster networks being discussed. As the table shows, 10 GbE improves on GbE bandwidth by 10x. What is less obvious is the dramatic 5x reduction in latency that 10GbE offers, which becomes critical in high-traffic networks. Additionally, 10GbE enables scalable I/O for high-bandwidth applications, with modest CPU utilization, that cannot be addressed by GbE. For example, Solarflare testing has demonstrated that application-level TCP/IP throughput in excess of 120Gbps is possible on a quad-socket Nehalem-EX platform while consuming only 25% CPU, and 38Gbps is easily achieved on single-socket (Westmere) servers at approximately 20% utilization.
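Table 1 below lists an efficiency metric in Gbps per %CPU, and figures like the ones just quoted are what such a metric is derived from. A minimal sketch, using the throughput and utilization numbers from the text; treating "efficiency" simply as throughput divided by CPU percentage is an assumption, not Solarflare's published formula:

```python
# Illustrative Gbps-per-%CPU efficiency from the figures quoted above.
# Interpreting efficiency as throughput / CPU% is an assumption.
def efficiency_gbps_per_cpu_pct(throughput_gbps, cpu_pct):
    return throughput_gbps / cpu_pct

quad_socket = efficiency_gbps_per_cpu_pct(120.0, 25.0)   # Nehalem-EX result
single_socket = efficiency_gbps_per_cpu_pct(38.0, 20.0)  # Westmere result

print(f"quad-socket:   {quad_socket:.1f} Gbps per %CPU")    # 4.8
print(f"single-socket: {single_socket:.1f} Gbps per %CPU")  # 1.9
```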
This level of network bandwidth cannot be attained by scaling multiple GbE ports, and it unlocks the I/O potential of multi-core servers, enabling large cluster scaling for many HPC codes without the use of proprietary interconnects. Most importantly, on a bandwidth-adjusted basis 10 GbE is less expensive to acquire than GbE.

                            1Gb Ethernet    10Gb Ethernet
    Bandwidth               1 Gbps          10 Gbps
    Latency (1/2 RTT)       20 µsec         4 µsec
    Efficiency (Gbps/%CPU)
    Price per Gbps/Port     $160            $66

Table 1: HPC Interconnect Comparison: GbE vs. 10GbE

Compared to Infiniband, 10 GbE performance and cost compare favorably. Additionally, 10 GbE leverages GbE installation practices, which eliminates both special training and application modification, thus significantly lowering OpEx in addition to CapEx.
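The "Latency (1/2 RTT)" figure in Table 1 is the conventional one-way latency metric: half the time for a small-message round trip. A minimal sketch of such a ping-pong measurement over standard TCP sockets, run here over loopback; the message size, iteration count, and helper names are illustrative and this is not Solarflare's benchmark harness:

```python
import socket
import threading
import time

MSG_SIZE = 32      # small message, so the test is latency-bound (illustrative)
ITERS = 1000       # iteration count is illustrative

def _recv_exactly(sock, n):
    """Read exactly n bytes (TCP may deliver fewer per recv call)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def _echo_server(srv):
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)      # echo everything back

def half_rtt_usec(host="127.0.0.1"):
    srv = socket.socket()
    srv.bind((host, 0))             # ephemeral port
    srv.listen(1)
    threading.Thread(target=_echo_server, args=(srv,), daemon=True).start()

    cli = socket.create_connection((host, srv.getsockname()[1]))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
    msg = b"x" * MSG_SIZE

    start = time.perf_counter()
    for _ in range(ITERS):
        cli.sendall(msg)
        _recv_exactly(cli, MSG_SIZE)
    elapsed = time.perf_counter() - start

    cli.close()
    srv.close()
    # each iteration is a full round trip; halve it for the one-way figure
    return elapsed / ITERS / 2.0 * 1e6

print(f"half-RTT over loopback: {half_rtt_usec():.1f} usec")
```

Loopback figures through a kernel sockets stack will not match Table 1's NIC-to-NIC numbers; the point is the methodology behind the 1/2 RTT metric.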
Greater Power Efficiency, Performance/Watt

Table 1 also compares the power efficiency of GbE and 10 GbE, reflecting that the 10 GbE products available today are extremely power-efficient. For example, a single port of 10 GbE can deliver 5x the bandwidth at the same power consumption level as an integrated dual-port GbE LAN-on-motherboard (LOM) device. Furthermore, the GbE LOM can be disabled and replaced with a 10GbE server adapter, which consumes only 2x the power and is far more power-efficient than scaling bandwidth using multiple GbE ports. This superior power efficiency enables far greater performance and cluster scaling, as power and cooling limitations typically become an issue.

10 GIGABIT ETHERNET HPC MIGRATION

These technology and business reasons are supported by market analyses that suggest a strong migration of HPC deployments to 10 GbE in the next three years, from today's dominant Infiniband and GbE deployments. As a reference point, today GbE and Infiniband each hold approximately 40% share of the HPC cluster interconnect market, and will ship approximately 1.3 million ports into this market in 2010, according to IDC. Although not yet a significant factor in HPC clustering, overall server-based port shipments of 10 GbE are expected to exceed 2.7 million ports in 2010, according to Dell'Oro. Looking ahead just a few years to 2013, Infiniband server port shipments are expected to roughly double, while overall 10 GbE server port shipments are forecast to grow more than 8x, to over 23 million ports. Analogous to the 1GbE transition, 10 GbE is on the leading edge of a steep price curve that will make it a compelling, cost-effective technology. Since 10 GbE provides substantial increases in performance (latency and bandwidth), ease of installation and management, and full application compatibility, the trade-off between performance and cost is no longer necessary: 10 GbE provides both.
2009 HPC interconnect share (Source: Market_and_Interconnects_Report.pdf, Solarflare):

    1G Ethernet    43%
    Infiniband     40%
    Other          17%

Table 2: HPC Interconnect market trends

SOLARFLARE 10 GIGABIT ETHERNET TECHNOLOGY LEADERSHIP

While 10 GbE provides significant performance and cost advantages over the alternatives, as with all technologies some implementations are better than others. Solarflare server adapters deliver the industry's lowest power, lowest latency, lowest CPU utilization, and highest application performance, along with industry-leading scalability. Benchmark results show that Solarflare's technology beats all other competitors in both bandwidth and latency. For example, Table 3 shows the results of performance testing using the MPI 7 (HP MPI) cluster stack, running over 64-bit RHEL 5.5 on a two-way quad-core 2.4GHz Westmere (E5620) server with 6x 1GByte DIMMs. This testing compares the performance of Solarflare's SFN5122F server adapter with that of adapters from popular HPC competitors. The results show that Solarflare's
SFN5122F, using OpenOnload application-acceleration software, outperforms all other competitors. Further, when compared to server adapters using RDMA, the SFN5122F delivers 30% lower latency than its nearest competitor. These results show that with Solarflare, the highest performance can be achieved without compromise: no need to modify applications, upgrade the network, use proprietary or dual-ended protocols, or apply other techniques that require modification or specialization.

Table 3: HP MPI Send/Receive latency results

SUMMARY

Solarflare 10 GbE server adapters provide exceptional price/performance for HPC networks. Delivering latency to within 2μs of Infiniband and 40Gbps aggregate bandwidth for a dual-port server adapter, 10 GbE is more cost-effective than GbE, and easily replaces or scales out GbE installations.
OPENONLOAD APPLICATION ACCELERATION MIDDLEWARE FURTHER LOWERS LATENCY

Solarflare server adapters deliver the lowest kernel latency performance on the market, and OpenOnload significantly extends that latency advantage. When combined with Solarflare's OpenOnload application-accelerator middleware, Solarflare 10 GbE adapters can deliver sub-4-microsecond TCP/UDP application-to-application latencies, while supporting message rates in the millions, reducing latency jitter, and bringing a greater level of predictability to message-processing latency.

SOLARFLARE'S FAMILY OF HPC SOLUTIONS

Solarflare offers single- and dual-port 10 GbE server adapters that deliver high bandwidth and industry-leading latency and power, with stateless offloads that minimize CPU utilization. The Solarflare family supports both SFP+ and 10GBASE-T media. The SFP+ adapters support optical modules or direct-attach copper twin-ax cables, while the 10GBASE-T adapters support Category 6A, 6, and 5E cables, which are compatible with existing data center infrastructures for distances up to 100 meters.

Solarflare's two families of server adapters meet a broad range of HPC networking needs. Enterprise server adapters are targeted at applications demanding the lowest latency and the most scalable virtualization. Midrange server adapters offer exceptional 10GbE value.

OpenOnload is binary compatible with the BSD sockets API, requires no modification of the end user's application, and, because it is completely compatible with TCP/IP and Ethernet, requires no new wire protocols nor upgrades to the network.

ABOUT SOLARFLARE COMMUNICATIONS, INC.

Solarflare Communications is the leading provider of 10 Gigabit Ethernet (10GbE) silicon and server adapters. Solarflare's robust and power-efficient solutions are cost-effective and easy to deploy.
Ready for primetime, Solarflare 10GbE products and OpenOnload make possible next-generation applications such as low-latency networking for market data applications, cloud computing, server virtualization, and network convergence. For more information on Solarflare, please visit

WORLDWIDE OFFICES

USA: Solarflare Communications, 9501 Jeronimo Road, Suite 250, Irvine, CA 92618, USA. Phone: x2050

EMEA: Solarflare Communications Development Office, Westbrook Centre, Building 2, Milton Road, Cambridge UK CB4 1YG. Phone: +44 (0) x5530
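The BSD-sockets binary compatibility claimed for OpenOnload above means an ordinary, unmodified sockets program is the unit of acceleration. A minimal UDP sketch with nothing OpenOnload-specific in the code; whether a deployment runs it under the OpenOnload runtime (for example via the onload application launcher) is launch-time configuration outside the program, and that invocation should be treated as an assumption here:

```python
import socket

# A plain BSD-sockets UDP exchange over loopback: the kind of unmodified
# application code that sockets-compatible acceleration middleware targets.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))           # ephemeral port on loopback
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"market-data-tick", addr)  # standard sendto(), no special API

payload, _ = receiver.recvfrom(2048)
print(payload.decode())                   # market-data-tick

sender.close()
receiver.close()
```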
CUTTING-EDGE SOLUTIONS FOR TODAY AND TOMORROW Dell PowerEdge M-Series Blade Servers Simplifying IT The Dell PowerEdge M-Series blade servers address the challenges of an evolving IT environment by delivering
Cluster Grid Interconects. Tony Kay Chief Architect Enterprise Grid and Networking
Cluster Grid Interconects Tony Kay Chief Architect Enterprise Grid and Networking Agenda Cluster Grid Interconnects The Upstart - Infiniband The Empire Strikes Back - Myricom Return of the King 10G Gigabit
Informatica Ultra Messaging SMX Shared-Memory Transport
White Paper Informatica Ultra Messaging SMX Shared-Memory Transport Breaking the 100-Nanosecond Latency Barrier with Benchmark-Proven Performance This document contains Confidential, Proprietary and Trade
InfiniBand in the Enterprise Data Center
InfiniBand in the Enterprise Data Center InfiniBand offers a compelling value proposition to IT managers who value data center agility and lowest total cost of ownership Mellanox Technologies Inc. 2900
Simplifying the Data Center Network to Reduce Complexity and Improve Performance
SOLUTION BRIEF Juniper Networks 3-2-1 Data Center Network Simplifying the Data Center Network to Reduce Complexity and Improve Performance Challenge Escalating traffic levels, increasing numbers of applications,
Cisco SFS 7000P InfiniBand Server Switch
Data Sheet Cisco SFS 7000P Infiniband Server Switch The Cisco SFS 7000P InfiniBand Server Switch sets the standard for cost-effective 10 Gbps (4X), low-latency InfiniBand switching for building high-performance
A Platform Built for Server Virtualization: Cisco Unified Computing System
A Platform Built for Server Virtualization: Cisco Unified Computing System What You Will Learn This document discusses how the core features of the Cisco Unified Computing System contribute to the ease
Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks
WHITE PAPER July 2014 Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks Contents Executive Summary...2 Background...3 InfiniteGraph...3 High Performance
10 Port L2 Managed Gigabit Ethernet Switch with 2 Open SFP Slots - Rack Mountable
10 Port L2 Managed Gigabit Ethernet Switch with 2 Open SFP Slots - Rack Mountable StarTech ID: IES101002SFP The IES101002SFP 10-port Ethernet switch delivers flexibility and control of your network by
LAYER3 HELPS BUILD NEXT GENERATION, HIGH-SPEED, LOW LATENCY, DATA CENTER SOLUTION FOR A LEADING FINANCIAL INSTITUTION IN AFRICA.
- LAYER3 HELPS BUILD NEXT GENERATION, HIGH-SPEED, LOW LATENCY, DATA CENTER SOLUTION FOR A LEADING FINANCIAL INSTITUTION IN AFRICA. Summary Industry: Financial Institution Challenges: Provide a reliable,
Michael Kagan. [email protected]
Virtualization in Data Center The Network Perspective Michael Kagan CTO, Mellanox Technologies [email protected] Outline Data Center Transition Servers S as a Service Network as a Service IO as a Service
Simplify VMware vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters
WHITE PAPER Intel Ethernet 10 Gigabit Server Adapters vsphere* 4 Simplify vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters Today s Intel Ethernet 10 Gigabit Server Adapters can greatly
RoCE vs. iwarp Competitive Analysis
WHITE PAPER August 21 RoCE vs. iwarp Competitive Analysis Executive Summary...1 RoCE s Advantages over iwarp...1 Performance and Benchmark Examples...3 Best Performance for Virtualization...4 Summary...
Cisco Unified Computing System: Meet the Challenges of Virtualization with Microsoft Hyper-V
White Paper Cisco Unified Computing System: Meet the Challenges of Virtualization with Microsoft Hyper-V What You Will Learn The modern virtualized data center is today s new IT service delivery foundation,
High Performance MySQL Cluster Cloud Reference Architecture using 16 Gbps Fibre Channel and Solid State Storage Technology
High Performance MySQL Cluster Cloud Reference Architecture using 16 Gbps Fibre Channel and Solid State Storage Technology Evaluation report prepared under contract with Brocade Executive Summary As CIOs
Extreme Networks: Building Cloud-Scale Networks Using Open Fabric Architectures A SOLUTION WHITE PAPER
Extreme Networks: Building Cloud-Scale Networks Using Open Fabric Architectures A SOLUTION WHITE PAPER WHITE PAPER Building Cloud- Scale Networks Abstract TABLE OF CONTENTS Introduction 2 Open Fabric-Based
Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com
Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c
Where IT perceptions are reality. Test Report. OCe14000 Performance. Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine
Where IT perceptions are reality Test Report OCe14000 Performance Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine Document # TEST2014001 v9, October 2014 Copyright 2014 IT Brand
Introduction to Cloud Design Four Design Principals For IaaS
WHITE PAPER Introduction to Cloud Design Four Design Principals For IaaS What is a Cloud...1 Why Mellanox for the Cloud...2 Design Considerations in Building an IaaS Cloud...2 Summary...4 What is a Cloud
Cisco SFS 7000D Series InfiniBand Server Switches
Q&A Cisco SFS 7000D Series InfiniBand Server Switches Q. What are the Cisco SFS 7000D Series InfiniBand switches? A. A. The Cisco SFS 7000D Series InfiniBand switches are a new series of high-performance
