
InfiniBand Switch System Family: Highest Levels of Scalability, Simplified Network Manageability, Maximum System Productivity

Mellanox Smart InfiniBand Switch Systems are the highest-performing interconnect solution for Web 2.0, Big Data, Cloud Computing, Enterprise Data Centers (EDC) and High-Performance Computing (HPC).

VALUE PROPOSITIONS
- Mellanox switches come in port configurations from 8 to 648 ports at speeds of up to 100Gb/s per port, with the ability to build clusters that scale out to tens of thousands of nodes.
- Mellanox switches deliver high bandwidth with sub-90ns latency for the highest server efficiency and application productivity.
- Best price/performance solution with error-free 40-100Gb/s link speeds.
- Mellanox managed switches support Virtual Protocol Interconnect (VPI), allowing them to run seamlessly over both InfiniBand and Ethernet fabrics (see the sketch after this list).
- Switch-IB 2 is the world's first smart switch, enabling in-network computing through the Co-Design SHArP technology.
- Switch-IB switches support InfiniBand routing to enable the highest scalability (>100K nodes) and full fabric isolation.
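On the host side, VPI is visible through the standard RDMA verbs API: the same device reports its active protocol as a port attribute. A minimal sketch of detecting the link layer with libibverbs; the choice of the first device and of port 1 is an assumption for illustration:

```c
/* Query whether a VPI port is currently running InfiniBand or Ethernet.
   Build with: gcc vpi_check.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(list[0]); /* first device, for illustration */
    if (!ctx) {
        fprintf(stderr, "failed to open device\n");
        return 1;
    }

    struct ibv_port_attr attr;
    if (ibv_query_port(ctx, 1, &attr) == 0)             /* port 1 assumed */
        printf("port 1 link layer: %s\n",
               attr.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet" : "InfiniBand");

    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```

Because the protocol is reported (and selected) per port, the same adapter, cabling and software stack can serve either fabric type.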

Mellanox's family of InfiniBand switches delivers the highest performance and port density with a complete chassis and fabric management solution, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The family includes a broad portfolio of Edge and Director switches that range from 8 to 648 ports and support 40-100Gb/s per port with the lowest latency.

Mellanox InfiniBand Software Defined Networking (SDN) switches ensure separation between the control and data planes. Centralized management and programmability of the network by external applications enable a cost-effective, simple and flat interconnect infrastructure.

Virtual Protocol Interconnect (VPI)
VPI flexibility enables any standard networking, clustering, storage, and management protocol to operate seamlessly over any converged network, leveraging a consolidated software stack. VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

Scalable Hierarchical Aggregation Protocol (SHArP)
Switch-IB 2 is the world's first smart network switch, designed to enable in-network computing through the Co-Design SHArP technology. The Co-Design architecture enables the use of all active data center devices to accelerate the communication frameworks, resulting in order-of-magnitude application performance improvements (see the MPI sketch after the benefits list).

BENEFITS
- Industry-leading energy efficiency, density, and cost savings
- Ultra-low latency
- Granular QoS for cluster, LAN and SAN traffic
- Quick and easy setup and management
- Maximized performance by removing fabric congestion
- Fabric management for cluster and converged I/O applications
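In-network computing with SHArP is transparent to applications: a collective operation such as an all-reduce is expressed in standard MPI, and a SHArP-capable fabric executes the reduction tree inside the switch ASICs rather than on host CPUs. A minimal sketch, assuming an MPI stack with SHArP support enabled (for example, via Mellanox HPC-X); the application code itself is unmodified MPI:

```c
/* Standard MPI all-reduce; on a SHArP-enabled fabric the summation is
   offloaded to the switches, but this code is unchanged either way. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;   /* each rank contributes its own value */
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks = %.0f (expected %d)\n", global, size * (size - 1) / 2);

    MPI_Finalize();
    return 0;
}
```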

Edge Switches
8- to 36-port non-blocking 40 to 100Gb/s InfiniBand switch systems. The Mellanox family of edge switch systems provides the highest-performing fabric solutions in a 1U form factor, delivering up to 7.2Tb/s of non-blocking bandwidth (36 ports x 100Gb/s x 2 directions of full-duplex traffic) with the lowest port-to-port latency. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium sized clusters. The edge switches, offered as externally managed or as managed switches, are designed to build the most efficient switch fabrics through the use of advanced InfiniBand switching technologies such as Adaptive Routing, Congestion Control and Quality of Service.

Director Switches
108- to 648-port full bidirectional bandwidth 40 to 100Gb/s InfiniBand switch systems. Mellanox director switches provide the highest-density switching solution, scaling from 8.64Tb/s up to 130Tb/s of bandwidth in a single enclosure, with low latency and the highest per-port speeds of up to 100Gb/s. Their smart design provides unprecedented levels of performance and makes it easy to build clusters that can scale out to thousands of nodes. The InfiniBand director switches deliver the director-class availability required for mission-critical application environments. The leaf and spine blades and management modules, as well as the power supplies and fan units, are all hot-swappable to help eliminate downtime.

Sustained Network Performance
The Mellanox switch family enables efficient computing for clusters of all sizes, from the very small to the extremely large, while offering near-linear scaling in performance. Advanced features such as static routing, adaptive routing, and congestion management allow the switch fabric to dynamically detect and avoid congestion and to re-route around points of congestion. These features ensure the maximum effective fabric performance under all types of traffic conditions.

Reduce Complexity
Mellanox switches reduce complexity by providing seamless connectivity between InfiniBand, Ethernet and Fibre Channel based networks, eliminating the need for separate network technologies with multiple network adapters to operate your data center fabric. Granular QoS and guaranteed bandwidth allocation can be applied per traffic type, ensuring that each type of traffic has the resources needed to sustain the highest application performance (see the service-level sketch after this section).

Reduce Environmental Costs
Improved application efficiency, along with the need for fewer network adapters, allows you to accomplish the same amount of work with fewer, more cost-effective servers. Improved cooling mechanisms and reduced power and heat consumption allow data centers to reduce the costs associated with physical space.
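On InfiniBand, per-traffic-type QoS is realized by tagging each connection with a service level (SL); the subnet manager's SL-to-VL mapping then gives each traffic class its own virtual lane and bandwidth share through the switches. A minimal sketch of how host software selects an SL when moving a reliable-connected QP to the ready-to-receive state; the function name connect_with_sl and the remote_* parameters (normally exchanged out of band, e.g., over TCP) are hypothetical:

```c
#include <infiniband/verbs.h>

/* Transition an RC QP to RTR with an explicit service level, so that the
   fabric's QoS policy (configured in the subnet manager) applies to it. */
int connect_with_sl(struct ibv_qp *qp, uint16_t remote_lid,
                    uint32_t remote_qpn, uint32_t remote_psn, uint8_t sl)
{
    struct ibv_qp_attr attr = {0};

    attr.qp_state           = IBV_QPS_RTR;
    attr.path_mtu           = IBV_MTU_4096;
    attr.dest_qp_num        = remote_qpn;
    attr.rq_psn             = remote_psn;
    attr.max_dest_rd_atomic = 1;
    attr.min_rnr_timer      = 12;
    attr.ah_attr.dlid       = remote_lid;
    attr.ah_attr.sl         = sl;   /* service level: selects the virtual lane
                                       and bandwidth share for this traffic */
    attr.ah_attr.port_num   = 1;    /* local port 1 assumed */

    return ibv_modify_qp(qp, &attr,
                         IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                         IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                         IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER);
}
```

Storage, cluster and management traffic can each be given a different SL, which the switches then schedule according to the configured bandwidth guarantees.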

Enhanced Management Capabilities
Mellanox managed switches come with an onboard subnet manager, enabling simple, out-of-the-box fabric bring-up for up to 2K nodes. Mellanox FabricIT (IS5000 family) or MLNX-OS (SX6000 and SB7000 families) chassis management provides administrative tools to manage the firmware, power supplies, fans, ports, and other interfaces. All Mellanox switches can also be coupled with Mellanox's Unified Fabric Manager (UFM) software for managing scale-out InfiniBand computing environments. UFM enables data center operators to efficiently provision, monitor and operate the modern data center fabric; it boosts application performance and ensures that the fabric is up and running at all times. MLNX-OS provides a license-activated embedded diagnostic tool, Fabric Inspector, to check node-to-node and node-to-switch connectivity and ensure fabric health.

[Figures: Chassis Management; UFM Software]

Edge Switches

Externally Managed           SX6025      SB7790/SB7890
Ports                        36          36
Height                       1U          1U
Switching Capacity           4.032Tb/s   7.2Tb/s
Link Speed                   56Gb/s      100Gb/s
Interface Type               QSFP+       QSFP28
Management                   No          No
PSU Redundancy               Yes         Yes
Fan Redundancy               Yes         Yes
Integrated Gateway           --          --

                             SX6005    SX6012    SX6015     SX6018     SX6036     SB7700/SB7800
Ports                        12        12        18         18         36         36
Height                       1U        1U        1U         1U         1U         1U
Switching Capacity           1.3Tb/s   1.3Tb/s   2.016Tb/s  2.016Tb/s  4.032Tb/s  7.2Tb/s
Link Speed                   56Gb/s    56Gb/s    56Gb/s     56Gb/s     56Gb/s     100Gb/s
Interface Type               QSFP+     QSFP+     QSFP+      QSFP+      QSFP+      QSFP28
Management                   No        Yes       No         Yes        Yes        Yes
Managed Nodes (Subnet Mgr)   --        648       --         648        648        2048
Management Ports             --        1         --         2          2          2
PSU Redundancy               No        Optional  Yes        Yes        Yes        Yes
Fan Redundancy               No        No        Yes        Yes        Yes        Yes
Integrated Gateway           --        Optional  --         Optional   Optional   --

Director Switches

                             SX6506     SX6512     CS7520     SX6518     CS7510     SX6536     CS7500
Ports                        108        216        216        324        324        648        648
Height                       6U         9U         12U        16U        16U        29U        28U
Switching Capacity           12.12Tb/s  24.24Tb/s  43.2Tb/s   36.36Tb/s  64.8Tb/s   72.52Tb/s  130Tb/s
Link Speed                   56Gb/s     56Gb/s     100Gb/s    56Gb/s     100Gb/s    56Gb/s     100Gb/s
Interface Type               QSFP+      QSFP+      QSFP28     QSFP+      QSFP28     QSFP+      QSFP28
Managed Nodes (Subnet Mgr)   648        648        2048       648        2048       648        2048
Management HA                Yes        Yes        Yes        Yes        Yes        Yes        Yes
Console Cables               Yes        Yes        Yes        Yes        Yes        Yes        Yes
Spine Modules                3          6          6          9          9          18         18
Leaf Modules (Max)           6          12         6          18         9          36         18
PSU Redundancy               Yes (N+N)  Yes (N+N)  Yes (N+N)  Yes (N+N)  Yes (N+N)  Yes (N+N)  Yes (N+N)
Fan Redundancy               Yes        Yes        Yes        Yes        Yes        Yes        Yes

FEATURE SUMMARY

HARDWARE
- 40-100Gb/s per port
- Full bisectional bandwidth to all ports
- IBTA 1.21 and 1.3 compliant
- QSFP connectors supporting passive and active cables
- Redundant auto-sensing 110/220VAC power supplies
- Per-port status LEDs: Link, Activity
- System, fan and PS status LEDs
- Hot-swappable replaceable fan trays

MANAGEMENT
- Mellanox Operating System (MLNX-OS): switch chassis management; embedded Subnet Manager; error, event and status notifications; Quality of Service based on traffic type and service levels
- Coupled with Mellanox Unified Fabric Manager (UFM): comprehensive fabric management; secure, remote configuration and management; performance/provisioning manager
- Fabric Inspector: cluster diagnostics tools for single-node, peer-to-peer and network verification

COMPLIANCE

SAFETY
- USA/Canada: cTUVus
- EU: IEC60950
- International: CB Scheme
- Russia: GOST-R
- Argentina: S-mark

EMC (EMISSIONS)
- USA: FCC, Class A
- Canada: ICES, Class A
- EU: EN55022, Class A; EN55024, Class A; EN61000-3-2, Class A; EN61000-3-3, Class A
- Japan: VCCI, Class A
- Australia: C-TICK

ENVIRONMENTAL
- EU: IEC 60068-2-64: Random Vibration
- EU: IEC 60068-2-29: Shocks, Type I / II
- EU: IEC 60068-2-32: Fall Test

OPERATING CONDITIONS
- Temperature: Operating 0°C to 45°C, Non-operating -40°C to 70°C
- Humidity: Operating 5% to 95%
- Altitude: Operating -60 to 2000m

ACOUSTIC
- ISO 7779
- ETS 300 753

OTHER
- RoHS-6 compliant
- 1-year warranty

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400  Fax: 408-970-3403  www.mellanox.com

Copyright 2016. Mellanox Technologies. All rights reserved. Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, IPtronics, Kotura, MLNX-OS, PhyX, SwitchX, UltraVOA, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined Storage, MetroX, MetroDX, Mellanox Open Ethernet, Mellanox Virtual Modular Switch, Open Ethernet, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

3531BR Rev 4.8