InfiniBand Switch System Family. Highest Levels of Scalability, Simplified Network Manageability, Maximum System Productivity

Mellanox continues its leadership by providing InfiniBand SDN Switch Systems, the highest-performing interconnect solution for Web 2.0, Big Data, Cloud Computing, Enterprise Data Centers (EDC) and High-Performance Computing (HPC).

VALUE PROPOSITIONS
- Mellanox switches come in port configurations from 8 to 648 ports at speeds of up to 56Gb/s per port, with the ability to build clusters that scale out to tens of thousands of nodes.
- Mellanox Unified Fabric Manager (UFM) ensures optimal cluster and data center performance with high availability and reliability.
- Support for Software Defined Networking (SDN).
- Mellanox switches deliver high bandwidth with low latency for the highest server efficiency and application productivity.
- Best price/performance solution with error-free 40-56Gb/s performance.
- Mellanox switches support Virtual Protocol Interconnect (VPI), allowing them to run seamlessly over both InfiniBand and Ethernet.
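To give a sense of what these port counts mean at the fabric level, the short sketch below applies the standard full-bisection fat-tree capacity formula, 2 x (ports/2)^tiers, to 36-port edge switches. The formula is not taken from this datasheet; it is an editorial sanity check that happens to reproduce the 648-node figure quoted for the embedded Subnet Manager and shows how a third switching tier reaches the tens-of-thousands range.

```python
# Sketch (not from the datasheet): maximum end nodes of a non-blocking
# fat tree built from k-port switches is 2 * (k/2)^tiers.

def fat_tree_nodes(ports_per_switch: int, tiers: int) -> int:
    """Maximum end nodes in a full-bisection fat tree of the given depth."""
    return 2 * (ports_per_switch // 2) ** tiers

for tiers in (2, 3):
    nodes = fat_tree_nodes(36, tiers)  # 36-port edge switches, as in the tables below
    print(f"{tiers}-tier fat tree of 36-port switches: {nodes} nodes")

# Expected output:
# 2-tier fat tree of 36-port switches: 648 nodes
# 3-tier fat tree of 36-port switches: 11664 nodes
```

Using 648-port director switches in the core pushes the practical limit well beyond the three-tier figure above, which is what makes "tens of thousands of nodes" achievable with relatively few switching tiers.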

Mellanox's family of InfiniBand switches delivers the highest performance and port density with a complete chassis and fabric management solution, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The Mellanox family of switches includes a broad portfolio of Edge and Director switches that range from 8 to 648 ports and support 40-56Gb/s per port with the lowest latency. These switches allow IT managers to build the most cost-effective and scalable switch fabrics, ranging from small clusters up to tens of thousands of nodes.

Virtual Protocol Interconnect (VPI)
Virtual Protocol Interconnect (VPI) flexibility enables any standard networking, clustering, storage, and management protocol to operate seamlessly over any converged network leveraging a consolidated software stack. VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

Why Software Defined Networking (SDN)?
Data center networks have become exceedingly complex. IT managers cannot optimize the networks for their applications, which leads to high CAPEX/OPEX, low ROI and IT headaches. Mellanox InfiniBand SDN switches ensure separation between the control and data planes. InfiniBand enables centralized management and a centralized view of the network, programmability of the network by external applications, and a cost-effective, simple and flat interconnect infrastructure.

BENEFITS
- Built with Mellanox's InfiniScale and SwitchX switch silicon
- Industry-leading energy efficiency, density, and cost savings
- Ultra-low latency
- Granular QoS for cluster, LAN and SAN traffic
- Quick and easy setup and management
- Maximizes performance by removing fabric congestion
- Fabric management for cluster and converged I/O applications

Edge Switches: 8 to 36-port Non-blocking 40Gb/s and 56Gb/s InfiniBand Switch Systems
The Mellanox family of edge switch systems provides the highest-performing fabric solutions in a 1U form factor, delivering up to 4Tb/s of non-blocking bandwidth with the lowest port-to-port latency. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium sized clusters. The edge switches, offered as externally managed or as managed switches, are designed to build the most efficient switch fabrics through the use of advanced InfiniBand switching technologies such as Adaptive Routing, Congestion Control and Quality of Service.

Director Switches: 108 to 648-port Non-blocking 40Gb/s and 56Gb/s InfiniBand Switch Systems
Mellanox director switches provide the highest-density switching solution, scaling from 8.64Tb/s up to 72.52Tb/s of bandwidth in a single enclosure, with low latency and the highest per-port speeds of up to 56Gb/s. Their smart design provides unprecedented levels of performance and makes it easy to build clusters that can scale out to thousands of nodes. The InfiniBand director switches deliver the director-class availability required for mission-critical application environments. The leaf and spine blades and management modules, as well as the power supplies and fan units, are all hot-swappable to help eliminate downtime.

Sustained Network Performance
The Mellanox switch family enables efficient computing for clusters of all sizes, from the very small to the extremely large, while offering near-linear scaling in performance. Advanced features such as static routing, adaptive routing, and congestion management allow the switch fabric to dynamically detect and avoid congestion and to re-route around points of congestion. These features ensure the maximum effective fabric performance under all types of traffic conditions.

Reduce Complexity
Mellanox switches reduce complexity by providing seamless connectivity between InfiniBand, Ethernet and Fibre Channel based networks. You no longer need separate network technologies with multiple network adapters to operate your data center fabric. Granular QoS and guaranteed bandwidth allocation can be applied per traffic type, ensuring that each type of traffic has the resources needed to sustain the highest application performance.

Reduce Environmental Costs
Improved application efficiency, along with the need for fewer network adapters, allows you to accomplish the same amount of work with fewer, more cost-effective servers. Improved cooling mechanisms and reduced power consumption and heat allow data centers to reduce the costs associated with physical space.
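The bandwidth figures quoted here and in the product tables that follow are consistent with a simple rule: aggregate switching capacity is port count times per-port link speed, counted in both directions on a full-duplex, non-blocking switch. The sketch below is an editorial sanity check, not part of the original specification; the datasheet rounds the largest chassis figure slightly differently.

```python
# Sanity check (not from the datasheet): switching capacity =
# ports x per-port link speed x 2 (full-duplex traffic counted both ways).

def switching_capacity_tbps(ports: int, link_speed_gbps: int) -> float:
    """Aggregate non-blocking capacity in Tb/s."""
    return ports * link_speed_gbps * 2 / 1000.0

examples = {
    "SX6025 (36 x 56Gb/s edge)":      (36, 56),   # -> 4.032 Tb/s, as listed
    "IS5100 (108 x 40Gb/s director)": (108, 40),  # -> 8.64 Tb/s, as listed
    "SX6536 (648 x 56Gb/s director)": (648, 56),  # -> 72.576 Tb/s; listed as 72.52 Tb/s
}

for name, (ports, speed) in examples.items():
    print(f"{name}: {switching_capacity_tbps(ports, speed):.3f} Tb/s")
```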

Enhanced Management Capabilities
Mellanox managed switches come with an onboard Subnet Manager, enabling simple, out-of-the-box fabric bring-up for up to 648 nodes. Mellanox FabricIT (IS5000 family) or MLNX-OS (SX6000 family) chassis management provides administrative tools to manage the firmware, power supplies, fans, ports, and other interfaces. All Mellanox switches can also be coupled with Mellanox's Unified Fabric Manager (UFM) software for managing scale-out InfiniBand computing environments. UFM enables data center operators to efficiently provision, monitor and operate the modern data center fabric. UFM boosts application performance and ensures that the fabric is up and running at all times. MLNX-OS provides a license-activated embedded diagnostic tool, Fabric Inspector, to check node-to-node and node-to-switch connectivity and to ensure fabric health.
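As a minimal illustration of host-side fabric verification, the sketch below uses the standard open-source infiniband-diags utilities (ibswitches and ibhosts), not the Fabric Inspector or UFM interfaces described above. It assumes those tools are installed and that the host has an active InfiniBand port, and that each tool prints one discovered node per line.

```python
# Minimal sketch of a host-side fabric inventory check using the standard
# infiniband-diags tools. Assumptions: ibswitches/ibhosts are installed,
# the host has an active InfiniBand port, and each tool prints one node per line.
import subprocess

def count_fabric_nodes(tool: str) -> int:
    """Run an infiniband-diags discovery tool and count the nodes it reports."""
    out = subprocess.run([tool], capture_output=True, text=True, check=True)
    return len([line for line in out.stdout.splitlines() if line.strip()])

if __name__ == "__main__":
    switches = count_fabric_nodes("ibswitches")
    hosts = count_fabric_nodes("ibhosts")
    print(f"Fabric contains {switches} switches and {hosts} host channel adapters")
```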

Edge Switches

Model              | IS5022  | IS5023   | IS5024   | IS5025   | SX6025
Ports              | 8       | 18       | 36       | 36       | 36
Height             | 1U      | 1U       | 1U       | 1U       | 1U
Switching Capacity | 640Gb/s | 1.44Tb/s | 2.88Tb/s | 2.88Tb/s | 4.032Tb/s
Link Speed         | 40Gb/s  | 40Gb/s   | 40Gb/s   | 40Gb/s   | 56Gb/s
Interface Type     | QSFP    | QSFP     | QSFP     | QSFP     | QSFP+
Management         | No      | No       | No       | No       | No
PSU Redundancy     | No      | No       | No       | Yes      | Yes
Fan Redundancy     | No      | No       | No       | Yes      | Yes
Integrated Gateway | –       | –        | –        | –        | –

Model              | SX6005  | SX6012    | SX6015    | SX6018    | IS5030    | IS5035    | 4036      | 4036E      | SX6036
Ports              | 12      | 12        | 18        | 18        | 36        | 36        | 36        | 34 + 2 Eth | 36
Height             | 1U      | 1U        | 1U        | 1U        | 1U        | 1U        | 1U        | 1U         | 1U
Switching Capacity | 1.3Tb/s | 1.3Tb/s   | 2.016Tb/s | 2.016Tb/s | 2.88Tb/s  | 2.88Tb/s  | 2.88Tb/s  | 2.72Tb/s   | 4.032Tb/s
Link Speed         | 56Gb/s  | 56Gb/s    | 56Gb/s    | 56Gb/s    | 40Gb/s    | 40Gb/s    | 40Gb/s    | 40Gb/s     | 56Gb/s
Interface Type     | QSFP+   | QSFP+     | QSFP+     | QSFP+     | QSFP      | QSFP      | QSFP      | QSFP       | QSFP+
Management         | No      | Yes       | No        | Yes       | Yes       | Yes       | Yes       | Yes        | Yes
Subnet Manager     | –       | 648 nodes | No        | 648 nodes | 108 nodes | 648 nodes | 648 nodes | 648 nodes  | 648 nodes
Management Ports   | –       | 1         | –         | 2         | 1         | 2         | 1         | 1          | 2
PSU Redundancy     | No      | Optional  | Yes       | Yes       | Yes       | Yes       | Yes       | Yes        | Yes
Fan Redundancy     | No      | No        | Yes       | Yes       | Yes       | Yes       | Yes       | Yes        | Yes
Integrated Gateway | –       | Optional  | –         | Optional  | –         | –         | –         | Yes        | Optional

Director Switches

Model              | IS5100    | SX6506    | IS5200    | SX6512    | IS5300    | SX6518    | IS5600    | SX6536    | 4200      | 4700
Ports              | 108       | 108       | 216       | 216       | 324       | 324       | 648       | 648       | 144/162   | 324 (648 HS)
Height             | 6U        | 6U        | 9U        | 9U        | 16U       | 16U       | 29U       | 29U       | 11U       | 19U
Switching Capacity | 8.64Tb/s  | 12.12Tb/s | 17.28Tb/s | 24.24Tb/s | 25.9Tb/s  | 36.36Tb/s | 51.8Tb/s  | 72.52Tb/s | 11.52Tb/s | 25.92Tb/s (51.8Tb/s)
Link Speed         | 40Gb/s    | 56Gb/s    | 40Gb/s    | 56Gb/s    | 40Gb/s    | 56Gb/s    | 40Gb/s    | 56Gb/s    | 40Gb/s    | 40Gb/s
Interface Type     | QSFP      | QSFP+     | QSFP      | QSFP+     | QSFP      | QSFP+     | QSFP      | QSFP+     | QSFP      | QSFP
Management         | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes
Management HA      | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes
Console Cables     | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes
Spine Modules      | 3         | 3         | 6         | 6         | 9         | 9         | 18        | 18        | 4         | 9
Leaf Modules (Max) | 6         | 6         | 12        | 12        | 18        | 18        | 36        | 36        | 9         | 18
PSU Redundancy     | Yes (N+1) | Yes (N+N) | Yes (N+1) | Yes (N+N) | Yes (N+2) | Yes (N+N) | Yes (N+2) | Yes (N+N) | Yes (N+N) | Yes (N+N)
Fan Redundancy     | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes

FEATURE SUMMARY

HARDWARE
- 40-56Gb/s per port
- Full bisectional bandwidth to all ports
- IBTA 1.21 and 1.3 compliant
- QSFP connectors supporting passive and active cables
- Redundant auto-sensing 110/220VAC power supplies
- Per-port status LEDs: Link, Activity
- System, fan and PS status LEDs
- Hot-swappable replaceable fan trays

MANAGEMENT
- Mellanox Operating System (MLNX-OS/FabricIT)
- Switch chassis management
- Embedded Subnet Manager (648 nodes)
- Error, event and status notifications
- Quality of Service based on traffic type and service levels
- Coupled with Mellanox Unified Fabric Manager (UFM): comprehensive fabric management; secure, remote configuration and management; performance/provisioning manager
- Fabric Inspector: cluster diagnostics tools for single-node, peer-to-peer and network verification

COMPLIANCE

SAFETY
- USA/Canada: cTUVus
- EU: IEC60950
- International: CB Scheme
- Russia: GOST-R
- Argentina: S-mark

EMC (EMISSIONS)
- USA: FCC, Class A
- Canada: ICES, Class A
- EU: EN55022, Class A
- EU: EN55024, Class A
- EU: EN61000-3-2, Class A
- EU: EN61000-3-3, Class A
- Japan: VCCI, Class A
- Australia: C-TICK

ENVIRONMENTAL
- EU: IEC 60068-2-64: Random Vibration
- EU: IEC 60068-2-29: Shocks, Type I / II
- EU: IEC 60068-2-32: Fall Test

OPERATING CONDITIONS
- Temperature: Operating 0°C to 45°C; Non-operating -40°C to 70°C
- Humidity: Operating 5% to 95%
- Altitude: Operating -60m to 2000m

ACOUSTIC
- ISO 7779
- ETS 300 753

OTHERS
- RoHS-6 compliant
- 1-year warranty

Computer GmbH, Carl-Zeiss-Str. 27-29, D-73230 Kirchheim/Teck, Tel. +49 (0)7021-487 200, Fax +49 (0)7021-487 400, www.starline.de

Copyright 2012. Mellanox Technologies. All rights reserved. Mellanox, Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT, MLNX-OS, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 3531BR Rev 4.1