Paving The Road to Exascale Computing: HPC Technology Update
Transcription
1 Paving The Road to Exascale Computing: HPC Technology Update
Todd Wilde, Director of Technical Computing and HPC, Mellanox

Mellanox HPC Technology Accelerates the TOP500
- InfiniBand has become the de facto interconnect solution for High Performance Computing
- InfiniBand is the most used interconnect on the TOP500 list, with 224 systems
- FDR InfiniBand-connected systems doubled to 45 systems since June 2012
- Mellanox InfiniBand is the interconnect of choice for Petascale systems

Comprehensive End-to-End 10/40/56Gb/s Ethernet and 56Gb/s InfiniBand Portfolio
- ICs, adapter cards, switches/gateways, software, cables
- Scalability, reliability, power, performance
2 Mellanox InfiniBand Paves the Road to Exascale
PetaFlop systems: Mellanox Connected

The Mellanox Advantage
- Host/Fabric Software Management: UFM, MLNX-OS; integration with job schedulers; inbox drivers from major distributions
- Application Accelerations: collectives accelerations (FCA/CORE-Direct), GPU accelerations (RDMA for GPUDirect), MPI/SHMEM/PGAS, RDMA, Quality of Service
- Networking Efficiency/Scalability: Dynamically Connected Transport, Adaptive Routing, Congestion Management
- Server and Storage High-Speed Connectivity: latency, bandwidth, CPU utilization, message rate
3 FDR InfiniBand New Features and Capabilities
Performance / Scalability
- >100Gb/s bandwidth, <0.7usec latency
- PCI Express 3.0
- InfiniBand routing and IB-Ethernet bridging
Reliability / Efficiency
- Unbreakable Link Technology: Forward Error Correction, link quality auto-sensing, link-level retransmission
- 64/66 link bit encoding
- Lower power consumption

Virtual Protocol Interconnect (VPI) Technology
[Diagram: networking, storage, clustering and management applications running over acceleration engines, VPI adapters and VPI switches under the Unified Fabric Manager; Ethernet at 10/40 Gb/s, InfiniBand at 10/20/40/56 Gb/s; switch configurations of 64 ports 10GbE, 36 ports 40GbE, 48 10GbE/GbE, 36 ports IB up to 56Gb/s, 8 VPI subnets; LOM, adapter card and mezzanine card form factors]
4 FDR Application Benchmarks
- Computational Fluid Dynamics
- Oil and Gas Reservoir Simulation
- Molecular Dynamics
- Weather and Earth Sciences
Up to 32% ROI on equipment and operating costs

FDR/QDR InfiniBand Comparisons: Linpack Efficiency
[Chart: derived from the June 2012 TOP500 list; highest and lowest outlier removed from each group]
5 Roadmap of Interconnect Innovations
- InfiniHost: world's first InfiniBand HCA; 10Gb/s InfiniBand; PCI-X host interface; 1 million msg/sec
- InfiniHost III: world's first PCIe InfiniBand HCA; 20Gb/s InfiniBand; PCIe; ... million msg/sec
- ConnectX (1,2,3): world's first Virtual Protocol Interconnect (VPI) adapter; 40/56Gb/s InfiniBand; PCIe 2.0, 3.0 x8; 33 million msg/sec
- Connect-IB: built from the ground up for the Exascale foundation; >100 Gb/s InfiniBand; PCIe 3.0 x16; >135 million msg/sec

Connect-IB Performance Highlights
- World's first 100Gb/s InfiniBand interconnect adapter
- PCIe 3.0 x16, dual FDR 56Gb/s InfiniBand ports to provide >100Gb/s
- Highest InfiniBand message rate: 130 million messages per second, 4X higher than other InfiniBand solutions

Enter the World of Boundless Performance
6 Connect-IB Scalability Features
Inter-node Scalability (Scale Out)
- New innovative transport: the Dynamically Connected Transport service
- The new transport service combines the best of:
  - Unreliable Datagram (UD): no resource reservation
  - Reliable Connected service: transport reliability
- Scale out for unlimited clustering size of compute and storage
- Eliminates overhead and reduces memory footprint
Enter the World of Unlimited Scalability

Connect-IB Virtualization Enhancements
- Scale-out virtualization solution: full dual-port virtual HCA support for each guest VM
- SR-IOV with 256 Virtual Functions (VFs), 2X higher than previous solutions
- 1K egress QoS levels: guaranteed Quality of Service for VMs
- IB virtualization: embedded virtual IB leaf switch, LID/GID-based forwarding, up to 1K virtual ports
[Diagram: guest VMs, each with its own HCA driver, attached through the hypervisor to SR-IOV VFs backed by the physical function (PF) of the InfiniBand HCA]
Enter the World of Hardware Virtualization
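The memory-footprint argument for the Dynamically Connected Transport comes down to how many transport objects (queue pairs) a fully connected job would otherwise consume relative to what the adapter exposes. As a point of reference only (not taken from the slides), the sketch below uses the standard libibverbs API to read an adapter's resource limits; the program is illustrative and assumes a host with an RDMA device and the verbs library installed.

```c
/* Minimal, hedged sketch: query HCA resource limits with libibverbs.
 * Build (typical): gcc query_hca.c -o query_hca -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    if (!ctx) {
        fprintf(stderr, "Failed to open %s\n", ibv_get_device_name(dev_list[0]));
        ibv_free_device_list(dev_list);
        return 1;
    }

    struct ibv_device_attr attr;
    if (ibv_query_device(ctx, &attr) == 0) {
        printf("Device:             %s\n", ibv_get_device_name(dev_list[0]));
        printf("Max queue pairs:    %d\n", attr.max_qp);
        printf("Max CQ entries:     %d\n", attr.max_cqe);
        printf("Max memory regions: %d\n", attr.max_mr);
    }

    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```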
7 Connect-IB Memory Management Enhancements
- Extended atomics (byte argument)
- On Demand Paging: saves pinning of HCA-registered memory
- Derived data types (noncontiguous data elements):
  - Allows noncontiguous data elements to be grouped together in a single message
  - Eliminates unnecessary data copy operations and multiple I/O transactions
Hardware-Based Sophisticated Memory Operations

Mellanox ScalableHPC Accelerates Parallel Applications
- MXM: reliable messaging optimized for Mellanox HCAs; hybrid transport mechanism; efficient memory registration; receive-side tag matching
- FCA: topology-aware collective optimization; hardware multicast; separate virtual fabric for collectives; CORE-Direct hardware offload
- Layered over the InfiniBand Verbs API
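The noncontiguous-data capability maps naturally onto MPI derived datatypes, which describe scattered elements so they can travel as one message instead of being packed into a staging buffer. The sketch below is an illustration in standard MPI (not Connect-IB-specific code): it sends every other element of an array in a single message using a strided datatype. Run with at least two ranks, e.g. mpirun -np 2.

```c
/* Illustration: a strided (noncontiguous) layout described once as an MPI
 * derived datatype and sent as a single message, with no application-level
 * pack/copy step.
 */
#include <mpi.h>

#define N 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double buf[2 * N];
    for (int i = 0; i < 2 * N; i++)
        buf[i] = rank * 1000.0 + i;

    /* N blocks of 1 double with a stride of 2 doubles: the even-indexed elements. */
    MPI_Datatype strided;
    MPI_Type_vector(N, 1, 2, MPI_DOUBLE, &strided);
    MPI_Type_commit(&strided);

    if (size >= 2) {
        if (rank == 0)
            MPI_Send(buf, 1, strided, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, 1, strided, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&strided);
    MPI_Finalize();
    return 0;
}
```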
8 FCA Collective Performance with OpenMPI

Double Hadoop Performance with UDA (lower is better)
- Disk writes: 40%
- Disk reads: 15%
- CPU utilization: 2.5X
- ~2X faster job completion
*TeraSort is a popular benchmark used to measure the performance of a Hadoop cluster
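FCA plugs into Open MPI's collective component framework, so applications pick up the offloaded, topology-aware collectives without source changes. As a hedged illustration (plain MPI, not FCA-specific code), the call pattern that benefits looks like the allreduce below; any FCA enablement flags are a property of the MPI installation, not of this program.

```c
/* Illustrative MPI collective: an allreduce of per-rank partial sums. With an
 * FCA-enabled Open MPI build, this unmodified call can be executed by the
 * offloaded collective path.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;  /* each rank's partial result */
    double global = 0.0;

    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over %d ranks: %.1f\n", size, global);

    MPI_Finalize();
    return 0;
}
```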
9 GPUDirect RDMA for Fastest GPU-to-GPU Communications
- GPUDirect RDMA (previously known as GPUDirect 3.0)
- Enables peer-to-peer communication directly between the HCA and the GPU
- Dramatically reduces overall latency for GPU-to-GPU communications
[Diagram: two nodes, each with a CPU, system memory, a GPU with GDDR5 memory and a Mellanox HCA on PCI Express 3.0, connected over Mellanox VPI]
Most Efficient GPU Computing

Optimizing GPU and Accelerator Communications
- NVIDIA GPUs: NVIDIA and Mellanox were original partners in the co-development of GPUDirect 1.0; recently announced support of GPUDirect RDMA, a peer-to-peer GPU-to-HCA data path
- AMD GPUs: sharing of system memory (AMD DirectGMA Pinned) supported today; AMD DirectGMA P2P, a peer-to-peer GPU-to-HCA data path, under development
- Intel MIC: the MIC software development system enables the MIC to communicate directly over the InfiniBand verbs API to Mellanox devices
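In practice, applications usually reach the GPUDirect RDMA path through a CUDA-aware MPI library that accepts device pointers directly. The sketch below is illustrative rather than code from the presentation: it assumes an MPI build with CUDA support on GPUDirect-capable hardware, and it simply exchanges a GPU-resident buffer between two ranks without staging it in host memory.

```c
/* Illustrative CUDA-aware MPI exchange: device buffers are handed straight to
 * MPI. On GPUDirect RDMA-capable systems the HCA can read and write GPU
 * memory directly, avoiding a copy through host memory.
 * Run with at least two ranks, e.g. mpirun -np 2.
 */
#include <mpi.h>
#include <cuda_runtime.h>

#define COUNT (1 << 20)

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *d_buf = NULL;
    cudaMalloc((void **)&d_buf, COUNT * sizeof(float));
    cudaMemset(d_buf, 0, COUNT * sizeof(float));

    /* Rank 0 sends its device buffer; rank 1 receives into its device buffer. */
    if (rank == 0)
        MPI_Send(d_buf, COUNT, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, COUNT, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```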
10 MetroX: Bringing InfiniBand to Campus-Wide Networks
- Extends InfiniBand across campus/metro distances
- Low cost, low power
- 40Gb/s FDR-10 InfiniBand links
- RDMA over distant sites
- Supports up to 10km over dark fiber
- QSFP to SMF LC connector module
[Diagram: multiple InfiniBand subnets interconnected over long-haul MetroX links]

UFM: Comprehensive, Robust Management
- Automatic discovery
- Central device management
- Fabric dashboard
- Fabric health reports
- Service-oriented provisioning
- Health & performance monitoring
- Congestion analysis
11 Mellanox HPC: Paving the Way to Exascale Computing
- ScalableHPC
- Ultimate scalability with Connect-IB: 100Gb/s throughput to the network, over 130 million messages/second, Dynamically Connected Transport service for unlimited inter-node scaling
- Highest-performing interconnect, accelerating GPU communications
- <0.7usec latency, 56Gb/s throughput, higher scalability, maximum reliability

Thank You
