Mellanox Technologies




Mellanox Technologies - May 2011. Yossi Avni, VP Sales, EMEA

Connectivity Solutions for Efficient Computing: Cloud Computing, High-Performance Computing, Enterprise and Storage. Leading connectivity solution provider for servers and storage, providing users best-in-class networking bandwidth, application performance and response time.

Company Overview (ticker: MLNX)
- Leading connectivity solutions provider for data center servers and storage systems; foundation for the world's most powerful and energy-efficient systems
- More than 7M ports shipped to date; 10 tapeouts went to production on 1st pass; recent awards
- Company headquarters: Yokneam, Israel and Sunnyvale, California; ~700 employees; worldwide sales & support
- Solid financial position: 2009: $116M, 2010: >$150M, 2011: >$0.25B; strong gross margin; cash in the bank, no debt
- Completed acquisition of Voltaire, Ltd.

Combining Best-in-Class Systems Knowledge and Software with Best-in-Class Silicon
- Mellanox brings: InfiniBand and 10GbE silicon technology & roadmap, adapter leadership, advanced HW features, end-to-end experience, strong OEM engagements (InfiniScale & ConnectX HCA and switch silicon; HCA adapters, FIT; scalable switch systems; Dell, HP, IBM and Oracle)
- Voltaire brings: InfiniBand and 10GbE switch systems experience, IB switch market share leadership, end-to-end SW & systems solutions, strong enterprise customer engagements (Grid Directors & software; UFM fabric management SW; application acceleration SW; enterprise-class switches; HP, IBM)
- Combined entity, InfiniBand: silicon, adapters and systems leadership; IB market share leadership; full service offering; strong customer and OEM engagements; InfiniBand market leader with end-to-end silicon, systems and software solutions; FDR/EDR roadmap; application acceleration and fabric management software; full OEM coverage
- Combined entity, Ethernet/VPI: 10GbE and 40GbE adapters; highest-performance Ethernet silicon; 10GbE LOM and mezzanine adapters at Dell, HP and IBM; 10GbE Vantage switches & SW; UFM fabric management SW; application acceleration SW; 24-, 48- and 288-port 10GbE switches; Ethernet innovator with end-to-end silicon, systems and software solutions; 10GbE, 40GbE and 100GbE roadmap; strong OEM coverage

Leading end-to-end connectivity solution provider for servers and storage systems. [Diagram: server/compute, switch/gateway and storage front/back-end tiers connected over Virtual Protocol Interconnect - 40G InfiniBand & FCoIB, 10/40GigE & FCoE, Fibre Channel.] The industry's only end-to-end InfiniBand and Ethernet portfolio: ICs, adapter cards, host/fabric software, switches/gateways and cables.

Mellanox in the TOP500 [chart: number of InfiniBand clusters in the Top500 - 142 in Nov 08, 182 in Nov 09, 215 in Nov 10]
- Mellanox InfiniBand builds the most powerful clusters: connects 4 of the Top 10 and 61 systems in the Top 100
- InfiniBand represents 43% of the TOP500; 98% of the InfiniBand clusters use Mellanox solutions
- Mellanox InfiniBand enables the highest utilization on the TOP500, up to 96% system utilization
- Mellanox 10GigE is the highest-ranked 10GigE system (#126)

Sun Oracle Database Machine (Exadata 2)
- Mellanox 40Gb/s InfiniBand-accelerated rack servers and storage; servers run Oracle 11g
- Solves the I/O bottleneck between database servers and storage servers
- At least 10X Oracle data warehousing query performance; faster access to critical business information

Connecting the data center ecosystem: hardware OEMs (servers, storage, embedded), software partners, and end users (enterprise data centers, high-performance computing, embedded).

Mellanox - The Interconnect Company: full support for both InfiniBand and Ethernet.

Flexibility / Consolidation: Virtual Protocol Interconnect (VPI)
- Broad OS / virtualization support; strong software ecosystem foundation
- Consolidation with extensive connectivity options and features
- Cost-effective convergence over InfiniBand/FCoIB or Ethernet/FCoE
- Acceleration engines for networking, cluster, storage, virtualization and RDMA protocols
- Performance: application acceleration, PCIe 2.0, low latency, high bandwidth
- On-demand performance and services over 40Gb InfiniBand/FCoIB or 40GbE/FCoE
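To make the VPI idea concrete, here is a minimal C sketch, assuming the mlx4 driver's sysfs interface and a placeholder PCI address, that reads a ConnectX port's current personality and then repurposes it from InfiniBand to Ethernet. Exact paths, accepted values and required privileges vary by driver release, so treat this as illustrative rather than a supported procedure.

```c
/*
 * Minimal sketch: reading (and optionally switching) the protocol of a
 * ConnectX VPI port through the mlx4 driver's sysfs interface.
 * The PCI address below is a placeholder; the path and the accepted
 * values ("ib", "eth", "auto") depend on the driver release in use.
 */
#include <stdio.h>

#define PORT1_TYPE "/sys/bus/pci/devices/0000:03:00.0/mlx4_port1" /* placeholder BDF */

int main(void)
{
    char cur[16] = {0};
    FILE *f = fopen(PORT1_TYPE, "r");
    if (!f) { perror("open port type"); return 1; }
    if (fgets(cur, sizeof(cur), f))
        printf("port 1 currently runs as: %s", cur);   /* "ib" or "eth" */
    fclose(f);

    /* To repurpose the same physical port for Ethernet, a privileged
     * process would write the new personality back: */
    f = fopen(PORT1_TYPE, "w");
    if (f) { fputs("eth\n", f); fclose(f); }
    return 0;
}
```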

Expanding End-to-End Ethernet Leadership
- Industry's highest performing Ethernet NIC: 10/40GigE with FCoE hardware offload
- Ethernet industry's lowest end-to-end latency (1.3μs): faster application completion, better server utilization
- Tremendous ecosystem support momentum: multiple Tier-1 OEM design wins (Dell, IBM, HP) across servers, LAN on Motherboard (LOM) and storage systems; comprehensive OS support (VMware, Citrix, Windows, Linux)
- High-capacity, low-latency 10GigE switches: 24 to 288 ports with 600-1200ns latency, sold through multiple Tier-1 OEMs (IBM, HP); consolidation over shared fabrics
- Integrated, complete management offering: service-oriented infrastructure management with open APIs
- Upcoming low-latency 40GigE switches

Award-Winning End-to-End InfiniBand
- Adapter market and performance leadership: first to market with 40Gb/s (QDR) adapters, roadmap to end-to-end 56Gb/s (FDR) in 2011, delivers next-generation application efficiency capabilities
- Global Tier-1 server and storage availability: Bull, Dawning, Dell, Fujitsu, HP, IBM, Oracle, SGI, T-Platforms
- Comprehensive, performance-leading switch family: industry's highest density and scalability, world's lowest port-to-port latency (25-50% lower than competitors)
- Comprehensive and feature-rich management/acceleration software: enhances application performance and network ease-of-use
- High-performance converged I/O gateways: optimal scaling, consolidation and energy efficiency; lower space and power, higher application performance
- Copper and fiber cables: exceed IBTA mechanical & electrical standards for ultimate reliability and signal integrity

Mellanox Solutions
- VPI (Virtual Protocol Interconnect) supporting both InfiniBand and Ethernet
- Silicon: PhyX, CX, BX, SX
- Adapters: HCAs for InfiniBand, NICs for Ethernet
- Ethernet & InfiniBand gateways and switches
- SW solutions: UFM and acceleration packages
- Cables

Breadth and Leadership: 10 and 40Gbps Ethernet NICs (Adapters)
- Industry firsts: dual-port PCIe 2.0 10GigE adapters; 10/40GigE with FCoE HW offload
- Lowest latency in the Ethernet industry: 1.3μs end-to-end, enabling faster application completion and better server utilization
- Tremendous ecosystem support momentum: multiple Tier-1 OEM design wins, ~10% worldwide market share and growing, across servers, LAN on Motherboard (LOM) and storage systems
- OS support: VMware vSphere 4 Ready, Citrix XenServer 4.1 in-the-box support, Windows Server 2003 & 2008, Red Hat 5, SLES 11

Industry-Leading InfiniBand HCAs (Adapters)
- InfiniBand market and performance leadership: first to market with 40Gb/s adapters; 56Gb/s in 2H11; 100Gb/s adapters on the roadmap for 2012
- Critical next-generation efficiency capabilities: GPUDirect (improved application performance on GPU-based clusters) and CORE-Direct (in-hardware application communications offload, MPI collective offload in the HCAs)
- Strong industry adoption of 40Gb/s InfiniBand: ~73% of revenue, close to 100% of IB-connected Top500 systems, worldwide Tier-1 server OEM availability
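As a small illustration of how software sees these HCAs, the following C sketch uses the standard libibverbs API (generic verbs calls, not Mellanox-specific code) to enumerate the installed adapters and query port 1. It is a minimal example rather than production code; error handling is kept to a minimum.

```c
/* Minimal libibverbs sketch (link with -libverbs): list the HCAs visible
 * to the host and print the state, width and speed encoding of port 1. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num, i;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) { perror("ibv_get_device_list"); return 1; }

    for (i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        struct ibv_port_attr port;
        if (!ctx)
            continue;
        if (!ibv_query_port(ctx, 1, &port))
            printf("%s: port 1 state=%d, active_width=%d, active_speed=%d\n",
                   ibv_get_device_name(devs[i]),
                   port.state, port.active_width, port.active_speed);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```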

ConnectX-2 Adapter Products (all PCIe 2.0, 5.0GT/s, 40Gb/s)
VPI adapters - features: hardware-based transport/application offloads, RDMA, GPU communication acceleration, QoS and congestion control; OS support: RHEL, SLES, Windows, ESX:
- MHGH29B-XTR: IB/CX4 connector, 20Gb/s; typ. power (both ports) 6.7W
- MHRH19B-XTR: QSFP, 20Gb/s; typ. power 7W
- MHQH19B-XTR: QSFP, 40Gb/s; typ. power 7.7W
- MHRH29B-XTR: QSFP, 20Gb/s; typ. power (both ports) 8W
- MHQH29B-XTR: QSFP, 40Gb/s; typ. power (both ports) 8.7W
- MHZH29-XTR: QSFP (40Gb/s) + SFP+ (10Gb/s); typ. power 8W
Ethernet-only adapters - features: stateless offload, FCoE offload, SR-IOV; OS support: RHEL, SLES, CentOS, Windows, ESX, XenServer:
- MNPH29C-XTR: SFP+, DAC or optical module; typ. power (both ports) 6.4W (Cu), 8.4W (-SR)
- MNQH19-XTR: QSFP, copper or AOC; typ. power 7W

High-Performance Switches and Gateways
- InfiniBand switches: InfiniBand market leader; broad solution from 8 to 648 ports; industry's highest-density switches at 51.8Tb/s; comprehensive and feature-rich fabric management SW; world's lowest port-to-port latency; advanced networking features (congestion management, QoS, dynamic routing)
- BridgeX high-performance converged I/O gateways: bridging from IB to Ethernet (EoIB) and from IB to Fibre Channel (FCoIB); EoIB for Linux-based hosts is GA, FCoIB support is upcoming

40Gb/s InfiniBand Switch System Portfolio
- IS5025 - 36-port switch, unmanaged (externally managed): host subnet manager based on MLNX_OFED; for cost-conscious HPC/EDC customers with their own management software
- IS5030 - 36-port switch with chassis management: fabric management for small clusters (up to 108 nodes); low-cost entry-level managed switch
- IS5035 - 36-port switch, fully managed: fabric management for large clusters (up to 648 nodes); comes with FabricIT integrated without a license
- IS5x00 - modular chassis systems with up to 648 ports: designed for large- to peta-scale computing; redundant components for high availability

Lowest-Cost 40Gb/s Switch System Portfolio
- IS5022 - 8-port switch, unmanaged, fixed (no FRUs)
- IS5023 - 18-port switch, unmanaged, fixed (no FRUs)
- IS5024 - 36-port switch, unmanaged, fixed (no FRUs)
All three are InfiniScale IV QDR InfiniBand switches (8, 18 or 36 QSFP ports, one power supply, unmanaged, connector-side airflow exhaust, no FRUs, rack rails included, short-depth form factor, RoHS 6), aimed at the most cost-conscious HPC/EDC customers with their own management software.

High-Performance Ethernet Switches - Vantage 10GbE Portfolio
- Vantage 6024: Layer 2/3 low-latency switch, 24 ports, managed as a single logical switch
- Vantage 6048: Layer 2/3 cloud switch, 48 ports
- Vantage 8500: Layer 2 core switch, 288 ports

IB Gateway System: BX5020
- Uplinks: 4 QSFP (IB); downlinks: 16 SFP+ ports, usable as 12 x 1/10GigE, 16 x 1/2/4/8Gb/s FC, or a combination of EN and FC ports
- Requires CX/CX2 HCAs with EoIB/FCoIB drivers; EoIB enabled by default, separate license needed for FCoIB
- EoIB: GA April 2011; FCoIB: beta now
- Dual hot-swappable redundant power supplies; replaceable fan drawer
- Embedded management: PowerPC CPU, GigE and RS232 out-of-band management ports
- Best fit for HPC and Linux environments

Grid Director 4036E: 34-port switch + gateway, 1U high
- Based on the Voltaire Grid Director 4036 plus Voltaire IB-ETH bridge silicon
- 34 x QDR/DDR/SDR (auto-negotiating) InfiniBand ports (QSFP); 2 x 1/10GbE ports (SFP+)
- Out-of-band Ethernet and serial ports for management
- Redundant hot-swappable power supplies; shared FRUs with the 4036
- Best fit for FSI (financial services)

UFM: Unified Fabric Manager
- End-to-end fabric management of switches and nodes; supports the entire Mellanox portfolio, InfiniBand and Ethernet
- Provides deep visibility: real-time and historical monitoring of fabric health and performance; central fabric dashboard and fabric-wide congestion map
- Optimizes performance: QoS, Traffic-Aware Routing Algorithm (TARA), multicast routing optimization
- Eliminates complexity: one pane of glass to monitor and configure fabrics of thousands of nodes; enables advanced features like segmentation and QoS by automating provisioning; abstracts the physical layer into logical entities such as jobs and resource groups
- Maximizes fabric utilization: threshold-based alerts to quickly identify fabric faults; performance optimization for maximum link utilization; open architecture for integration with other tools, in-context actions and fabric database

Mellanox Acceleration Packages: messaging acceleration (FSI), iSCSI block storage acceleration (various markets), and MPI collectives acceleration (HPC).

What is VMA?
Key features:
- Multicast message acceleration using OS bypass
- A BSD-socket-compliant dynamically linked library; seamlessly supports any socket-based application
- Scales well to hundreds of nodes and thousands of subscribers
Use cases:
- Enterprise low-latency messaging
- Exchange ingress/egress connection acceleration
- Low-latency trading for hedge funds / banks
- Publisher-subscriber (pub-sub) applications
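Because VMA presents itself as a drop-in, socket-compliant library, an ordinary multicast receiver like the C sketch below is the kind of unmodified application it targets. The LD_PRELOAD invocation in the comment is indicative only (the exact library name and tuning flags should be taken from the VMA documentation), and the multicast group and port are placeholders.

```c
/*
 * Plain BSD-socket multicast receiver. Nothing VMA-specific appears here;
 * an unmodified binary like this is what the preloaded library accelerates,
 * e.g. something along the lines of:
 *   LD_PRELOAD=libvma.so ./mcast_rx      (indicative, see VMA docs)
 */
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    struct ip_mreq mreq;
    char buf[2048];

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(15000);                        /* placeholder port  */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    mreq.imr_multiaddr.s_addr = inet_addr("224.1.1.1");  /* placeholder group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    for (;;) {                                           /* receive loop */
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n < 0)
            break;
        printf("got %zd bytes\n", n);
    }
    close(fd);
    return 0;
}
```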

What is FCA?
- Mellanox Fabric Collectives Accelerator (FCA): a host software library that plugs into the MPI library
- Uses topology- and device-aware algorithms to make collective computation run faster
- Offloads collective computations to hardware engines: CORE-Direct when applicable, the switch iCPU when applicable
- Provides a simple interface for 3rd-party MPIs; no need to change the application
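The C sketch below shows the kind of MPI collective FCA is designed to speed up: the application calls a standard MPI_Allreduce and an FCA-enabled MPI library decides whether to offload it, so nothing FCA-specific appears in the source. How FCA is switched on (environment variables or MCA parameters) depends on the MPI distribution and is not shown here.

```c
/* Minimal MPI collective in C. With an FCA-enabled MPI library, the
 * MPI_Allreduce below can be offloaded to the fabric (CORE-Direct on the
 * HCA or the switch iCPU) without changing the application source. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its rank number; the collective reduction is
     * the operation an FCA-enabled MPI would accelerate. */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks across %d processes: %d\n", size, sum);

    MPI_Finalize();
    return 0;
}
```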

What is VSA?
- RDMA storage target software: iSCSI over RDMA (iSER) target for the data path; enhanced target management (CLI and GUI)
- Scalability: multiple boxes scale linearly, managed as one virtual cluster (with HA & load balancing)
- Choice of transport: InfiniBand (iSER) or 10GigE iSCSI (RoCE coming next)
- Use cases: persistent messaging for financial services, fastest storage for database applications, fast cache for cloud computing, real-time billing in telecommunications

Copper and Fiber Cable Solutions
- Complete portfolio: passive copper up to 8 meters, active copper 8-12 meters, assembled optical up to 300 meters
- Full validation: exceeds IBTA mechanical & electrical standards; verification on Mellanox switches and HCAs; interoperability and signal integrity; qualification and quality assurance program
- QSFP-to-SFP+ Adapter (QSA): the world's first solution to the QSFP-to-SFP+ conversion challenge; smooth, cost-effective connections between 10GigE, 40GigE and 40Gb/s InfiniBand; future-proofs and enhances the 40Gb/s ecosystem today

InfiniBand for Fabric Consolidation
- High-bandwidth pipe for capacity provisioning
- Dedicated I/O channels enable convergence for networking, storage and management, with application compatibility
- QoS differentiates traffic types
- Partitions provide logical fabrics and isolation
- Flexibility: soft servers / fabric repurposing

Summary and Mellanox Worldwide Support

Complete End-to-End Connectivity
- Host/fabric software and management: UFM, FabricIT; integration with job schedulers; inbox drivers
- Application accelerations: collectives acceleration (FCA/CORE-Direct), GPU acceleration (GPUDirect), MPI/SHMEM, RDMA, quality of service
- Networking efficiency/scalability: adaptive routing, congestion management, traffic-aware routing (TARA)
- Server and storage high-speed connectivity: latency, bandwidth, CPU utilization, message rate

Efficient Scale-Out Data Center from End to End
- Ease of use / management: monitor/analyze, optimize, troubleshoot
- CPU efficiency & accelerated application performance: low-latency sockets & RDMA, CORE-Direct, GPUDirect, software acceleration
- Deterministic network at scale: adaptive routing, congestion control, quality of service
- Reliable connection of servers and storage
Results: 60% infrastructure reduction, 65% energy cost reduction, 10X performance increase

Mellanox M-1 Support Services: Expert Cluster Service and Support
- Service level agreements: Bronze, Silver, Gold
- Professional services: cluster design for every cluster size; best-practices review for cluster bring-up, installation and maintenance; fabric management configuration and validation assistance; benchmarking assistance and performance tuning optimizations; assistance in bringing up any InfiniBand-attached storage; best-practices review, bring-up and optimization of Mellanox gateway products for bridging to Ethernet or Fibre Channel systems
- Online customer support portal: complete case management (create, escalate, track, notify); easy access to documentation, SW/FW

Appendix - TOP500 November 2010 Mellanox References

Interconnect Trends in the Top500 [line chart]
- InfiniBand is the only growing high-speed standard clustering interconnect: 216 systems on the Nov 2010 list, a 19% increase since Nov 2009
- InfiniBand is the HPC interconnect of choice, connecting 43.2% of the Top500 systems
- Over 98% of the InfiniBand-connected systems use Mellanox technology and solutions

InfiniBand/Mellanox in the Top500
- InfiniBand is the only growing standard interconnect technology: 216 clusters, 43.2% of the list, a 19% increase since Nov 2009 (YoY)
- Ethernet shows a decline in number of systems (-13% YoY); clustered proprietary interconnects decline as well (-40% YoY)
- In the Top100, 24 new InfiniBand systems were added in 2010 versus zero new Ethernet systems
- Mellanox InfiniBand is the only standard interconnect solution for Petascale systems: connects 4 of the 7 sustained-Petascale systems on the list
- Connects nearly 3x the number of Cray-based systems in the Top100, and 7.5x in the Top500
- Mellanox end-to-end 40G scalable HPC solutions accelerate NVIDIA GPU-based systems: GPUDirect technology enables faster communication and higher performance for systems on the list
- The most-used interconnect in the Top100, Top200 and Top300: 62% of the Top100, 58% of the Top200, 52% of the Top300
- Mellanox 10GigE is the highest-ranked 10GigE system (#127), at Purdue University
- Mellanox 10GigE with RoCE enables 23% higher efficiency compared to a 10GigE iWARP-connected system

InfiniBand - Unsurpassed System Efficiency [chart: Top500 systems listed by efficiency; categories: Mellanox InfiniBand, non-Mellanox InfiniBand, Mellanox 10GigE, iWARP, IB-GPU clusters]
- InfiniBand is the key element responsible for the highest system efficiency
- Average efficiency: InfiniBand 90%, 10GigE 75%, GigE 50%
- Mellanox delivers efficiencies of up to 96% with InfiniBand and 80% with 10GigE (the highest 10GigE efficiency)

Performance Leadership Across Industries
- 30%+ of the Fortune 100 and of the top global high-performance computers
- 6 of the top 10 global banks
- 9 of the top 10 automotive manufacturers
- 4 of the top 10 pharmaceutical companies
- 7 of the top 10 oil and gas companies

Commissariat à l'énergie atomique (CEA) - #6
- Tera 100, the fastest supercomputer in Europe; first PetaFlop system in Europe, 1.25 PF performance
- 4,300 Bull S Series servers; 140,000 Intel Xeon 7500 processing cores
- 300TB of central memory, 20PB of storage
- Mellanox 40Gb/s InfiniBand

Jülich - #13: 40Gb/s for Highest System Utilization
- Mellanox end-to-end 40Gb/s connectivity: network adaptation ensures highest efficiency; self-recovery ensures highest reliability; scalability for Peta/Exa-flop systems; on-demand resource allocation; green HPC, lowering system power consumption
- JuRoPA and HPC-FF supercomputer: 274.8 TeraFlops at 91.6% efficiency, #13 on the Top500; 3,288 Bull compute nodes, 79 TB main memory, 26,304 CPU cores

Cambridge University - HPC Cloud
- Provides on-demand access to over 400 users spread across 70 research groups
- Workloads: coupled atmosphere-ocean models; traditional sciences such as chemistry, physics and biology; HPC-based research such as bio-medicine, clinical medicine and social sciences
- Dell-based, with Mellanox end-to-end 40Gb/s InfiniBand: ConnectX-2 HCAs, a BridgeX gateway system (InfiniBand to Ethernet) and Mellanox 40Gb/s InfiniBand switches

Goethe University LOEWE-CSC - #22
- 470 TFlops, 20K CPU cores and 800 GPUs
- Clustervision-based: AMD Opteron Magny-Cours CPUs, ATI GPUs, Supermicro servers
- Mellanox 40Gb/s InfiniBand solutions
- Supports a wide range of research activities: theoretical physics, chemistry, life sciences, computer science

Moscow State University, Research Computing Center
- Lomonosov cluster, #17: 420 TFlops, T-Platforms TB2-XN T-Blade chassis
- 4.4K nodes, 36K cores
- Mellanox end-to-end 40Gb/s InfiniBand connectivity

KTH Royal Institute of Technology, Stockholm
- Ekman cluster, Dell-based: 1,300 nodes, 86 TFlops; located at PDC, Stockholm
- Mellanox 20Gb/s InfiniBand solutions; fat tree, full bisection bandwidth

AIRBUS HPC Centers
- 4K nodes in total across two locations, in France and Germany
- HP-based, with Mellanox InfiniBand end-to-end adapters and switches; Mellanox UFM for advanced fabric management
- To support its business needs, the AIRBUS HPC computing capability needs to double every year (1K nodes to 2K nodes and beyond)

ORNL Spider System - Lustre File System
- Oak Ridge National Lab's central storage system: 13,400 drives, 192 Lustre OSS, 240GB/s bandwidth, 10PB capacity
- Mellanox InfiniBand interconnect
- World-leading high-performance InfiniBand storage system

Swiss National Supercomputing Centre (CSCS) - GPFS Storage
- 108-node IBM GPFS cluster
- Mellanox FabricIT management software allows the GPFS clusters to be part of several class C networks, using a redundant switch fabric
- Mellanox BridgeX bridges the InfiniBand network to a 10G/1G Ethernet intra-network
- Partitioning ensures no traffic crosses between the different clusters connected to the GPFS file system
- Uses high availability for IPoIB

National Supercomputing Centre in Shenzhen - #3
- Dawning TC3600 blade supercomputer: 5,200 nodes, 120,640 cores, NVIDIA GPUs
- Mellanox end-to-end 40Gb/s InfiniBand solutions: ConnectX-2 and IS5000 switches
- 1.27 PetaFlops sustained performance; the first PetaFlop system in China

Tokyo Institute of Technology - #4
- TSUBAME 2.0, the fastest supercomputer in Japan; first PetaFlop system in Japan, 1.2 PF performance
- 1,400 HP ProLiant SL390s G7 servers; 4,200 NVIDIA Tesla M2050 Fermi-based GPUs
- Mellanox 40Gb/s InfiniBand

LANL Roadrunner - #7, World's First PetaFlop System
- World's first machine to break the PetaFlop barrier; Los Alamos National Lab; #2 on the Nov 2009 Top500 list
- Usage: national nuclear weapons, astronomy, human genome science and climate change
- CPU-GPU-InfiniBand combination for balanced computing; more than 1,000 trillion operations per second
- 12,960 IBM PowerXCell CPUs, 3,456 tri-blade units
- Mellanox ConnectX 20Gb/s InfiniBand adapters and Mellanox InfiniScale III-based 20Gb/s switches
- Mellanox interconnect: the only scalable high-performance solution for PetaScale computing

NASA Ames Research Center (NAS) - #11
- 9,200 nodes, 97,000 cores with SGI ICE servers; 773 TeraFlops sustained performance
- Mellanox InfiniBand adapters and switch-based systems
- Supports a variety of scientific and engineering projects: coupled atmosphere-ocean models, future space vehicle design, large-scale dark matter halos and galaxy evolution

LRZ SuperMUC Supercomputer - Leibniz Supercomputing Centre
- Schedule: 648 nodes in 2011, 9,000 nodes in 2012, 5,000 nodes in 2013
- System will be available to European researchers to expand the frontiers of medicine, astrophysics and other scientific disciplines
- Peak performance of 3 PetaFlop/s; IBM iDataPlex based
- Mellanox end-to-end FDR InfiniBand switches and adapters

InfiniBand in the Cloud and Enterprise: runs Oracle 10X faster - the world's fastest database machine, Mellanox 40Gb/s InfiniBand accelerated, 95% efficiency and scalability.

Sun Oracle Database Machine (Exadata 2)
- Mellanox 40Gb/s InfiniBand-accelerated rack servers and storage; servers run Oracle 11g
- Solves the I/O bottleneck between database servers and storage servers
- At least 10X Oracle data warehousing query performance; faster access to critical business information

Thank You Yossi Avni yossia@mellanox.com Cell: +972 54 9000 967