Mellanox Technologies


1 Mellanox Technologies May 2011 Yossi Avni VP Sales, EMEA

2 Connectivity Solutions for Efficient Computing: Cloud Computing, High-Performance Computing, Enterprise and Storage. Leading connectivity solution provider for servers and storage, providing users best-in-class networking bandwidth, application performance and response time.

3 Company Overview. Ticker: MLNX. Leading connectivity solutions provider for data center servers and storage systems; foundation for the world's most powerful and energy-efficient systems. More than 7M ports shipped to date; 10 tapeouts went to production on first pass. Recent awards. Company headquarters: Yokneam, Israel and Sunnyvale, California; ~700 employees; worldwide sales & support. Solid financial position: 2009: $116M; 2010: >$150M; 2011: >$0.25B. Strong gross margin; cash in the bank / no debt. Completed acquisition of Voltaire, Ltd.

4 Combining Best-in-Class Systems Knowledge and Software with Best-in-Class Silicon (InfiniBand, Ethernet/VPI).
Mellanox brings: InfiniBand and 10GbE silicon technology & roadmap; adapter leadership; advanced HW features; end-to-end experience; strong OEM engagements (InfiniScale & ConnectX HCA and switch silicon; HCA adapters, FIT; scalable switch systems; Dell, HP, IBM, and Oracle).
Voltaire brings: InfiniBand and 10GbE switch systems experience; IB switch market share leadership; end-to-end SW & systems solutions; strong enterprise customer engagements (Grid Directors & software; UFM fabric management SW; application acceleration SW; enterprise-class switches; HP, IBM).
Combined entity: silicon, adapters and systems leadership; IB market share leadership; full service offering; strong customer and OEM engagements = InfiniBand market leader with end-to-end silicon, systems and software solutions; FDR/EDR roadmap; application acceleration and fabric management software; full OEM coverage.
On the Ethernet side: 10GbE and 40GbE adapters; highest-performance Ethernet silicon; 10GbE LOM and mezzanine adapters at Dell, HP and IBM; plus 10GbE Vantage switches & SW; UFM fabric management SW; application acceleration SW; 24-, 48- and 288-port 10GbE switches = Ethernet innovator with end-to-end silicon, systems and software solutions; 10GbE, 40GbE and 100GbE roadmap; application acceleration and fabric management software; strong OEM coverage.

5 Leading End-to-End Connectivity Solution Provider for Servers and Storage Systems. Server/compute, switch/gateway, storage front/back-end. Virtual Protocol Interconnect: 40G IB & FCoIB, 10/40GigE & FCoE on the server side; 40G InfiniBand, 10/40GigE and Fibre Channel on the storage side. Industry's only end-to-end InfiniBand and Ethernet portfolio: ICs, adapter cards, host/fabric software, switches/gateways, cables.

6 Mellanox in the Top500: InfiniBand trends (chart: number of InfiniBand clusters, Nov 08 / Nov 09 / Nov 10). Mellanox InfiniBand builds the most powerful clusters: it connects 4 out of the Top 10 and 61 systems in the Top 100. InfiniBand represents 43% of the TOP500, and 98% of the InfiniBand clusters use Mellanox solutions. Mellanox InfiniBand enables the highest utilization on the TOP500: up to 96% system utilization. Mellanox 10GigE is the highest-ranked 10GigE system (#126).

7 Sun Oracle Database Machine Exadata 2. Mellanox 40Gb/s InfiniBand-accelerated rack servers and storage; servers run Oracle 11g. Solves the I/O bottleneck between database servers and storage servers: at least 10X Oracle data warehousing query performance and faster access to critical business information.

8 Connecting the Data Center Ecosystem: hardware OEMs (servers, storage, embedded), software partners, and end users (enterprise data centers, high-performance computing, embedded).

9 Mellanox The Interconnect Company Full support for both InfiniBand and Ethernet

10 Flexibility / Consolidation: Virtual Protocol Interconnect (VPI). Broad OS / virtualization support; strong software ecosystem foundation. Consolidation with extensive connectivity options and features; cost-effective convergence over InfiniBand/FCoIB or Ethernet/FCoE. Applications, protocols and acceleration engines for networking, cluster, storage, virtualization and RDMA. Performance: application acceleration, PCIe 2.0, low latency, high bandwidth. On-demand performance and services over 40Gb IB, FCoIB, 40GbE or FCoE.
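On Linux hosts running the mlx4 driver stack, the per-port personality behind VPI can typically be inspected and switched through sysfs. This is a configuration sketch only; the PCI address is a placeholder, and exact paths and accepted values depend on the driver release:

```shell
# Query the current link layer of port 1 on a ConnectX adapter
# (substitute your adapter's PCI address for 0000:04:00.0):
cat /sys/bus/pci/devices/0000:04:00.0/mlx4_port1

# Repurpose the same port from InfiniBand to Ethernet at runtime:
echo eth > /sys/bus/pci/devices/0000:04:00.0/mlx4_port1
```

The same port could later be switched back with `echo ib`, which is what makes a single adapter serve either fabric.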

11 Expanding End-to-End Ethernet Leadership. Industry's highest-performing Ethernet NIC: 10/40GigE w/FCoE with hardware offload; the Ethernet industry's lowest end-to-end latency at 1.3μs, for faster application completion and better server utilization. Tremendous ecosystem support momentum: multiple Tier-1 OEM design wins (Dell, IBM, HP) across servers, LAN on Motherboard (LOM), and storage systems; comprehensive OS support (VMware, Citrix, Windows, Linux). High-capacity, low-latency 10GigE switches: 24 to 288 ports with ns latency, sold through multiple Tier-1 OEMs (IBM, HP). Consolidation over shared fabrics. Integrated, complete management offering: service-oriented infrastructure management with open APIs. Upcoming low-latency 40GigE switches.

12 Award-Winning End-to-End InfiniBand. Adapter market and performance leadership: first to market with 40Gb/s (QDR) adapters; roadmap to end-to-end 56Gb/s (FDR) in 2011; delivers next-generation application efficiency capabilities; global Tier-1 server and storage availability (Bull, Dawning, Dell, Fujitsu, HP, IBM, Oracle, SGI, T-Platforms). Comprehensive, performance-leading switch family: industry's highest density and scalability; world's lowest port-to-port latency (25-50% lower than competitors). Comprehensive and feature-rich management/acceleration software, enhancing application performance and network ease-of-use. High-performance converged I/O gateways: optimal scaling, consolidation and energy efficiency; lowers space and power requirements and increases application performance. Copper and fiber cables: exceed IBTA mechanical & electrical standards for ultimate reliability and signal integrity.

13 Mellanox Solutions VPI: Virtual Protocol Interconnect Supporting both InfiniBand and Ethernet Silicon: PhyX, CX, BX, SX Adapters: HCAs for InfiniBand NICs for Ethernet Ethernet & InfiniBand Gateways and Switches SW Solutions: UFM and Acceleration packages Cables

14 Breadth and Leadership: 10 and 40Gbps Ethernet NICs (Adapters). Industry firsts: dual-port PCIe GigE adapters; 10/40GigE w/FCoE with HW offload; lowest latency in the Ethernet industry at 1.3μs end-to-end, enabling faster application completion and better server utilization. Tremendous ecosystem support momentum: multiple Tier-1 OEM design wins; ~10% worldwide market share and growing; servers, LAN on Motherboard (LOM), and storage systems. VMware vSphere 4 Ready; Citrix XenServer 4.1 in-the-box support; Windows Server 2003 & 2008, RedHat 5, SLES 11.

15 Industry-Leading InfiniBand HCAs (Adapters). InfiniBand market and performance leadership: first to market with 40Gb/s adapters; 56Gb/s in 2H11; 100Gb/s adapters on the roadmap for 2012. Critical next-generation efficiency capabilities: GPUDirect improves application performance on GPU-based clusters; CORE-Direct provides in-hardware application communications offload (MPI collective offload in the HCAs). Strong industry adoption of 40Gb/s InfiniBand: ~73% of revenue; close to 100% of IB-connected Top500 systems. Worldwide Tier-1 server OEM availability.

16 ConnectX-2 Adapter Products.
Part numbers: MHGH29B-XTR, MHRH19B-XTR, MHQH19B-XTR, MHRH29B-XTR, MHQH29B-XTR, MHZH29-XTR (VPI); MNPH29C-XTR, MNQH19-XTR (Ethernet only).
Connectors and port speeds: IB/CX4, 20Gb/s; QSFP, 20/40Gb/s; QSFP (40Gb/s) plus SFP+ (10Gb/s); SFP+ with DAC or optical module; QSFP with copper or AOC.
PCIe 2.0 speed: 5.0GT/s (40Gb/s).
Features: hardware-based transport/application offloads, RDMA, GPU communication acceleration, QoS and congestion control; stateless offload, FCoE offload, SR-IOV.
OS support: RHEL, SLES, Windows, ESX; RHEL, SLES, CentOS, Windows, ESX, XenServer.
Typical power (both ports): 6.7W; 7W; 7.7W; 8W; 8.7W; 8W; 6.4W (Cu) / 8.4W (-SR); 7W.

17 High-Performance Switches and Gateways. InfiniBand switches: InfiniBand market leader; broad solution from 8 to 648 ports; industry's highest-density switches at 51.8Tb/s; comprehensive and feature-rich fabric management SW; world's lowest port-to-port latency; advanced networking features (congestion management, QoS, dynamic routing). BridgeX: high-performance converged I/O gateways; EoIB for Linux-based hosts is GA; upcoming support for bridging from IB to Ethernet (EoIB) and from IB to Fibre Channel (FCoIB).

18 40Gb/s InfiniBand Switch System Portfolio.
IS ports switch, unmanaged (externally managed) - host subnet manager based on MLNX_OFED - for cost-conscious HPC/EDC customers with their own management software.
IS ports switch, chassis management - fabric management for small clusters (up to 108 nodes) - low-cost entry-level managed switch.
IS ports switch, fully managed - fabric management for large clusters (up to 648 nodes) - comes with FabricIT integrated without license.
IS5x00 - modular chassis systems with up to 648 ports - designed for large to Peta-scale computing - redundant components for high availability.

19 Lowest-Cost 40Gb/s Switch System Portfolio.
IS ports switch, unmanaged - fixed (no FRUs) - for the most cost-conscious HPC/EDC customers with their own management software.
InfiniScale IV QDR InfiniBand switch, 8, 18 or 36 QSFP ports, 1 power supply, unmanaged, connector-side airflow exhaust, no FRUs, with rack rails, short-depth form factor, RoHS 6.
IS ports switch, unmanaged - fixed (no FRUs) - for the most cost-conscious HPC/EDC customers with their own management software.
IS ports switch, unmanaged - fixed (no FRUs) - for the most cost-conscious HPC/EDC customers with their own management software.

20 High-Performance Ethernet Switches: Vantage 10GbE Portfolio. Vantage 6024: Layer 2/3 low-latency switch, 24 ports, managed as a single logical switch. Vantage 8500: Layer 2 core switch, 288 ports. Vantage 6048: Layer 2/3 cloud switch, 48 ports.

21 IB Gateway System: BX5020. Uplinks: 4 QSFP (IB). Downlinks: 16 SFP+, as 12 1/10GigE or 16 1/2/4/8Gb/s FC ports or a combination of EN and FC ports. Requires CX/CX2 HCAs with EoIB/FCoIB drivers; EoIB is enabled by default, and a separate license is needed for FCoIB. EoIB: GA April '11; FCoIB: beta now. Dual hot-swappable redundant power supplies; replaceable fan drawer. Embedded management: PowerPC CPU, GigE and RS232 out-of-band management ports. The BX5020 is the best fit for HPC and Linux environments.

22 Grid Director 4036E: 34-port switch + gateway, 1U high, based on Voltaire IB-ETH bridging silicon. 34 x QDR/DDR/SDR (auto-negotiating) 40Gb/s InfiniBand ports (QSFP); 2 x 1/10GbE ports (SFP+); OOB Ethernet and serial ports for management. Redundant hot-swappable power supplies; shared FRUs with the 4036. Best fit for FSI.

23 UFM: Unified Fabric Management. End-to-end fabric management of switches and nodes; supports the whole Mellanox portfolio, InfiniBand and Ethernet. Provides deep visibility: real-time and historical monitoring of fabric health and performance; central fabric dashboard and fabric-wide congestion map. Optimizes performance: QoS, Traffic Aware Routing Algorithm (TARA), multicast routing optimization. Eliminates complexity: one pane of glass to monitor and configure fabrics of thousands of nodes; enables advanced features like segmentation and QoS by automating provisioning; abstracts the physical layer into logical entities such as jobs and resource groups. Maximizes fabric utilization: threshold-based alerts to quickly identify fabric faults; performance optimization for maximum link utilization; open architecture for integration with other tools, in-context actions and a fabric database.
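The threshold-based alerting idea above can be sketched in a few lines of Python; the port and counter names and the limits below are illustrative only, not UFM's actual schema:

```python
def check_thresholds(counters, limits):
    """Flag (port, counter, value) triples that exceed their limit.
    'counters' maps port names to counter dicts; 'limits' maps
    counter names to maximum acceptable values. Counters with no
    configured limit are never flagged."""
    alerts = []
    for port, stats in counters.items():
        for name, value in stats.items():
            if value > limits.get(name, float("inf")):
                alerts.append((port, name, value))
    return alerts

# Hypothetical sample: one switch port with excessive symbol errors.
sample = {"switch1/p3": {"symbol_errors": 250, "xmit_discards": 0}}
print(check_thresholds(sample, {"symbol_errors": 100}))
```

A fabric manager would run such a sweep periodically against live port counters and raise an alert for each returned triple.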

24 Mellanox Acceleration Packages. VMA accelerates messaging (FSI); VSA accelerates iSCSI block storage; FCA accelerates MPI collectives (HPC).

25 What is VMA? Key features: multicast message acceleration using OS bypass; a BSD-socket-compliant dynamically linked library; seamlessly supports any socket-based application; scales well to 100s of nodes and 1000s of subscribers. Use cases: enterprise low-latency messaging; exchange ingress/egress connection acceleration; low-latency trading for hedge funds / banks; publisher-subscriber (pub-sub) applications.
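Because VMA is a dynamically linked, BSD-socket-compliant library, applications keep using ordinary sockets and are accelerated by preloading the library (typically `LD_PRELOAD=libvma.so ./app`). A minimal sketch of such unmodified socket code; a real deployment would subscribe to multicast feeds on the NIC, but a loopback unicast pair keeps the sketch self-contained:

```python
import socket

def udp_round_trip(payload=b"tick"):
    """Plain BSD-socket send/receive: exactly the API VMA intercepts.
    No VMA-specific calls appear anywhere in the code."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))          # OS picks a free port
    rx.settimeout(2.0)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(payload, rx.getsockname())
    data, _ = rx.recvfrom(64)
    tx.close()
    rx.close()
    return data

print(udp_round_trip())
```

Running the same script with and without the preload is how the library's "no code change" claim is usually exercised.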

26 What is FCA? Mellanox Fabric Collectives Accelerator (FCA) is a host software library that plugs into the MPI library. It utilizes topology- and device-aware algorithms to make collective computation run faster, and offloads collective computations to hardware engines: CORE-Direct when applicable, the switch iCPU when applicable. It provides a simple interface for 3rd-party MPIs, with no need to change the application.
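For intuition, the collective pattern FCA optimizes can be simulated in plain Python. This recursive-doubling allreduce is a textbook algorithm, shown only to illustrate the log2(N)-step pairwise exchange that CORE-Direct moves off the host CPU; it is not Mellanox's implementation:

```python
def allreduce_sum(values):
    """Simulate a recursive-doubling allreduce over a power-of-two
    number of ranks: after log2(N) exchange steps, every rank holds
    the global sum."""
    n = len(values)
    assert n & (n - 1) == 0, "power-of-two rank count for simplicity"
    vals = list(values)
    dist = 1
    while dist < n:
        nxt = list(vals)
        for rank in range(n):
            peer = rank ^ dist          # exchange partner at this step
            nxt[rank] = vals[rank] + vals[peer]
        vals = nxt
        dist <<= 1
    return vals

print(allreduce_sum([1, 2, 3, 4]))      # every rank ends with 10
```

Each step doubles the distance between partners, which is why the host-side version costs log2(N) communication rounds; offloading those rounds to the HCA or switch frees the CPU during the collective.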

27 What is VSA? RDMA storage target software: an iSCSI-over-RDMA (iSER) target for the data path, with enhanced target management (CLI and GUI). Scalability: multiple boxes scale linearly, managed as one virtual cluster (with HA & load balancing). Choice of transport: InfiniBand with iSER; 10GigE with iSCSI (RoCE coming next). Use cases: persistent messaging for financial services; fastest storage for database applications; fast cache for cloud computing; real-time billing in telecommunications.

28 Copper and Fiber Cable Solutions. Complete portfolio: passive copper up to 8 meters; active copper 8-12 meters; assembled optical up to 300 meters. Full validation: exceeds IBTA mechanical & electrical standards; verification on Mellanox switches and HCAs; interoperability and signal integrity; qualification and quality assurance program. QSFP-to-SFP+ Adapter (QSA): the world's first solution for the QSFP-to-SFP+ conversion challenge; smooth, cost-effective connections between 10GigE, 40GigE and 40Gb/s InfiniBand; future-proofs and enhances the 40Gb/s ecosystem today.

29 InfiniBand for Fabric Consolidation. A high-bandwidth pipe for capacity provisioning. Dedicated I/O channels enable convergence for networking, storage and management, with application compatibility. QoS differentiates traffic types; partitions provide logical fabrics and isolation. Flexibility: soft servers / fabric repurposing.
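On an InfiniBand fabric, the partitions mentioned above are PKeys distributed by the subnet manager; with OpenSM they are declared in a partitions.conf file. An illustrative fragment, with made-up partition names and keys:

```
# Default partition, full membership for every node:
Default=0x7fff, ipoib : ALL=full ;
# Isolated partition for storage traffic:
storage=0x2001, ipoib : ALL=full ;
# Compute partition with limited membership:
compute=0x2002 : ALL=limited ;
```

Nodes in different partitions cannot exchange traffic unless a port is a full member of both, which is what gives the isolation the slide refers to.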

30 Summary and Mellanox WW Global Support

31 Complete End-to-End Connectivity.
Host/fabric software - management: UFM, FabricIT; integration with job schedulers; inbox drivers.
Application accelerations: collectives accelerations (FCA/CORE-Direct); GPU accelerations (GPUDirect); MPI/SHMEM; RDMA; quality of service.
Networking efficiency/scalability: adaptive routing; congestion management; traffic-aware routing (TARA).
Server and storage high-speed connectivity: latency; bandwidth; CPU utilization; message rate.

32 Efficient Scale-out Data Center from End to End.
Ease of use / management: monitor/analyze, optimize, troubleshoot.
CPU efficiency & accelerated application performance: low-latency sockets & RDMA; CORE-Direct, GPUDirect; software acceleration.
Deterministic network at scale: adaptive routing, congestion control, quality of service.
Reliable connection of servers and storage: 60% infrastructure reduction, 65% energy cost reduction, 10X performance increase.

33 Mellanox M-1 Support Services: expert cluster service and support. Service level agreements: Bronze, Silver, Gold. Professional services: cluster design for every size cluster; best-practices review for cluster bring-up; best-practices review for cluster installation and maintenance; fabric management configuration and validation assistance; benchmarking assistance and performance tuning optimizations; assistance in bringing up any InfiniBand-attached storage; best-practices review, bring-up and optimization of Mellanox gateway products for bridging to Ethernet or Fibre Channel systems. Online customer support portal: complete case management (create, escalate, track, notify); easy access to documentation and SW/FW.

34 Appendix: TOP500 November 2010 Mellanox References

35 Interconnect Trends in the Top500. InfiniBand is the only growing high-speed clustering standard interconnect: 216 systems on the Nov 2010 list, a 19% increase since Nov 2009. InfiniBand is the HPC interconnect of choice, connecting 43.2% of the Top500 systems. Over 98% of the InfiniBand-connected systems utilize Mellanox technology and solutions.

36 InfiniBand/Mellanox in the Top500. InfiniBand is the only growing standard interconnect technology: 216 clusters, 43.2% of the list, a 19% increase since Nov 2009 (YoY). Ethernet shows a decline in number of systems: -13% YoY. Clustered proprietary interconnects show a decline in number of systems: -40% YoY. In the Top100, 24 new InfiniBand systems were added in 2010, versus 0 (zero) new Ethernet systems. Mellanox InfiniBand is the only standard interconnect solution for Petascale systems, connecting 4 out of the 7 sustained-Petascale-performance systems on the list. InfiniBand connects nearly 3x the number of Cray-based systems in the Top100, and 7.5x in the Top500. Mellanox end-to-end 40G scalable HPC solutions accelerate NVIDIA GPU-based systems; GPUDirect technology enables faster communications and a performance increase for systems on the list. InfiniBand is the most used interconnect in the Top100, Top200 and Top300: 62% of the Top100, 58% of the Top200, 52% of the Top300. Mellanox 10GigE is the highest-ranked 10GigE system (#127), at Purdue University. Mellanox 10GigE RoCE enables 23% higher efficiency compared to a 10GigE iWARP-connected system.

37 InfiniBand: Unsurpassed System Efficiency (chart: Top500 systems listed by efficiency - Mellanox InfiniBand, non-Mellanox InfiniBand, Mellanox 10GigE, iWARP, IB-GPU clusters). InfiniBand is the key element responsible for the highest system efficiency. Average efficiency: InfiniBand 90%, 10GigE 75%, GigE 50%. Mellanox delivers efficiencies of up to 96% with IB and 80% with 10GigE (the highest 10GigE efficiency).
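Top500 efficiency is simply sustained Linpack performance (Rmax) divided by theoretical peak (Rpeak); the figures below are illustrative, not taken from the list:

```python
def efficiency(rmax, rpeak):
    """Top500 system efficiency: sustained Linpack (Rmax) over
    theoretical peak (Rpeak), both in the same units (e.g. TFlops)."""
    return rmax / rpeak

# An illustrative system sustaining 90 of 100 peak TFlops:
print(f"{efficiency(90.0, 100.0):.0%}")   # prints 90%, the InfiniBand average cited above
```

The interconnect drives this ratio because communication stalls are the main gap between what the processors could compute and what Linpack actually sustains.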

38 Performance Leadership Across Industries. 30%+ of the Fortune-100 & top global high-performance computers; 6 of the top 10 global banks; 9 of the top 10 automotive manufacturers; 4 of the top 10 pharmaceutical companies; 7 of the top 10 oil and gas companies.

39 Commissariat à l'énergie atomique (CEA) - #6. Tera 100, the fastest supercomputer in Europe and the first PetaFlop system in Europe. 4,300 Bull S Series servers; 140,000 Intel Xeon 7500 processing cores; 300TB of central memory, 20PB of storage; Mellanox InfiniBand 40Gb/s.

40 Jülich - #13: 40Gb/s for Highest System Utilization. Mellanox end-to-end 40Gb/s connectivity. Network adaptation ensures the highest efficiency; self-recovery ensures the highest reliability; scalability is the solution for Peta/Exaflop systems; on-demand resources are allocated per demand; green HPC lowers system power consumption. JuRoPA and HPC-FF supercomputer: TeraFlops at 91.6% efficiency; #13 on the Top500; 3,288 Bull compute nodes; 79 TB main memory; 26,304 CPU cores.

41 Cambridge University - HPC Cloud. Provides on-demand access to support over 400 users spread across 70 research groups: coupled atmosphere-ocean models; traditional sciences such as chemistry, physics and biology; HPC-based research such as bio-medicine, clinical medicine and social sciences. Dell-based, with Mellanox end-to-end 40Gb/s InfiniBand: ConnectX-2 HCAs; a BridgeX gateway system (InfiniBand to Ethernet); Mellanox 40Gb/s InfiniBand switches.

42 Goethe University LOEWE-CSC - #22. 470 TFlops, 20K CPU cores and 800 GPUs. Clustervision-based: AMD Opteron Magny-Cours CPUs, ATI GPUs, Supermicro servers; Mellanox 40Gb/s InfiniBand solutions. The system supports a wide range of research activities: theoretical physics; chemistry; life sciences; computer science.

43 Moscow State University: Research Computing Center Lomonosov cluster, # TFlops, T-Platforms TB2-XN T-Blade Chassis 4.4K nodes, 36K cores Mellanox end-to-end 40Gb/s InfiniBand connectivity

44 KTH Royal Institute of Technology, Stockholm. Ekman cluster, Dell-based, 1,300 nodes, 86 TFlops; located at PDC, Stockholm. Mellanox 20Gb/s InfiniBand solutions; fat tree, full bisection bandwidth.

45 AIRBUS HPC Centers. 4K nodes total in two locations, France and Germany. HP-based, with Mellanox InfiniBand end-to-end adapters and switches, and Mellanox UFM for advanced fabric management. To support its business needs, the AIRBUS HPC computing capability needs to double every year (1K nodes to 2K nodes).

46 ORNL Spider System - Lustre File System. Oak Ridge National Lab's central storage system: 13,400 drives; 192 Lustre OSS; 240GB/s bandwidth; 10PB capacity; Mellanox InfiniBand interconnect. A world-leading high-performance InfiniBand storage system.

47 Swiss Supercomputing Center (CSCS) GPFS Storage. A 108-node IBM GPFS cluster. Mellanox FabricIT management software allows the GPFS clusters to be part of several class C networks, using a redundant switch fabric. Mellanox BridgeX bridges the InfiniBand network to a 10G/1G Ethernet intra-network. Partitioning ensures no traffic crosses between the different clusters connected to the GPFS file system. Uses high availability for IPoIB.

48 National Supercomputing Centre in Shenzhen - #3. Dawning TC3600 blade supercomputer: 5,200 nodes, 120,640 cores, NVIDIA GPUs. Mellanox end-to-end 40Gb/s InfiniBand solutions: ConnectX-2 adapters and IS5000 switches. 1.27 sustained PetaFlops: the first PetaFlop system in China.

49 Tokyo Institute of Technology - #4. TSUBAME 2.0, the fastest supercomputer in Japan and the first PetaFlop system in Japan, with 1.2 PF performance. 1,400 HP ProLiant SL390s G7 servers; 4,200 NVIDIA Tesla M2050 Fermi-based GPUs; Mellanox 40Gb/s InfiniBand.

50 LANL Roadrunner - #7, the World's First PetaFlop System. The world's first machine to break the PetaFlop barrier; Los Alamos National Lab; #2 on the Nov 2009 Top500 list. Usage: national nuclear weapons, astronomy, human genome science and climate change. A CPU-GPU-InfiniBand combination for balanced computing: more than 1,000 trillion operations per second; 12,960 IBM PowerXCell CPUs in 3,456 tri-blade units; Mellanox ConnectX 20Gb/s InfiniBand adapters; Mellanox InfiniScale III-based 20Gb/s switches. Mellanox interconnect is the only scalable high-performance solution for PetaScale computing.

51 NASA Ames Research Center (NAS) - #11. 9,200 nodes / 97,000 cores with SGI ICE servers; 773 TeraFlops sustained performance; Mellanox InfiniBand adapters and switch-based systems. Supports a variety of scientific and engineering projects: coupled atmosphere-ocean models; future space vehicle design; large-scale dark matter halos and galaxy evolution.

52 LRZ SuperMUC Supercomputer (Leibniz Supercomputing Centre). Schedule: 648 nodes in 2011, 9,000 nodes in 2012, 5,000 nodes in 2013. The system will be available to European researchers to expand the frontiers of medicine, astrophysics and other scientific disciplines, and will provide a peak performance of 3 Petaflop/s. IBM iDataPlex-based, with Mellanox end-to-end FDR InfiniBand switches and adapters.

53 InfiniBand in the Cloud and Enterprise: runs Oracle 10X faster. The world's fastest database machine, accelerated by Mellanox 40Gb/s InfiniBand: 95% efficiency and scalability.

54 Sun Oracle Database Machine Exadata 2. Mellanox 40Gb/s InfiniBand-accelerated rack servers and storage; servers run Oracle 11g. Solves the I/O bottleneck between database servers and storage servers: at least 10X Oracle data warehousing query performance and faster access to critical business information.

55 Thank You Yossi Avni Cell:


More information

PRIMERGY server-based High Performance Computing solutions

PRIMERGY server-based High Performance Computing solutions PRIMERGY server-based High Performance Computing solutions PreSales - May 2010 - HPC Revenue OS & Processor Type Increasing standardization with shift in HPC to x86 with 70% in 2008.. HPC revenue by operating

More information

LS DYNA Performance Benchmarks and Profiling. January 2009

LS DYNA Performance Benchmarks and Profiling. January 2009 LS DYNA Performance Benchmarks and Profiling January 2009 Note The following research was performed under the HPC Advisory Council activities AMD, Dell, Mellanox HPC Advisory Council Cluster Center The

More information

HPC Update: Engagement Model

HPC Update: Engagement Model HPC Update: Engagement Model MIKE VILDIBILL Director, Strategic Engagements Sun Microsystems [email protected] Our Strategy Building a Comprehensive HPC Portfolio that Delivers Differentiated Customer Value

More information

HUAWEI Tecal E9000 Converged Infrastructure Blade Server

HUAWEI Tecal E9000 Converged Infrastructure Blade Server E9000 Converged Infrastructure Blade Server Professional Trusted Future-oriented HUAWEI TECHNOLOGIES CO., LTD. The E9000 is a new-generation blade server that integrates computing, storage, switching,

More information

Introduction. Need for ever-increasing storage scalability. Arista and Panasas provide a unique Cloud Storage solution

Introduction. Need for ever-increasing storage scalability. Arista and Panasas provide a unique Cloud Storage solution Arista 10 Gigabit Ethernet Switch Lab-Tested with Panasas ActiveStor Parallel Storage System Delivers Best Results for High-Performance and Low Latency for Scale-Out Cloud Storage Applications Introduction

More information

Emulex OneConnect 10GbE NICs The Right Solution for NAS Deployments

Emulex OneConnect 10GbE NICs The Right Solution for NAS Deployments Emulex OneConnect 10GbE NICs The Right Solution for NAS Deployments As the value and volume of business data continues to rise, small/medium/ enterprise (SME) businesses need high-performance Network Attached

More information

Replacing SAN with High Performance Windows Share over a Converged Network

Replacing SAN with High Performance Windows Share over a Converged Network WHITE PAPER November 2015 Replacing SAN with High Performance Windows Share over a Converged Network Abstract...1 Introduction...1 Early FC SAN (Storage Area Network)...1 FC vs. Ethernet...1 Changing SAN

More information

Mellanox Cloud and Database Acceleration Solution over Windows Server 2012 SMB Direct

Mellanox Cloud and Database Acceleration Solution over Windows Server 2012 SMB Direct Mellanox Cloud and Database Acceleration Solution over Windows Server 2012 Direct Increased Performance, Scaling and Resiliency July 2012 Motti Beck, Director, Enterprise Market Development [email protected]

More information

ECLIPSE Performance Benchmarks and Profiling. January 2009

ECLIPSE Performance Benchmarks and Profiling. January 2009 ECLIPSE Performance Benchmarks and Profiling January 2009 Note The following research was performed under the HPC Advisory Council activities AMD, Dell, Mellanox, Schlumberger HPC Advisory Council Cluster

More information

Emulex OneConnect 10GbE NICs The Right Solution for NAS Deployments

Emulex OneConnect 10GbE NICs The Right Solution for NAS Deployments Emulex OneConnect 10GbE NICs The Right Solution for NAS Deployments As the value and volume of business data continues to rise, small/medium/ enterprise (SME) businesses need high-performance Network Attached

More information

Mellanox WinOF for Windows 8 Quick Start Guide

Mellanox WinOF for Windows 8 Quick Start Guide Mellanox WinOF for Windows 8 Quick Start Guide Rev 1.0 www.mellanox.com NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ( PRODUCT(S) ) AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES

More information

Large Scale Clustering with Voltaire InfiniBand HyperScale Technology

Large Scale Clustering with Voltaire InfiniBand HyperScale Technology Large Scale Clustering with Voltaire InfiniBand HyperScale Technology Scalable Interconnect Topology Tradeoffs Since its inception, InfiniBand has been optimized for constructing clusters with very large

More information

Introduction to Infiniband. Hussein N. Harake, Performance U! Winter School

Introduction to Infiniband. Hussein N. Harake, Performance U! Winter School Introduction to Infiniband Hussein N. Harake, Performance U! Winter School Agenda Definition of Infiniband Features Hardware Facts Layers OFED Stack OpenSM Tools and Utilities Topologies Infiniband Roadmap

More information

Unified Computing Systems

Unified Computing Systems Unified Computing Systems Cisco Unified Computing Systems simplify your data center architecture; reduce the number of devices to purchase, deploy, and maintain; and improve speed and agility. Cisco Unified

More information

Interconnecting Future DoE leadership systems

Interconnecting Future DoE leadership systems Interconnecting Future DoE leadership systems Rich Graham HPC Advisory Council, Stanford, 2015 HPC The Challenges 2 Proud to Accelerate Future DOE Leadership Systems ( CORAL ) Summit System Sierra System

More information

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer The Future of Computing Cisco Unified Computing System Markus Kunstmann Channels Systems Engineer 2009 Cisco Systems, Inc. All rights reserved. Data Centers Are under Increasing Pressure Collaboration

More information

High Performance OpenStack Cloud. Eli Karpilovski Cloud Advisory Council Chairman

High Performance OpenStack Cloud. Eli Karpilovski Cloud Advisory Council Chairman High Performance OpenStack Cloud Eli Karpilovski Cloud Advisory Council Chairman Cloud Advisory Council Our Mission Development of next generation cloud architecture Providing open specification for cloud

More information

From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller

From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller White Paper From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller The focus of this paper is on the emergence of the converged network interface controller

More information

ECLIPSE Best Practices Performance, Productivity, Efficiency. March 2009

ECLIPSE Best Practices Performance, Productivity, Efficiency. March 2009 ECLIPSE Best Practices Performance, Productivity, Efficiency March 29 ECLIPSE Performance, Productivity, Efficiency The following research was performed under the HPC Advisory Council activities HPC Advisory

More information

Hadoop on the Gordon Data Intensive Cluster

Hadoop on the Gordon Data Intensive Cluster Hadoop on the Gordon Data Intensive Cluster Amit Majumdar, Scientific Computing Applications Mahidhar Tatineni, HPC User Services San Diego Supercomputer Center University of California San Diego Dec 18,

More information

PCIe Over Cable Provides Greater Performance for Less Cost for High Performance Computing (HPC) Clusters. from One Stop Systems (OSS)

PCIe Over Cable Provides Greater Performance for Less Cost for High Performance Computing (HPC) Clusters. from One Stop Systems (OSS) PCIe Over Cable Provides Greater Performance for Less Cost for High Performance Computing (HPC) Clusters from One Stop Systems (OSS) PCIe Over Cable PCIe provides greater performance 8 7 6 5 GBytes/s 4

More information

Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks. An Oracle White Paper April 2003

Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks. An Oracle White Paper April 2003 Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks An Oracle White Paper April 2003 Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building

More information

Block based, file-based, combination. Component based, solution based

Block based, file-based, combination. Component based, solution based The Wide Spread Role of 10-Gigabit Ethernet in Storage This paper provides an overview of SAN and NAS storage solutions, highlights the ubiquitous role of 10 Gigabit Ethernet in these solutions, and illustrates

More information

Cisco for SAP HANA Scale-Out Solution on Cisco UCS with NetApp Storage

Cisco for SAP HANA Scale-Out Solution on Cisco UCS with NetApp Storage Cisco for SAP HANA Scale-Out Solution Solution Brief December 2014 With Intelligent Intel Xeon Processors Highlights Scale SAP HANA on Demand Scale-out capabilities, combined with high-performance NetApp

More information

10GBASE T for Broad 10_Gigabit Adoption in the Data Center

10GBASE T for Broad 10_Gigabit Adoption in the Data Center 10GBASE T for Broad 10_Gigabit Adoption in the Data Center Contributors Carl G. Hansen, Intel Carrie Higbie, Siemon Yinglin (Frank) Yang, Commscope, Inc 1 Table of Contents 10Gigabit Ethernet: Drivers

More information

Ultra Low Latency Data Center Switches and iwarp Network Interface Cards

Ultra Low Latency Data Center Switches and iwarp Network Interface Cards WHITE PAPER Delivering HPC Applications with Juniper Networks and Chelsio Communications Ultra Low Latency Data Center Switches and iwarp Network Interface Cards Copyright 20, Juniper Networks, Inc. Table

More information

InfiniBand in the Enterprise Data Center

InfiniBand in the Enterprise Data Center InfiniBand in the Enterprise Data Center InfiniBand offers a compelling value proposition to IT managers who value data center agility and lowest total cost of ownership Mellanox Technologies Inc. 2900

More information

Oracle Big Data Appliance: Datacenter Network Integration

Oracle Big Data Appliance: Datacenter Network Integration An Oracle White Paper May 2012 Oracle Big Data Appliance: Datacenter Network Integration Disclaimer The following is intended to outline our general product direction. It is intended for information purposes

More information

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance 11 th International LS-DYNA Users Conference Session # LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance Gilad Shainer 1, Tong Liu 2, Jeff Layton 3, Onur Celebioglu

More information

JuRoPA. Jülich Research on Petaflop Architecture. One Year on. Hugo R. Falter, COO Lee J Porter, Engineering

JuRoPA. Jülich Research on Petaflop Architecture. One Year on. Hugo R. Falter, COO Lee J Porter, Engineering JuRoPA Jülich Research on Petaflop Architecture One Year on Hugo R. Falter, COO Lee J Porter, Engineering HPC Advisoy Counsil, Workshop 2010, Lugano 1 Outline The work of ParTec on JuRoPA (HF) Overview

More information

Ethernet: THE Converged Network Ethernet Alliance Demonstration as SC 09

Ethernet: THE Converged Network Ethernet Alliance Demonstration as SC 09 Ethernet: THE Converged Network Ethernet Alliance Demonstration as SC 09 Authors: Amphenol, Cisco, Dell, Fulcrum Microsystems, Intel, Ixia, JDSU, Mellanox, NetApp, Panduit, QLogic, Spirent, Tyco Electronics,

More information

Choosing the Best Network Interface Card Mellanox ConnectX -3 Pro EN vs. Intel X520

Choosing the Best Network Interface Card Mellanox ConnectX -3 Pro EN vs. Intel X520 COMPETITIVE BRIEF August 2014 Choosing the Best Network Interface Card Mellanox ConnectX -3 Pro EN vs. Intel X520 Introduction: How to Choose a Network Interface Card...1 Comparison: Mellanox ConnectX

More information

Intel Cluster Ready Appro Xtreme-X Computers with Mellanox QDR Infiniband

Intel Cluster Ready Appro Xtreme-X Computers with Mellanox QDR Infiniband Intel Cluster Ready Appro Xtreme-X Computers with Mellanox QDR Infiniband A P P R O I N T E R N A T I O N A L I N C Steve Lyness Vice President, HPC Solutions Engineering [email protected] Company Overview

More information

Mellanox Reference Architecture for Red Hat Enterprise Linux OpenStack Platform 4.0

Mellanox Reference Architecture for Red Hat Enterprise Linux OpenStack Platform 4.0 Mellanox Reference Architecture for Red Hat Enterprise Linux OpenStack Platform 4.0 Rev 1.1 March 2014 www.mellanox.com NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ( PRODUCT(S) ) AND ITS RELATED

More information

CUTTING-EDGE SOLUTIONS FOR TODAY AND TOMORROW. Dell PowerEdge M-Series Blade Servers

CUTTING-EDGE SOLUTIONS FOR TODAY AND TOMORROW. Dell PowerEdge M-Series Blade Servers CUTTING-EDGE SOLUTIONS FOR TODAY AND TOMORROW Dell PowerEdge M-Series Blade Servers Simplifying IT The Dell PowerEdge M-Series blade servers address the challenges of an evolving IT environment by delivering

More information

Mellanox Technologies, Ltd. Announces Record Quarterly Results

Mellanox Technologies, Ltd. Announces Record Quarterly Results PRESS RELEASE Press/Media Contacts Ashley Paula Waggener Edstrom +1-415-547-7024 [email protected] USA Investor Contact Gwyn Lauber Mellanox Technologies +1-408-916-0012 [email protected] Israel

More information

Uncompromising Integrity. Making 100Gb/s deployments as easy as 10Gb/s

Uncompromising Integrity. Making 100Gb/s deployments as easy as 10Gb/s Uncompromising Integrity Making 100Gb/s deployments as easy as 10Gb/s Supporting Data Rates up to 100Gb/s Mellanox 100Gb/s LinkX cables and transceivers make 100Gb/s deployments as easy as 10Gb/s. A wide

More information

SMB Advanced Networking for Fault Tolerance and Performance. Jose Barreto Principal Program Managers Microsoft Corporation

SMB Advanced Networking for Fault Tolerance and Performance. Jose Barreto Principal Program Managers Microsoft Corporation SMB Advanced Networking for Fault Tolerance and Performance Jose Barreto Principal Program Managers Microsoft Corporation Agenda SMB Remote File Storage for Server Apps SMB Direct (SMB over RDMA) SMB Multichannel

More information

A-CLASS The rack-level supercomputer platform with hot-water cooling

A-CLASS The rack-level supercomputer platform with hot-water cooling A-CLASS The rack-level supercomputer platform with hot-water cooling INTRODUCTORY PRESENTATION JUNE 2014 Rev 1 ENG COMPUTE PRODUCT SEGMENTATION 3 rd party board T-MINI P (PRODUCTION): Minicluster/WS systems

More information

Power Saving Features in Mellanox Products

Power Saving Features in Mellanox Products WHITE PAPER January, 2013 Power Saving Features in Mellanox Products In collaboration with the European-Commission ECONET Project Introduction... 1 The Multi-Layered Green Fabric... 2 Silicon-Level Power

More information

Enabling High performance Big Data platform with RDMA

Enabling High performance Big Data platform with RDMA Enabling High performance Big Data platform with RDMA Tong Liu HPC Advisory Council Oct 7 th, 2014 Shortcomings of Hadoop Administration tooling Performance Reliability SQL support Backup and recovery

More information

Oracle Virtual Networking Overview and Frequently Asked Questions March 26, 2013

Oracle Virtual Networking Overview and Frequently Asked Questions March 26, 2013 Oracle Virtual Networking Overview and Frequently Asked Questions March 26, 2013 Overview Oracle Virtual Networking revolutionizes data center economics by creating an agile, highly efficient infrastructure

More information

Mellanox OpenStack Solution Reference Architecture

Mellanox OpenStack Solution Reference Architecture Mellanox OpenStack Solution Reference Architecture Rev 1.3 January 2014 www.mellanox.com NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ( PRODUCT(S) ) AND ITS RELATED DOCUMENTATION ARE PROVIDED BY

More information

Storage, Cloud, Web 2.0, Big Data Driving Growth

Storage, Cloud, Web 2.0, Big Data Driving Growth Storage, Cloud, Web 2.0, Big Data Driving Growth Kevin Deierling Vice President of Marketing October 25, 2013 Delivering the Highest ROI Across all Markets HPC Web 2.0 DB/Enterprise Cloud Financial Services

More information

BRIDGING EMC ISILON NAS ON IP TO INFINIBAND NETWORKS WITH MELLANOX SWITCHX

BRIDGING EMC ISILON NAS ON IP TO INFINIBAND NETWORKS WITH MELLANOX SWITCHX White Paper BRIDGING EMC ISILON NAS ON IP TO INFINIBAND NETWORKS WITH Abstract This white paper explains how to configure a Mellanox SwitchX Series switch to bridge the external network of an EMC Isilon

More information

Driving IBM BigInsights Performance Over GPFS Using InfiniBand+RDMA

Driving IBM BigInsights Performance Over GPFS Using InfiniBand+RDMA WHITE PAPER April 2014 Driving IBM BigInsights Performance Over GPFS Using InfiniBand+RDMA Executive Summary...1 Background...2 File Systems Architecture...2 Network Architecture...3 IBM BigInsights...5

More information

WHITE PAPER Data center consolidation: Focus on application and business services: Virtualization anywhere: Fabric convergence:

WHITE PAPER Data center consolidation: Focus on application and business services: Virtualization anywhere: Fabric convergence: WHITE PAPER Scaling-out Ethernet for the Data Center Applying the Scalability, Efficiency, and Fabric Virtualization Capabilities of InfiniBand to Converged Enhanced Ethernet (CEE) The Need for Scalable

More information

Mellanox Academy Course Catalog. Empower your organization with a new world of educational possibilities 2014-2015

Mellanox Academy Course Catalog. Empower your organization with a new world of educational possibilities 2014-2015 Mellanox Academy Course Catalog Empower your organization with a new world of educational possibilities 2014-2015 Mellanox offers a variety of training methods and learning solutions for instructor-led

More information

QLogic 16Gb Gen 5 Fibre Channel in IBM System x Deployments

QLogic 16Gb Gen 5 Fibre Channel in IBM System x Deployments QLogic 16Gb Gen 5 Fibre Channel in IBM System x Deployments Increase Virtualization Density and Eliminate I/O Bottlenecks with QLogic High-Speed Interconnects Key Findings Support for increased workloads,

More information

HUAWEI E9000 Converged Infrastructure Blade Server

HUAWEI E9000 Converged Infrastructure Blade Server HUAWEI E9000 Converged Infrastructure Blade Server Professional Trusted Future-oriented HUAWEI TECHNOLOGIES CO., LTD. The E9000 is a new-generation blade server that integrates computing, storage, switching,

More information

Mellanox Global Professional Services

Mellanox Global Professional Services Mellanox Global Professional Services User Guide Rev. 1.2 www.mellanox.com NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ( PRODUCT(S) ) AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES

More information

Data Center Networking Designing Today s Data Center

Data Center Networking Designing Today s Data Center Data Center Networking Designing Today s Data Center There is nothing more important than our customers. Data Center Networking Designing Today s Data Center Executive Summary Demand for application availability

More information

Appro Supercomputer Solutions Best Practices Appro 2012 Deployment Successes. Anthony Kenisky, VP of North America Sales

Appro Supercomputer Solutions Best Practices Appro 2012 Deployment Successes. Anthony Kenisky, VP of North America Sales Appro Supercomputer Solutions Best Practices Appro 2012 Deployment Successes Anthony Kenisky, VP of North America Sales About Appro Over 20 Years of Experience 1991 2000 OEM Server Manufacturer 2001-2007

More information

The Future of Cloud Networking. Idris T. Vasi

The Future of Cloud Networking. Idris T. Vasi The Future of Cloud Networking Idris T. Vasi Cloud Computing and Cloud Networking What is Cloud Computing? An emerging computing paradigm where data and services reside in massively scalable data centers

More information

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright Converged Networking Solution for Dell M-Series Blades Authors: Reza Koohrangpour Spencer Wheelwright. THIS SOLUTION BRIEF IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information

Cloud Computing and the Internet. Conferenza GARR 2010

Cloud Computing and the Internet. Conferenza GARR 2010 Cloud Computing and the Internet Conferenza GARR 2010 Cloud Computing The current buzzword ;-) Your computing is in the cloud! Provide computing as a utility Similar to Electricity, Water, Phone service,

More information

Cisco SFS 7000D Series InfiniBand Server Switches

Cisco SFS 7000D Series InfiniBand Server Switches Q&A Cisco SFS 7000D Series InfiniBand Server Switches Q. What are the Cisco SFS 7000D Series InfiniBand switches? A. A. The Cisco SFS 7000D Series InfiniBand switches are a new series of high-performance

More information

SUN DUAL PORT 10GBase-T ETHERNET NETWORKING CARDS

SUN DUAL PORT 10GBase-T ETHERNET NETWORKING CARDS SUN DUAL PORT 10GBase-T ETHERNET NETWORKING CARDS ADVANCED PCIE 2.0 10GBASE-T ETHERNET NETWORKING FOR SUN BLADE AND RACK SERVERS KEY FEATURES Low profile adapter and ExpressModule form factors for Oracle

More information

SUN HARDWARE FROM ORACLE: PRICING FOR EDUCATION

SUN HARDWARE FROM ORACLE: PRICING FOR EDUCATION SUN HARDWARE FROM ORACLE: PRICING FOR EDUCATION AFFORDABLE, RELIABLE, AND GREAT PRICES FOR EDUCATION Optimized Sun systems run Oracle and other leading operating and virtualization platforms with greater

More information

Romley/Sandy Bridge Server I/O Solutions By Seamus Crehan Crehan Research, Inc. March 2012

Romley/Sandy Bridge Server I/O Solutions By Seamus Crehan Crehan Research, Inc. March 2012 Romley/Sandy Bridge Server I/O Solutions By Seamus Crehan Crehan Research, Inc. March 2012 White Paper sponsored by Broadcom Corporation Impact of Romley/Sandy Bridge Servers: As we discussed in our first

More information

Overview and Frequently Asked Questions Sun Storage 10GbE FCoE PCIe CNA

Overview and Frequently Asked Questions Sun Storage 10GbE FCoE PCIe CNA Overview and Frequently Asked Questions Sun Storage 10GbE FCoE PCIe CNA Overview Oracle s Fibre Channel over Ethernet (FCoE technology provides an opportunity to reduce data center costs by converging

More information

iscsi Top Ten Top Ten reasons to use Emulex OneConnect iscsi adapters

iscsi Top Ten Top Ten reasons to use Emulex OneConnect iscsi adapters W h i t e p a p e r Top Ten reasons to use Emulex OneConnect iscsi adapters Internet Small Computer System Interface (iscsi) storage has typically been viewed as a good option for small and medium sized

More information