Mellanox Cloud and Database Acceleration Solution over Windows Server 2012 SMB Direct



Transcription:

Mellanox Cloud and Database Acceleration Solution over Windows Server 2012 SMB Direct
Increased Performance, Scaling and Resiliency
July 2012
Motti Beck, Director, Enterprise Market Development, Motti@mellanox.com

Microsoft Windows Server 2012 and SMB Direct
- New class of enterprise file storage
- Low latency, high throughput, low CPU overhead (50% lower versus Ethernet)
- Fibre Channel replacement at a lower cost and higher performance
- Leverages Windows Server 2012 integrated Mellanox Ethernet and InfiniBand RDMA support
- Accelerates Microsoft Hyper-V and SQL Server based solutions
- 10X performance improvement over 10GbE* (*preliminary results based on Windows Server 2012 RC)
(Diagram: file client and file server stacks — application, user/kernel split, SMB client and SMB server over RDMA-capable networks, NTFS/SCSI, RDMA network adapters, disk.)

What is RDMA?
Remote Direct Memory Access protocol: an accelerated delivery model that allows application software to bypass most software layers and communicate directly with the hardware.
RDMA benefits:
- Low latency
- High throughput
- Zero-copy capability
- OS / stack bypass
Mellanox RDMA-based interconnects:
- InfiniBand
- RoCE: RDMA over Converged Ethernet
(Diagram: client and file server memory connected through SMB Direct and NDKPI over Ethernet or InfiniBand.)

SMB Direct over InfiniBand and RoCE
1. The application (Hyper-V, SQL Server) does not need to change; the API is unchanged.
2. The SMB client makes the decision to use SMB Direct at run time.
3. NDKPI provides a much thinner layer than TCP/IP.
4. Remote Direct Memory Access is performed by the network interfaces (RoCE and/or InfiniBand).
(Diagram: user and kernel stacks on the SMB client and file server, with memory-to-memory transfers over SMB Direct and NDKPI.)
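Step 2 above is essentially a transport-selection-with-fallback decision made below the application. The sketch below is a minimal Python illustration of that pattern only; the ServerInterface class and choose_transport function are invented for this example, and the real SMB 3.0 client learns RDMA capability from the interface list the server reports during multichannel negotiation.

    from dataclasses import dataclass

    @dataclass
    class ServerInterface:
        address: str
        rdma_capable: bool   # in SMB 3.0 this capability is reported by the server

    def choose_transport(interfaces):
        """Prefer an RDMA-capable interface (SMB Direct); otherwise stay on TCP/IP."""
        for nic in interfaces:
            if nic.rdma_capable:
                return ("SMB Direct / RDMA", nic.address)
        return ("TCP/IP", interfaces[0].address)

    if __name__ == "__main__":
        reported = [ServerInterface("192.168.1.10", rdma_capable=False),
                    ServerInterface("192.168.2.10", rdma_capable=True)]
        print(choose_transport(reported))   # -> ('SMB Direct / RDMA', '192.168.2.10')

The point the slide makes is that this choice is invisible to the application: Hyper-V or SQL Server issues ordinary file I/O either way.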

Mellanox End-to-End VPI Solution
- Mellanox provides end-to-end InfiniBand and Ethernet connectivity solutions (adapters, switches, cables), connecting data center servers and storage
- Up to 56Gb/s InfiniBand and 40Gb/s Ethernet per port
- Low latency, low CPU overhead; InfiniBand-to-Ethernet gateways for seamless operation
- Windows Server 2012 exposes the great value of the Mellanox interconnect solution for storage traffic, virtualization and low latency
- InfiniBand and Ethernet (with RoCE) integration
- Highest efficiency, performance and return on investment

Measuring SMB Direct Performance
(Diagram: a single-server micro-benchmark baseline compared with SMB client/server micro-benchmark pairs connected over 10GbE, InfiniBand QDR and InfiniBand FDR.)

Microsoft Delivers a Low-Cost Replacement for High-End Storage
FDR 56Gb/s InfiniBand delivers 5X higher throughput with 50% less CPU overhead vs. 10GbE.
(Chart: native SMB throughput performance over FDR InfiniBand.)
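The 5X figure is consistent with simple link-rate arithmetic. The Python sketch below is a back-of-the-envelope check assuming standard link parameters (FDR signaling at 4 x 14.0625 Gb/s with 64/66b encoding; 10GbE's nominal 10 Gb/s already being its post-encoding data rate) and ignoring protocol overhead above the link layer.

    # Rough usable-bandwidth comparison: one FDR InfiniBand port vs. one 10GbE port
    fdr_data_gbps = 4 * 14.0625 * 64 / 66     # ~54.5 Gb/s of usable bits per FDR port
    ten_gbe_data_gbps = 10.0                  # usable bits per 10GbE port

    print(f"FDR usable:   {fdr_data_gbps / 8:.2f} GB/s")              # ~6.8 GB/s
    print(f"10GbE usable: {ten_gbe_data_gbps / 8:.2f} GB/s")          # 1.25 GB/s
    print(f"ratio:        {fdr_data_gbps / ten_gbe_data_gbps:.1f}x")  # ~5.5x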

Measuring SMB Direct Performance in a Virtualized Environment
(Diagram: three configurations — SQL Server on a single local server; SQL Server on an SMB 3.0 file client accessing an SMB 3.0 file server; SQL Server in a VM under Hyper-V accessing an SMB 3.0 file server.)

SMB Direct Performance in a Virtualized Environment

Configuration | BW (MB/sec) | IOPS, 512KB IOs (IOs/sec) | %CPU Privileged | Latency
Local         | 10,090      | 38,492                    | ~2.5%           | ~3 ms
Remote        |  9,852      | 37,584                    | ~5.1%           | ~3 ms
Remote VM     | 10,367      | 39,548                    | ~4.6%           | ~3 ms
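Read against the local baseline, the table supports the claim later in the deck that a remote file server gives up essentially nothing: remote SMB Direct reaches about 98% of local bandwidth and the VM case slightly exceeds it, with the main cost showing up as roughly 2x the privileged CPU time. The short Python sketch below simply recomputes those ratios from the table values.

    # Throughput relative to the local baseline, and MB/s delivered per % of
    # privileged CPU, using the numbers from the table above.
    results = {                 # configuration: (MB/s, % privileged CPU)
        "Local":     (10_090, 2.5),
        "Remote":    ( 9_852, 5.1),
        "Remote VM": (10_367, 4.6),
    }
    local_bw = results["Local"][0]
    for name, (bw, cpu) in results.items():
        print(f"{name:9s}  {bw / local_bw:6.1%} of local  {bw / cpu:6.0f} MB/s per %CPU")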

Microsoft's Cluster in a Box (CiB) Reference Design
- At least one node and its storage always available, despite failure or replacement of any component
- Dual power domains
- Internal interconnect between nodes and controllers
- Flexible PCIe slot for LAN options
- External ports for expansion
- Office-level power and acoustics for entry-level NAS
- Mellanox VPI interconnect solutions: 10GbE, RoCE or InfiniBand
(Diagram: two servers, A and B, in a single enclosure; each CPU connects over x8 PCIe to a network adapter and a storage controller, the storage controllers attach to expanders with A/B ports and x4 links through the midplane, external ports allow expansion, and a 1/10G Ethernet cluster connect links the two nodes.)

Products Announced: X-IO Storage
- More than 15GB/sec throughput demonstrated
- Remote SMB storage systems
- Windows Server 2012 with SMB 3.0
- PCI Express 3.0 based servers (HP DL380p G8)
- Mellanox 56Gb/s FDR InfiniBand adapters and switches

Products Announced: Supermicro
- More than 10GB/sec throughput demonstrated under Hyper-V
- Hyper-V with Windows Server 2012 RC
- Supermicro PCIe 3.0 based servers
- File server with LSI MegaRAID 9285 storage controllers and LSI FastPath I/O acceleration software
- OCZ Talos 2R SSDs

Summary
- Together with Microsoft, we deliver 10X performance acceleration for remote file servers in physical or virtual environments, boosting next-generation cloud and database applications
- For the first time, we demonstrate record performance for a remote file server under Hyper-V: two FDR ports delivering more than 10GB/sec with less than 5% CPU overhead
- Mellanox interconnect solutions integrated with SMB Direct in Windows Server 2012 deliver the most cost-effective file server solution, replacing Fibre Channel and TCP/IP Ethernet
- With the integrated Microsoft and Mellanox solution, a remote file server delivers performance on par with native storage
- Customers have already announced products using Mellanox interconnects, demonstrating extreme performance
Boost the file server to block-storage performance levels

THANK YOU