Microsoft RDMA Update




Microsoft RDMA Update
Tom Talpey, File Server Architect, Microsoft
#OFADevWorkshop, March 15-18, 2015

Outline
- Introduction
- Microsoft RDMA use (highlights)
  - SMB3 and SMB Direct
  - Windows Networking
  - Azure
- Software Interfaces
  - Network Direct and Endure
  - Linux Virtualization

SMB3 and SMB Direct

SMB3
- The primary Windows remote file protocol
- Long SMB history, since the 1980s
  - SMB1 since Windows 2000 ("CIFS" before that)
  - SMB 2.0 with Vista and Windows Server 2008; 2.1 in Windows 7/2008 R2
- Now at dialect 3
  - SMB 3.0 with Windows 8/Server 2012; SMB 3.02 in 8.1/WS2012 R2
  - SMB 3.1.1 in Windows 10 and Windows Server Technical Preview
- Supported:
  - Full Windows File API
  - Enterprise applications: Hyper-V Virtual Hard Disks, SQL Server
- New in Windows Server 2012 R2:
  - Hyper-V Live Migration
  - Shared VHDX: Remote Shared Virtual Disk (MS-RSVD)
- New in Windows 10 and Windows Server Technical Preview:
  - SMB3 encryption hardening and additional cipher negotiation
  - Hyper-V Storage QoS
  - Shared VHDX (RSVD) snapshot and backup
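The dialect progression above is settled at connection time: the client offers the dialects it speaks and the server selects the highest one it also supports. The following is a hypothetical sketch of that selection, not Microsoft's code; the wire values follow the published [MS-SMB2] dialect revision numbers.

```python
# Dialect revision numbers as used on the wire ([MS-SMB2]):
# 0x0202 = 2.0.2, 0x0210 = 2.1, 0x0300 = 3.0, 0x0302 = 3.0.2, 0x0311 = 3.1.1
SMB2_DIALECTS = {0x0202: "2.0.2", 0x0210: "2.1", 0x0300: "3.0",
                 0x0302: "3.0.2", 0x0311: "3.1.1"}

def negotiate(client_dialects, server_dialects):
    """Return the highest dialect common to client and server, or None."""
    common = set(client_dialects) & set(server_dialects)
    return max(common) if common else None

# A Windows 10 client (offers up to 3.1.1) talking to a Windows Server
# 2012 R2 server (supports up to 3.0.2) settles on SMB 3.0.2:
chosen = negotiate([0x0202, 0x0210, 0x0300, 0x0302, 0x0311],
                   [0x0202, 0x0210, 0x0300, 0x0302])
print(SMB2_DIALECTS[chosen])  # -> 3.0.2
```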

SMB Direct (RDMA)
- Transport layer protocol adapting SMB3 to RDMA
- Fabric agnostic: iWARP, InfiniBand, RoCE
  - IP addressing
  - IANA registered (smbdirect, port 5445)
- Minimal provider requirements
  - Enables greatest compatibility and future adoption
  - Only Send/Receive/RDMA Write/RDMA Read
  - RC-style; no atomics, no immediate data, etc.
- Supported inbox in Windows Server 2012 and 2012 R2:
  - iWARP (Intel and Chelsio RNICs at 10 and 40GbE)
  - RoCE (Mellanox HCAs at 10 and 40GbE)
  - InfiniBand (Mellanox HCAs at up to FDR 54Gb)

SMB3 Multichannel
- Full throughput
  - Bandwidth aggregation with multiple NICs
  - Multiple CPU cores engaged when the NIC offers Receive Side Scaling (RSS) or Remote Direct Memory Access (RDMA)
- Sample configurations: a single 10GbE RSS-capable NIC; multiple 1GbE NICs; multiple 10GbE NICs in a NIC team; multiple RDMA NICs
- Automatic failover
  - SMB Multichannel implements end-to-end failure detection
  - Leverages NIC teaming if present, but does not require it
- Automatic configuration
  - SMB detects and uses multiple paths
  - Discovers and steps up to RDMA
(Diagrams: SMB client and server pairs connected through 1GbE, 10GbE, teamed, and RDMA-capable switches for each sample configuration.)
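The "steps up to RDMA" behavior can be pictured as a ranking over candidate interfaces. This is an illustrative sketch of that idea, not the Windows implementation: RDMA-capable NICs are preferred over RSS-capable NICs, which are preferred over plain NICs, and the bandwidth of the chosen class is aggregated.

```python
def capability(nic):
    """nic: (name, speed_gbps, rss, rdma) -> capability class rank."""
    name, speed, rss, rdma = nic
    return 2 if rdma else (1 if rss else 0)

def select_paths(nics):
    """Keep the NICs in the best capability class; SMB can aggregate
    the bandwidth of all channels in that class."""
    best = max(capability(n) for n in nics)
    chosen = [n for n in nics if capability(n) == best]
    aggregate_gbps = sum(n[1] for n in chosen)
    return chosen, aggregate_gbps

# Two 10GbE RDMA NICs are preferred over a 10GbE RSS NIC, and both
# RDMA links are used together for ~20 Gbps of aggregate bandwidth.
chosen, agg = select_paths([("r1", 10, True, True),
                            ("r2", 10, True, True),
                            ("t1", 10, True, False)])
```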

SMB Direct Advantages
- Scalable, fast and efficient storage access
  - High throughput with low latency
  - Minimal CPU utilization for I/O processing
  - Load balancing, automatic failover and bandwidth aggregation via SMB Multichannel
- Scenario: high-performance remote file access for application servers such as virtualization and databases
- Required hardware: an RDMA-capable network interface (R-NIC)
  - Three types: iWARP, RoCE and InfiniBand
(Diagram: an application over the SMB client in user mode, crossing a network with RDMA support to the SMB server, local file system and disk in kernel mode.)

SMB Workload Performance
- Key file I/O workloads
  - Small/random: 4KB/8KB, IOPS sensitive
  - Large/sequential: 512KB, bandwidth sensitive
  - Smaller, and larger (up to 8MB), are also relevant
- Multichannel achieves higher scaling
  - Bandwidth and IOPS from additional network interfaces
  - Affinity and parallelism in endnodes
- Unbuffered I/O allows zero-touch to the user buffer
- Strict memory register/invalidate per I/O
  - Key to enterprise application integrity
  - Performance maintained with remote invalidate and careful local behaviors

Other SMB Uses
- SMB as a transport provides transparent use of RDMA
- Example: Virtual Machine Live Migration
  - Peer-to-peer memory image transfer
  - Bulk data transfer at ultimate throughput: memory-to-memory, unconstrained by disk physics
  - Very low CPU with RDMA, leaving more CPU to support live migration
  - Minimal VM operational blackout

Performance: IOPS and Bandwidth
Single client, single server, 3x InfiniBand FDR multichannel connection, to storage and to RAM.

Workload                          | IOPS
8KB reads, mirrored space (disk)  | ~600,000
8KB reads, from cache (RAM)       | ~1,000,000
32KB reads, mirrored space (disk) | ~500,000

Throughput: >16 GBytes/second. Larger I/Os (>32KB) give similar results, i.e. larger I/O is not needed to achieve full performance.
(Diagram: six SAS HBAs, each attached to a SAS JBOD. Link to the full demo in the Resources slide below.)
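The headline numbers are consistent with the fabric: a quick back-of-the-envelope check, taking ~54 Gb/s as the usable data rate of an FDR 4x link as the slide states.

```python
# Three FDR links at ~54 Gb/s each give ~20 GB/s of aggregate raw
# capacity, so >16 GB/s of SMB payload is roughly 80% of the wire.
FDR_GBPS = 54.0                             # usable data rate per FDR link
links = 3
aggregate_gbytes = links * FDR_GBPS / 8     # 20.25 GB/s
efficiency = 16 / aggregate_gbytes          # ~0.79

# The cached-read result is similarly network-bound:
# 1,000,000 IOPS x 8 KB is ~8.2 GB/s of payload.
cached_gbytes = 1_000_000 * 8 * 1024 / 1e9
```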

Performance: SMB3 Storage with RDMA on 40GbE iWARP (preliminary)

8KB read IOPS* and latency (ms), single client, 8 file servers, incast:

QD R/Thread | Read IOPS | Latency 50th (ms) | Latency 99th (ms)
1           | 163,800   | 0.056             | 0.249
2           | 291,000   | 0.094             | 0.270
4           | 440,900   | 0.129             | 0.397
8           | 492,400   | 0.195             | 1.391
16          | 510,200   | 0.363             | 2.810

Near line-rate with small I/Os: 33.4 Gbps at 8KB I/O, with excellent latency variance.

Large I/O (512KB) client-to-server throughput, in Gbps:

Threads | I/Os per thread | Read BW | Write BW
1       | 1               | 17.82   | 16.14
1       | 2               | 29.74   | 23.13
2       | 2               | 37.21   | 30.61

Near line-rate with large I/Os: 37.2 Gbps with 2 threads.
Excellent log write performance: 7.2M read IOPS* and 3.3M write IOPS* at 512 bytes, single outstanding I/O.

*I/Os are not written to non-volatile storage. iWARP was used for these results; test configuration details are in the backup slides.
(Charts: latency distributions for 8KB read incast and 512KB reads; chart data omitted.)
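The small-I/O claim cross-checks against the IOPS table: 510,200 reads/s of 8 KB each is indeed ~33.4 Gbps of payload, i.e. the 40GbE link is nearly saturated by small I/Os alone.

```python
def payload_gbps(iops, io_bytes):
    """Payload bandwidth implied by an IOPS rate, ignoring headers."""
    return iops * io_bytes * 8 / 1e9

# Top row of the table vs. the peak: ~10.7 Gbps at QD 1, ~33.4 at QD 16.
assert round(payload_gbps(163_800, 8 * 1024), 1) == 10.7
assert round(payload_gbps(510_200, 8 * 1024), 1) == 33.4
```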

Performance: VM Live Migration on 2x10GbE RoCE
- TCP for Live Migration: CPU core-limited, long migration time
- TCP with compression for Live Migration: even more CPU-intensive, reduced migration time
- SMB Direct for Live Migration: minimal CPU, 10x better migration time, greatly increased VM live availability
(Chart: live migration times in seconds over 2x10GbE RoCE for TCP/IP, compression, and SMB+RDMA.)
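A toy model shows why the transport matters here. All numbers below are assumptions for illustration (the slide gives no VM size or per-core TCP rate): a VM memory image moved over a 2x10GbE pipe, where a CPU-limited TCP sender falls well short of line rate and RDMA runs near it.

```python
VM_GB = 56                # assumed VM memory image size
LINK_GBPS = 2 * 10        # 2x10GbE pipe

def migrate_seconds(effective_gbps):
    """Lower-bound transfer time for the memory image at a given rate."""
    return VM_GB * 8 / effective_gbps

tcp_time = migrate_seconds(7)    # assumed: one core saturates near ~7 Gbps
rdma_time = migrate_seconds(18)  # assumed: RDMA runs near line rate

# ~64s vs ~25s in this toy model; the measured gap on the slide (10x)
# is larger because TCP and compression also burn CPU the VM needs.
```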

Windows Networking

Converged Fabric Overview
- Converged fabric supporting virtualized tenant traffic and RDMA-enabled disaggregated storage traffic, with quality-of-service guarantees
- Integrated with the SDN vSwitch
- Teaming of NICs supported
- http://channel9.msdn.com/events/teched/europe/2014/cdp-b248 (discussion and demo begins at minute 40:00)

Azure


Summary
- Scenario: bring your own virtual network to the cloud, per customer, with capabilities equivalent to its on-premises counterpart
- Challenge: how do we scale virtual networks across millions of servers?
- Solution: host SDN solves it, with scale, flexibility, timely feature rollout, and debuggability
  - Virtual networks, software load balancing, ...
- How: scaling flow processing to millions of nodes
  - Flow tables on the host, with on-demand rule dissemination
  - RDMA to storage
- Demo: ExpressRoute to the cloud (Bing it!)

Flow Tables
- The VMSwitch exposes a typed Match-Action-Table API to the controller, one table per policy
- Key insight: let the controller tell the switch exactly what to do with which packets (e.g. encap/decap), rather than trying to use existing abstractions (tunnels, ...)
- The controller pushes VNet descriptions, VNet routing policy, NAT endpoints, tenant descriptions and ACLs down to per-NIC flow tables in the Azure VMSwitch

Example tables for a node at 10.4.1.5 hosting Blue VM1 (10.1.1.2):
- VNET table: TO 10.2/16 -> encap to GW; TO 10.1.1.5 -> encap to 10.5.1.7; TO !10/8 -> NAT out of VNET
- NAT table: TO 79.3.1.2 -> DNAT to 10.1.1.2; TO !10/8 -> SNAT to 79.3.1.2
- ACL table: TO 10.1.1/24 -> allow; 10.4/16 -> block; TO !10/8 -> allow
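A minimal sketch of the Match-Action-Table idea, in the spirit of the VMSwitch model: one table per policy, first match wins, and the controller installs exact actions rather than configuring tunnels. The prefixes and action strings mirror the VNET example table on the slide; everything else is illustrative.

```python
import ipaddress

class FlowTable:
    """One policy's match-action table; first matching rule wins."""
    def __init__(self, rules):
        # rules: list of (negate, prefix, action); negate models "!10/8"
        self.rules = [(neg, ipaddress.ip_network(p), act)
                      for neg, p, act in rules]

    def lookup(self, dst):
        ip = ipaddress.ip_address(dst)
        for negate, prefix, action in self.rules:
            if (ip in prefix) != negate:
                return action
        return None   # no rule matched

vnet = FlowTable([
    (False, "10.2.0.0/16", "encap to GW"),
    (False, "10.1.1.5/32", "encap to 10.5.1.7"),
    (True,  "10.0.0.0/8",  "NAT out of VNET"),   # matches "!10/8"
])

print(vnet.lookup("10.2.3.4"))   # -> encap to GW
print(vnet.lookup("79.3.1.2"))   # -> NAT out of VNET
```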

A complete virtual network needs storage as well as compute! How do we make Azure Storage scale?

Azure Storage is Software Defined, Too
- We want to make storage clusters scale cheaply on commodity servers
- Erasure coding provides the durability of 3-copy writes
  - Commits with small (<1.5x) overhead by distributing coded blocks over many servers
  - Requires amplified network I/O for each erasure-coded storage write
- To make storage cheaper, we use lots more network!
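The cost argument is simple arithmetic: with k data fragments and m parity fragments, storage overhead is (k+m)/k. The 12+4 layout below is purely illustrative (the slide does not give Azure's actual parameters); it tolerates any 4 fragment losses at ~1.33x overhead, versus 3x for triple replication.

```python
def overhead(k, m):
    """Storage overhead of k data fragments plus m parity fragments."""
    return (k + m) / k

replication = overhead(1, 2)   # 3 copies of each block -> 3.0x
coded = overhead(12, 4)        # illustrative layout -> ~1.33x, under 1.5x

# The price: a write that touched 1 server now fans out to k+m servers,
# which is the "amplified network I/O" the slide mentions.
```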

RDMA: High Performance Transport for Storage
- Remote DMA primitives (e.g. Read address, Write address) implemented on-NIC: an RDMA Write of the local buffer at address A fills the remote buffer at address B
- Zero copy: the NIC handles all transfers via DMA
- Zero CPU utilization at 40Gbps: the NIC handles all packetization
- <2μs end-to-end latency
- RoCEv2 enables the InfiniBand RDMA transport over an IP/Ethernet network (all L3)
- Enabled at 40GbE for Windows Azure Storage, achieving massive COGS savings by eliminating many CPUs in the rack
- All the logic is in the host: Software Defined Storage now scales with the Software Defined Network
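A rough estimate of the CPU that zero-copy avoids. The inputs are round-number assumptions, not measurements from the deck: suppose a software TCP stack touches each byte about twice (copy plus checksum) and a core can move ~5 GB/s doing that work.

```python
LINE_GBPS = 40
payload_bytes_per_sec = LINE_GBPS / 8 * 1e9   # 5e9 B/s at line rate
touches_per_byte = 2                          # assumed: copy + checksum
per_core_bytes_per_sec = 5e9                  # assumed memory-touch rate

# Cores kept busy just moving bytes for a 40Gbps TCP flow:
cores_busy = payload_bytes_per_sec * touches_per_byte / per_core_bytes_per_sec

# With RDMA the NIC DMAs directly between application buffers, so those
# cores return to the application - the "zero CPU at 40Gbps" claim above.
```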

Software Interfaces

Azure RDMA Software
- Network Direct: high-performance networking abstraction layer for IHVs
- Endure: Enlightened RDMA on Azure
- Linux Guest RDMA

What is Network Direct?
- Microsoft-defined interfaces for RDMA
- Transparent support of IB, iWARP, and RoCE
- IP-addressing based

Why Network Direct?
- Designed for Windows
- Higher level of abstraction; fabric agnostic
- Stable ABI
  - No callbacks
  - Dynamic provider discovery
  - Extensible
- Easier to understand, easier to develop for

What is Endure?
- Virtualization layer for RDMA
  - Enables full bypass, with zero overhead to the VM
  - Targets Azure-supported RDMA fabric(s)
- In production on Azure
  - Scalable to 1000s of cores
  - Now supporting Intel MPI
- Future support
  - Linux guest
  - Under development

Enlightened Network Direct on Azure
(Diagram: the host operating system and guest virtual machines each run applications over IHV drivers in user mode; in kernel mode, a VSP in the host and VSCs in the guests communicate over VMBus and shared memory, above the hypervisor, processors, memory and RDMA hardware.)

Linux Guest RDMA on Azure
- Based on Network Direct technology
- All control-plane operations run over VMBus
  - Connection establishment/teardown
  - Creation/management of all resources: PD, QP, CQ, etc.
- Data-plane operations bypass the guest and host kernels
  - The RDMA NIC operates directly on pinned application memory
- A new Linux driver bridges the gap between the Linux programming model and the Windows programming model
  - No change to the Linux user-level environment

Network Direct Kernel Programming Interface (NDKPI)
- Derived from Network Direct
- Specifically for kernel consumers
- Used by SMB Direct
- Versions
  - NDKPI 1.1: Windows Server 2012
  - NDKPI 1.2: Windows Server 2012 R2
  - NDKPI 2.0: under development
- NDKPI Reference (see resources, below)

Thank You #OFADevWorkshop

Backup

SMB implementers (alphabetical order)
Apple:
- MacOS X 10.2 Jaguar: CIFS/SMB 1.x (via Samba)
- MacOS X 10.7 Lion: SMB 1.x (via Apple's SMBX)
- MacOS X 10.9 Mavericks: SMB 2.1 (default file protocol)
- MacOS X 10.10 Yosemite: SMB 3.0 (default file protocol)
EMC:
- Older versions: CIFS/SMB 1.x
- VNX: SMB 3.0
- Isilon OneFS 6.5: SMB 2
- Isilon OneFS 7.0: SMB 2.1
- Isilon OneFS 7.1.1: SMB 3.0
Microsoft:
- Microsoft LAN Manager: SMB
- Windows NT 4.0: CIFS
- Windows 2000, Server 2003 or Windows XP: SMB 1.x
- Windows Server 2008 or Windows Vista: SMB 2
- Windows Server 2008 R2 or Windows 7: SMB 2.1
- Windows Server 2012 or Windows 8: SMB 3.0
- Windows Server 2012 R2 or Windows 8.1: SMB 3.0.2
- Windows Technical Preview: SMB 3.1.1
NetApp:
- Older versions: CIFS/SMB 1.x
- Data ONTAP 7.3.1: SMB 2
- Data ONTAP 8.1: SMB 2.1
- Data ONTAP 8.2: SMB 3.0
Samba (Linux or others):
- Older versions: CIFS/SMB 1.x
- Samba 3.6: SMB 2 (some SMB 2.1)
- Samba 4.1: SMB 3.0
And many others. SMB is the most widely implemented remote file protocol in the world, available in nearly every NAS and file server; see the SDC participants on the Plugfest slide below.
Information on this slide was gathered from publicly available information as of February 2015. Please contact the implementers directly for accurate, up-to-date information on their SMB implementations.

SMB 3.0.2
- Asymmetric Scale-Out File Server clusters
  - SMB share ownership that can move within the file server cluster
  - Witness protocol enhanced to allow moving a client per SMB share
  - In Windows, SMB clients automatically rebalance
- SMB Direct remote invalidation
  - Avoids explicit invalidation operations, improving RDMA performance
  - Especially important for workloads with a high rate of small I/Os
- Unbuffered read/write operations
  - Per-request flags for read/write operations
- Remote Shared Virtual Disk Protocol
  - New protocol defining block semantics for shared virtual disk files
  - Implements SCSI over SMB (the SMB protocol is used as a transport)

SMB 3.1.1 (future)
- Under development as of this presentation
  - More details in SNIA SDC 2014 presentations and Microsoft protocol document previews
- Features include:
  - Extensible negotiation
  - Preauthentication integrity: increased man-in-the-middle protection
  - Encryption improvements: faster AES-128-GCM
  - Cluster improvements: dialect rolling upgrade, Cluster Client Failover v2
- Related new and enhanced protocols in preview:
  - Storage Quality of Service (MS-SQOS)
  - Shared VHDX v2, supporting virtual disk snapshots (MS-RSVD)

Links to protocol documentation
- [MS-CIFS] Common Internet File System (CIFS) Protocol Specification: Specifies the CIFS Protocol, a cross-platform, transport-independent protocol that provides a mechanism for client systems to use file and print services made available by server systems over a network.
- [MS-SMB] Server Message Block (SMB) Protocol Specification: Specifies the SMB Protocol, which defines extensions to the existing CIFS specification that have been implemented by Microsoft since the publication of the [CIFS] specification.
- [MS-SMB2] Server Message Block (SMB) Protocol Versions 2 and 3 Specification: Specifies SMB Protocol Versions 2 and 3, which support the sharing of file and print resources between machines and extend the concepts from the Server Message Block Protocol.
- [MS-SMBD] SMB Remote Direct Memory Access (RDMA) Transport Protocol Specification: Specifies a wrapper for the existing SMB protocol that allows SMB packets to be delivered over RDMA-capable transports such as iWARP or InfiniBand while utilizing the direct data placement (DDP) capabilities of these transports. Benefits include reduced CPU overhead, lower latency, and improved throughput.
- [MS-SWN] Service Witness Protocol Specification: Specifies the Service Witness Protocol, which enables an SMB clustered file server to notify SMB clients with prompt and explicit notifications about the failure or recovery of a network name and associated services.
- [MS-FSRVP] File Server Remote VSS Provider Protocol Specification: Specifies an RPC-based protocol used for creating shadow copies of file shares on a remote computer, and for facilitating backup applications in performing application-consistent backup and restore of data on SMB shares.
- [MS-RSVD] Remote Shared Virtual Disk Protocol: Specifies a protocol that supports accessing and manipulating virtual disks stored as files on an SMB3 file server. It enables opening, querying, administering, reserving, reading, and writing virtual disk objects, providing flexible access by single or multiple consumers, and provides for forwarding of SCSI operations to be processed by the virtual disk.
Note: These protocols are published by Microsoft and available to anyone to implement on non-Windows platforms. http://www.microsoft.com/openspecifications/

SNIA SMB2/SMB3 Plugfest
- The SMB/SMB2/SMB3 Plugfest happens every year side-by-side with the Storage Developer Conference (SNIA SDC) in September
- An intense week of interaction across operating systems and SMB implementations
- Agenda, calendar and past content at: http://www.snia.org/events/storage-developer
(Photo: participants in the 2014 edition of the SNIA SMB2/SMB3 Plugfest, Santa Clara, CA, September 2014.)

SMB3 / SMB Direct / RDMA Resources
- Microsoft protocol documentation: http://www.microsoft.com/openspecifications
- SNIA education, SMB3 tutorial: http://www.snia.org/education/tutorials/fast2015
- SNIA SDC 2014: http://www.snia.org/events/storagedeveloper/presentations14#smb

More SMB3 Resources
- Jose Barreto's blog: http://smb3.info/
- The Rosetta Stone: Updated Links on Windows Server 2012 R2 File Server and SMB 3.02: http://blogs.technet.com/b/josebda/archive/2014/03/30/updated-links-on-windows-server-2012-r2-file-server-and-smb-3-0.aspx
- Performance demo: http://blogs.technet.com/b/josebda/archive/2014/03/09/smb-direct-and-rdma-performance-demo-from-teched-includes-summary-powershell-scripts-and-links.aspx
- Microsoft TechNet, Improve Performance of a File Server with SMB Direct: http://technet.microsoft.com/en-us/library/jj134210.aspx
- Windows kernel RDMA interface, NDKPI Reference (provider): http://msdn.microsoft.com/en-us/library/windows/hardware/jj206456.aspx

Performance: SMB3 with RDMA on 40GbE - Configuration
- Arista 7050QX-32 switches; interconnect constrained to a single 40GbE inter-switch link
- Chelsio T580 2-port 40GbE iWARP adapters, single port connected
- Server chassis: 2x Intel Xeon processor E5-2660 (2.20 GHz)
- I/O not written to non-volatile storage
- Client systems and server systems each attached at 1x 40GbE per node