OFA Training Program. Writing Application Programs for RDMA using OFA Software. Author: Rupert Dance Date: 11/15/
2 Agenda
- OFA Training Program goals
- Instructors
- Programming course format
- Course requirements & syllabus
- UNH-IOL facilities & cluster equipment
- RDMA benefits
- Programming course examples
- Future courses
- Course availability
3 OFA Training Program - Overall Goals
- Provide application developers with classroom instruction and hands-on experience writing, compiling and executing an application using the OFED verbs API
- Illustrate how RDMA programming differs from sockets programming and provide the rationale for using RDMA
- Focus on the OFED API, RDMA concepts and common design patterns
- Opportunity to develop applications on the OFA cluster at the University of New Hampshire, which includes the latest hardware from Chelsio, DDN, Intel, Mellanox, NetApp & QLogic
4 Instructors
Dr. Robert D. Russell: Professor in the CS Department at UNH. Dr. Russell has been an esteemed member of the University of New Hampshire faculty for more than 30 years and has worked with the InterOperability Laboratory's iSCSI Consortium, iWARP Consortium and the OpenFabrics Interoperability Logo Program.
Paul Grun: Chief Scientist for System Fabric Works. Paul has worked for more than 30 years on server I/O architecture and design, ranging from large-scale disk storage subsystems to high-performance networks. He served as chair of the IBTA's Technical Working Group, contributed to many IBTA specifications and chaired the working group responsible for creating the RoCE specification.
Rupert Dance: Co-Chair of the OFA Interoperability Working Group. Rupert helped to form and has led both the IBTA Compliance and Interoperability and OFA Interoperability programs since their inception. His company, Software Forge, worked with the OFA to create and provide the OFA Training Program.
5 Programming Course Format
Part One - Introduction to RDMA
- I/O architecture and RDMA architecture
- Address translation and network operations
- Verbs introduction and the OFED stack
- Introduction to wire protocols
Part Two - Programming with RDMA
- Hardware resources: HCAs, RNICs, etc.
- Protection Domains and Memory Registration keys
- Connection Management
- Explicit queue manipulation
- Explicit event handling
- Explicit asynchronous operation
- Explicit manipulation of system data structures
6 Programming Course Requirements
Required:
- Knowledge of C programming, including concepts such as structures, memory management, pointers, threads and asynchronous programming
- Knowledge of Linux, since this course does not include Windows programming
Helpful:
- Knowledge of event handlers
- Knowledge of sockets or network programming
- Familiarity with storage protocols
7 Programming Course Syllabus
Introduction to OFA architecture
- Verbs and the verbs API
- A network perspective
- RDMA operations: SEND/RECEIVE, RDMA READ & WRITE
- RDMA services
- Isolation and protection mechanisms
- A brief overview of InfiniBand management
A quick introduction to the OFED stack
- Host perspective
- Asynchronous processing
- Channel vs. RDMA semantics
Basic data structures
- Connection Manager IDs
- Connection Manager events
- Queue Pairs
- Completion Queues
- Completion Channels
- Protection Domains
- Memory Registration Keys
- Work Requests
- Work Completions
Connection management basics
- Establishing connections using RDMACM
- RDMACM API
Basic RDMA programming
- Memory registration
- Object creation
- Posting requests
- Polling
- Waiting for completions using events (see the sketch after this list)
- Common practices for implementing blocking wait
Design patterns
- Send-receive
- RDMA cyclic buffers
- Rendezvous
Advanced topics
- Work Request chaining
- Multicast
- Unsignaled completions
RDMA ecosystems
- Native InfiniBand
- iWARP
- RoCE
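The syllabus items on waiting for completions using events and the common blocking-wait practice correspond to a standard verbs idiom. The following is a minimal sketch for illustration only (it is not course material), assuming a completion queue cq that was created over a completion channel ch; error handling is abbreviated.

```c
/* Hedged sketch: block until a work completion arrives, using a
 * completion channel.  Assumes cq was created over channel ch. */
#include <infiniband/verbs.h>

static int wait_for_completion(struct ibv_comp_channel *ch,
                               struct ibv_cq *cq,
                               struct ibv_wc *wc)
{
    struct ibv_cq *ev_cq;
    void *ev_ctx;
    int n;

    /* Ask the provider to signal the channel on the next completion. */
    if (ibv_req_notify_cq(cq, 0))
        return -1;

    /* Block until the channel reports an event, then acknowledge it. */
    if (ibv_get_cq_event(ch, &ev_cq, &ev_ctx))
        return -1;
    ibv_ack_cq_events(ev_cq, 1);

    /* Re-arm before polling so no completion is missed, then drain one. */
    if (ibv_req_notify_cq(ev_cq, 0))
        return -1;
    do {
        n = ibv_poll_cq(ev_cq, 1, wc);
    } while (n == 0);

    return n < 0 ? -1 : 0;
}
```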
8 UNH Interoperability Lab
9 Linux and Windows OFA Clusters at UNH-IOL. Thanks to AMD, Intel and OFA for the addition of 16 new nodes in October.
10 OFA Software Benefits
Remote Direct Memory Access provides:
- Low latency through stack bypass and copy avoidance
- Kernel bypass, which reduces CPU utilization
- Reduced memory bandwidth bottlenecks
- High bandwidth utilization
- Cross-platform support: InfiniBand, iWARP, RoCE
11 Conventional I/O versus RDMA I/O
- Conventional I/O: the OS is involved in all operations
- RDMA I/O: the channel interface runs in user space; no need to access the kernel
12 Address Translation: Sockets Based vs. Channel Based
13 Many apps, one interface, three wires
14 OFED - the whole picture
[Diagram of the full OFED stack. User space: IP-based app access, sockets-based access (SDP Lib), uDAPL, various MPIs, block storage access, clustered DB access and access to file systems, all over the user-level verbs & CMA API, with kernel bypass to the hardware; diag tools, OpenSM and the user-level MAD API sit alongside. Kernel space: upper layer protocols IPoIB, SDP, SRP, iSER, RDS, NFS-RDMA RPC, cluster file systems and VNIC over the mid-layer verbs API, Connection Manager, Connection Manager Abstraction (CMA), SA client, MAD, SMA and PMA; hardware-specific drivers for InfiniBand HCAs and iWARP R-NICs.]
Key: SA = Subnet Administrator, MAD = Management Datagram, SMA = Subnet Manager Agent, PMA = Performance Manager Agent, IPoIB = IP over InfiniBand, SDP = Sockets Direct Protocol, SRP = SCSI RDMA Protocol (initiator), iSER = iSCSI RDMA Protocol (initiator), RDS = Reliable Datagram Service, VNIC = Virtual NIC, uDAPL = User Direct Access Programming Lib, HCA = Host Channel Adapter, R-NIC = RDMA NIC.
15 Programming Course Sample
- Description of the verbs
- Description of the data structures
- Preparation for posting a send operation
- Creating the work request
- Gathering data from memory
- Putting gathered elements on the wire
- Multicast concept
- The Big Picture
16 Programming Course - OFED Verbs
17 Programming Course Data Structures
18 Bottom-up client setup phase (a sketch follows the list)
- rdma_create_id() - create struct rdma_cm_id identifier
- rdma_resolve_addr() - bind struct rdma_cm_id to local device
- rdma_resolve_route() - resolve route to remote server
- ibv_alloc_pd() - create struct ibv_pd protection domain
- ibv_create_cq() - create struct ibv_cq completion queue
- rdma_create_qp() - create struct ibv_qp queue pair
- ibv_reg_mr() - create struct ibv_mr memory region
- rdma_connect() - create connection to remote server
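As a hedged illustration of this sequence (not code from the slides), the sketch below strings the calls together using librdmacm in synchronous mode (NULL event channel); the destination address, buffer, queue depths, timeouts and error handling are placeholder assumptions.

```c
/* Hedged sketch of the bottom-up client setup sequence above. */
#include <string.h>
#include <rdma/rdma_cma.h>

static char buf[4096];                          /* placeholder data buffer */

static int client_setup(struct sockaddr *dst, struct rdma_cm_id **out_id)
{
    struct rdma_cm_id *id;
    struct ibv_pd *pd;
    struct ibv_cq *cq;
    struct ibv_mr *mr;
    struct ibv_qp_init_attr qp_attr;
    struct rdma_conn_param conn;

    if (rdma_create_id(NULL, &id, NULL, RDMA_PS_TCP))       /* identifier        */
        return -1;
    if (rdma_resolve_addr(id, NULL, dst, 2000) ||            /* bind to device    */
        rdma_resolve_route(id, 2000))                        /* route to server   */
        return -1;

    pd = ibv_alloc_pd(id->verbs);                            /* protection domain */
    cq = ibv_create_cq(id->verbs, 16, NULL, NULL, 0);        /* completion queue  */
    if (!pd || !cq)
        return -1;

    memset(&qp_attr, 0, sizeof qp_attr);
    qp_attr.send_cq = qp_attr.recv_cq = cq;
    qp_attr.qp_type = IBV_QPT_RC;
    qp_attr.cap.max_send_wr = qp_attr.cap.max_recv_wr = 16;
    qp_attr.cap.max_send_sge = qp_attr.cap.max_recv_sge = 1;
    if (rdma_create_qp(id, pd, &qp_attr))                    /* queue pair        */
        return -1;

    mr = ibv_reg_mr(pd, buf, sizeof buf, IBV_ACCESS_LOCAL_WRITE); /* memory region */
    if (!mr)
        return -1;

    memset(&conn, 0, sizeof conn);
    if (rdma_connect(id, &conn))                             /* connect to server */
        return -1;
    *out_id = id;
    return 0;
}
```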
19 Creating Scatter Gather Elements
20 Protection Domains and Memory Regions
21 Gather during ibv_post_send()
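Slides 19-21 are presented as diagrams. As a minimal sketch of the same ideas (not taken from the slides), a buffer registered within a protection domain yields the lkey that a scatter-gather element carries; pd, buf and len are illustrative parameters.

```c
/* Hedged sketch: register a buffer in a protection domain and describe it
 * with one scatter-gather element used for the gather on the send side. */
#include <stdint.h>
#include <infiniband/verbs.h>

static int make_sge(struct ibv_pd *pd, void *buf, size_t len,
                    struct ibv_mr **mr, struct ibv_sge *sge)
{
    /* Registration pins the memory and yields the local key (lkey). */
    *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!*mr)
        return -1;

    sge->addr   = (uintptr_t)buf;   /* where the gather starts         */
    sge->length = (uint32_t)len;    /* how many bytes to gather        */
    sge->lkey   = (*mr)->lkey;      /* local key from the registration */
    return 0;
}
```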
22 Send Work Request (SWR)
Purpose: tell the network adaptor what data to send
Data structure: struct ibv_send_wr
Fields visible to the programmer:
- next - pointer to next SWR in linked list
- wr_id - user-defined identification of this SWR
- sg_list - array of scatter-gather elements (SGE)
- opcode - IBV_WR_SEND
- num_sge - number of elements in sg_list array
- send_flags - IBV_SEND_SIGNALED
The programmer must fill in these fields before calling ibv_post_send().
23 Posting to send data
Verb: ibv_post_send()
Parameters:
- Queue Pair (QP)
- Pointer to linked list of Send Work Requests (SWR)
- Pointer to bad SWR in list, in case of error
Return value:
- == 0: all SWRs successfully added to send queue (SQ)
- != 0: error code
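Putting slides 22 and 23 together, the sketch below fills in the SWR fields listed above and posts the request; qp and sge are assumed to come from the earlier sketches, and the code is an illustration rather than material from the course.

```c
/* Hedged sketch: build one send work request over a single SGE, then post it. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

static int post_one_send(struct ibv_qp *qp, struct ibv_sge *sge, uint64_t id)
{
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&wr, 0, sizeof wr);
    wr.next       = NULL;               /* single SWR, no chaining       */
    wr.wr_id      = id;                 /* user-defined identification   */
    wr.sg_list    = sge;                /* array of scatter-gather elems */
    wr.num_sge    = 1;                  /* one element in sg_list        */
    wr.opcode     = IBV_WR_SEND;        /* channel-semantics send        */
    wr.send_flags = IBV_SEND_SIGNALED;  /* request a work completion     */

    /* Returns 0 when every SWR was added to the send queue (SQ);
     * otherwise an error code, with bad_wr pointing at the failed SWR. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```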
24 Bottom-up client break-down phase
- rdma_disconnect() - destroy connection to remote server
- ibv_dereg_mr() - destroy struct ibv_mr memory region
- rdma_destroy_qp() - destroy struct ibv_qp queue pair
- ibv_destroy_cq() - destroy struct ibv_cq completion queue
- ibv_dealloc_pd() - deallocate struct ibv_pd protection domain
- rdma_destroy_id() - destroy struct rdma_cm_id identifier
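A matching hedged sketch of the break-down phase, mirroring the setup sketch above (id, mr, cq and pd are the objects created earlier; return values are ignored for brevity):

```c
/* Hedged sketch: release the resources in the order listed on the slide. */
#include <rdma/rdma_cma.h>

static void client_teardown(struct rdma_cm_id *id, struct ibv_mr *mr,
                            struct ibv_cq *cq, struct ibv_pd *pd)
{
    rdma_disconnect(id);     /* tear down the connection to the server */
    ibv_dereg_mr(mr);        /* release the memory region              */
    rdma_destroy_qp(id);     /* destroy the queue pair                 */
    ibv_destroy_cq(cq);      /* destroy the completion queue           */
    ibv_dealloc_pd(pd);      /* deallocate the protection domain       */
    rdma_destroy_id(id);     /* destroy the rdma_cm_id identifier      */
}
```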
25 Multicast concept
26 Multicast
- Optional to implement in IB CAs and switches
- Uses Unreliable Datagram (UD) mode
- Only Send/Recv operations allowed
- Both sides must actively participate in data transfers
- Receiver must have a RECV posted for the next SEND
- Receiver must process each RECV completion
- Only possible with IB, not iWARP
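As a hedged illustration of these rules (not from the slides), a receiver might join the group through the RDMA Connection Manager and keep a RECV posted for the next incoming SEND; id, qp and sge are placeholders carried over from the earlier sketches, and for UD traffic the receive buffer described by sge is assumed to reserve its first 40 bytes for the Global Routing Header.

```c
/* Hedged sketch: join a UD multicast group and keep a receive posted. */
#include <string.h>
#include <rdma/rdma_cma.h>

static int join_and_post_recv(struct rdma_cm_id *id, struct sockaddr *mcast,
                              struct ibv_qp *qp, struct ibv_sge *sge)
{
    struct ibv_recv_wr wr, *bad_wr = NULL;

    /* Join the multicast group via the RDMA Connection Manager. */
    if (rdma_join_multicast(id, mcast, NULL))
        return -1;

    /* The receiver must always have a RECV posted for the next SEND. */
    memset(&wr, 0, sizeof wr);
    wr.wr_id   = 1;
    wr.sg_list = sge;
    wr.num_sge = 1;
    return ibv_post_recv(qp, &wr, &bad_wr);
}
```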
27 Programming Course - The Big Picture
28 Future OFA Software Training Courses
System Administration
- System configuration
- Cluster optimization
Advanced Programming topics
- Kernel-level programming
ULP Training
- MPI
- RDS
- SRP
29 OFA Programming Course Availability
Available quarterly at the University of New Hampshire Interoperability Lab (UNH-IOL)
- January 18-19, 2012: OFA Programming Course
- January 20th, 2012: free ski trip to Loon Mountain
- March: OFA Programming Course
- Registration:
The course can be presented at your company location
- Additional fees apply for travel and equipment required to support the training materials and exercises
- Minimum of 8 attendees required
- Europe and Asia supported in addition to the USA
New for 2012, the course will be made available via webinar
- This is a 4-day course made available for developers in Asia and other countries that would otherwise require extensive travel
- Minimum of 8 attendees required
For more information contact: [email protected]
