Evaluation of 40 Gigabit Ethernet technology for data servers
1 Evaluation of 40 Gigabit Ethernet technology for data servers. Azher Mughal, Artur Barczyk, Caltech / USLHCNet. CHEP-2012, New York.
2 Agenda
- Motivation behind 40GE in data servers
- Network and servers design
- Designing a fast data transfer kit
- PCIe Gen3 server performance
- 40G network testing
- WAN transfers
- Disk-to-disk transfers
- Questions?
3 The Motivation
- The LHC experiments, with their distributed computing models and global program of LHC physics, have a renewed focus on networks, and correspondingly a renewed emphasis on network capacity and reliability.
- Networks have seen exponential growth in capacity: 10X growth in usage every 47 months in ESnet over 18 years, and about 6M times capacity growth over 25 years across the Atlantic (LEP3Net in 1985 to USLHCNet today).
- The LHC experiments (CMS / ATLAS) are generating large data sets which need to be transferred efficiently to end sites anywhere in the world.
- A sustained ability to use ever-larger continental and transoceanic networks effectively: high-throughput transfers.
- HEP as a driver of R&E and mission-oriented networks, testing the latest innovations in both software and hardware.
(Harvey Newman, Caltech)
4 27.3 PetaBytes transferred over 6 months; average transfer rate = 14 Gbps (Dec 2011 to May 2012).
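As a quick sanity check, the quoted average follows directly from the volume and the interval; a rough calculation assuming decimal petabytes and a six-month span of about 182 days:

    # 27.3 PB moved in ~182 days implies an average of ~14 Gbps
    awk 'BEGIN {
      bytes = 27.3e15            # 27.3 PB (decimal)
      secs  = 182 * 86400        # ~6 months in seconds
      printf "average rate = %.1f Gbps\n", bytes * 8 / secs / 1e9
    }'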
5 Target Features in a 40GE Server
- Has at least one 40GE port connecting to the LAN/WAN
- Able to read from disks at near 40 Gbps (~4.9 GB/sec; see the conversion below)
- Able to write to disks at near 40 Gbps (~4.9 GB/sec)
- Line-rate network throughput with minimum CPU utilization (therefore more headroom for applications)
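The ~4.9 GB/sec disk targets are simply the achievable TCP payload rate (about 39.6 Gbps, the line rate quoted later in the talk) expressed in bytes; a one-line check:

    # ~39.6 Gbps of payload divided by 8 bits per byte
    awk 'BEGIN { printf "disk I/O target = %.2f GB/sec\n", 39.6 / 8 }'   # -> 4.95 GB/sec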
6 History of 40GE NICs
Mellanox is the only vendor offering 40GE NICs (since 2010). Three main variants:
- 40GE Gen2 NIC (ConnectX-2): PCIe Gen 2.0 x8 interface, 32 Gbps FD; 8b/10b line encoding, 20% overhead, 25.6 Gbps FD effective
- 40GE Gen3 NIC: PCIe Gen 3.0 x8 interface, ~1 GB/s per lane or 64 Gbps FD; more efficient 128b/130b encoding
- 40/56 Gbps Ethernet/VPI NIC: faster clock rate, can go up to 56 Gbps using IB FDR mode
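Following the slide's own reckoning, the encoding change is what moves an x8 slot from below 40GE line rate to comfortably above it; a rough calculation using the nominal figures above, not measured numbers:

    # PCIe Gen2 x8: 32 Gbps nominal, 8b/10b encoding costs 20%
    # PCIe Gen3 x8: 64 Gbps nominal, 128b/130b encoding costs ~1.5%
    awk 'BEGIN {
      gen2 = 32 * (8 / 10)         # -> 25.6 Gbps, below 40GE line rate
      gen3 = 64 * (128 / 130)      # -> ~63 Gbps, enough headroom for a 40GE port
      printf "Gen2 x8 usable: %.1f Gbps\nGen3 x8 usable: %.1f Gbps\n", gen2, gen3
    }'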
7 40GE Server Design Kit
Sandy Bridge E5 based servers (SuperMicro X9DRi-F or Dell R720):
- Intel E5 processors with C1 or C2 stepping
- 128 GB of DDR3 1600 MHz RAM
- Mellanox ConnectX-3 VPI PCIe Gen3 NIC
- Dell / Mellanox QSFP active fiber cables
- LSI 8-port SATA 6G RAID controller
- OCZ Vertex 3 SSDs, 6 Gb/s (preferably enterprise disks such as the Deneva 2)
- Dell Force10 Z9000 40GE switch
Server cost = ~ $15k
8 System Layout
[Block diagram: FDT transfer application on a dual-socket system; two Sandy Bridge CPUs (8 cores each, four DDR3 channels per socket) linked by QPI; Mellanox NIC (MSI interrupts: 4 / 8 / 16) on a PCIe Gen3 x8 slot and LSI RAID controllers on PCIe Gen2/Gen3 x8 slots; DMI2 and a PCIe Gen2 x4 link to the chipset.]
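Because throughput depends on the NIC actually training at Gen3 x8 and on keeping work on the socket that owns the slot, the layout is worth verifying before any tuning. A minimal check, assuming the Mellanox interface is named eth2 as on the later slides (run as root so lspci can read the PCIe capability block):

    # Confirm the NIC negotiated PCIe Gen3 (8 GT/s) at x8 width
    lspci -vv -s $(ethtool -i eth2 | awk '/bus-info/ {print $2}') | grep LnkSta

    # Which NUMA node owns the NIC's PCIe slot (-1 means the kernel does not know)
    cat /sys/class/net/eth2/device/numa_node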
9 Hardware Setting/Tuning
SuperMicro motherboard X9DRi-F:
- The PCIe slot needs to be manually set to Gen3; otherwise it defaults to Gen2
- Disable Hyper-Threading
- Change the PCIe payload size to the maximum (for Mellanox NICs)
Mellanox ConnectX-3 VPI:
- Use the latest firmware and drivers
- Use QSFP active fiber cables
Dell Force10 Z9000 switch:
- Flow control needs to be turned on for server-facing ports
- Single-queue compared to the 4-queue model
- MTU =
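On the host side, the negotiated payload size and the pause-frame (flow control) state can be checked after these changes; a sketch again assuming the interface is eth2 (the matching switch-side commands are Force10-specific and not shown here):

    # MaxPayload as negotiated for the Mellanox device
    lspci -vv -s $(ethtool -i eth2 | awk '/bus-info/ {print $2}') | grep -E 'MaxPayload|MaxReadReq'

    # Current Ethernet pause (flow control) settings on the NIC
    ethtool -a eth2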
10 Software and Tuning
- Scientific Linux 6.2 distribution, default kernel
- Fast Data Transfer (FDT) utility for moving data among the sites: writing to the RAID-0 (SSD disk pool); /dev/zero to /dev/null memory tests
- Kernel SMP affinity: bind the Mellanox NIC driver queues to the processor cores of the socket that the NIC's PCIe lanes are connected to; move the LSI driver IRQ to the second processor
- Use numactl to bind the FDT application to the second processor
- Change kernel TCP/IP parameters as recommended by Mellanox
11 System Tuning Details
/etc/sysctl.conf (added during Mellanox driver installation):
    ## MLXNET tuning parameters ##
    net.ipv4.tcp_timestamps = 0
    net.ipv4.tcp_sack = 0
    net.ipv4.tcp_low_latency = 1
    net.core.netdev_max_backlog =
    net.core.rmem_max =
    net.core.wmem_max =
    net.core.rmem_default =
    net.core.wmem_default =
    net.core.optmem_max =
    net.ipv4.tcp_rmem =
    net.ipv4.tcp_wmem =
    ## END MLXNET ##
Ethernet interface:
    ifconfig eth2 mtu 9000
    ethtool -G eth2 rx 8192
numactl (with local-node memory binding):
    numactl --physcpubind=1,2 --localalloc /usr/java/latest/bin/java -jar /root/fdt.jar &
SMP affinity (Mellanox NIC):
    set_irq_affinity_bynode.sh 1 eth2
SMP affinity (LSI RAID controller):
    echo 20 > /proc/irq/73/smp_affinity
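The interface and affinity settings above can be applied together at boot; a minimal sketch reusing the values from the slide (eth2, IRQ 73 and the 0x20 CPU mask are specific to that server and would need to be adapted):

    #!/bin/bash
    # Apply the per-interface and affinity tuning from the slide.
    IF=eth2

    ip link set "$IF" mtu 9000          # jumbo frames (same effect as ifconfig ... mtu 9000)
    ethtool -G "$IF" rx 8192            # enlarge the receive ring

    # Pin the Mellanox driver queues to the NUMA node that owns the NIC
    # (helper script shipped with the Mellanox driver package)
    set_irq_affinity_bynode.sh 1 "$IF"

    # Move the LSI RAID controller interrupt onto the second socket
    echo 20 > /proc/irq/73/smp_affinity

    # Start FDT pinned to cores on the second socket with local memory allocation
    numactl --physcpubind=1,2 --localalloc \
        /usr/java/latest/bin/java -jar /root/fdt.jar &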
12 SuperComputing 2011 collaborators. Caltech Booth 1223 (image courtesy of Ciena).
13 SC11 - WAN design for 100G.
14 SC11 - PCIe Gen3 performance: 36.8 Gbps with 4 FDT TCP flows in a memory-to-memory test [throughput graph peaking near 37 Gbps]. The Dell R720xd showed results similar to the SuperMicro.
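The memory-to-memory figures come from FDT streaming /dev/zero into /dev/null across the wire, as listed on the software slide; a sketch of such a test with four parallel streams, assuming FDT's commonly documented -c, -P and -d options:

    # Receiver: FDT in server mode, pinned as in the tuning slide
    numactl --physcpubind=1,2 --localalloc java -jar /root/fdt.jar &

    # Sender: 4 parallel TCP streams, /dev/zero -> remote /dev/null (no disk involved)
    java -jar /root/fdt.jar -c <receiver-host> -P 4 -d /dev/null /dev/zero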
15 SC11 - Servers testing, reaching 100G [traffic graph, in/out in Gbps]. Sustained 186 Gbps; enough to transfer 100,000 Blu-ray discs per day.
16 SC11 - Disk-to-disk results; peaks of 60 Gbps [chart annotations: 60 Gbps, 10 Gbps, 12 Gbps]. Disk write on 7 SuperMicro and Dell servers with a mix of 40GE and 10GE servers.
17 40GE data transfer demo between Amsterdam and Geneva, in collaboration with SURFnet, CERN, Ciena, and Mellanox. Distance = 1650 km, RTT = 16 ms.
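A 16 ms round trip at this speed leaves roughly 80 MB of data in flight per direction, which is why the large TCP window and buffer settings from the tuning slide matter on this path; a rough bandwidth-delay product calculation assuming the ~39.6 Gbps payload rate quoted on the next slide:

    # Bandwidth-delay product for the Amsterdam-Geneva path
    awk 'BEGIN {
      rate_bps = 39.6e9          # achievable payload rate
      rtt_s    = 0.016           # 16 ms round trip
      printf "BDP = %.1f MB per direction\n", rate_bps * rtt_s / 8 / 1e6
    }'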
18 40GE line rate at 39.6 Gbps [throughput graph at ~40 Gbps]. Write ~ 17.5 Gbps. CPU core utilization: 25% each for 4 cores.
19 Key Challenges Encountered
- SuperMicro servers, the Mellanox ConnectX-3 NIC and its drivers were all at the BETA stage.
- First-hand experience with PCIe Gen3 servers using sample E5 Sandy Bridge processors; not many vendors were available for testing.
- What do we know about the BIOS settings for Gen3 (slots, processor performance mode)?
- The Mellanox NIC randomly throwing interface errors.
- QSFP passive copper cables have issues at line rate (39.6 Gbps); use fiber cables.
- LSI drivers are single-threaded, pushing a single core to its maximum.
- Will FDT be able to get close to the line rate of the Mellanox network cards, 39.6 Gbps (theoretical peak)?
- End-to-end 100G and 40G testing: any transport issues?
20 Future Directions
- Investigate bottlenecks in the LSI RAID card driver; a new driver supporting MSI-X vectors (many configurable queues) is available.
- Optimize the Linux kernel: SSD tuning as compared to mechanical disks, kernel timers, other unknowns.
- Investigate performance of PCIe-based SSD drives from vendors like Intel, FusionIO and OCZ.
- Find ways to lower CPU utilization; investigate RoCE.
- Understand/overcome SSD wear-out problems over time.
21 Summary
- The 40/100 Gbps network technology has shown the potential to transfer peta-scale physics datasets around the world in a matter of hours.
- Three highly tuned servers can easily reach the 100GE line rate, effectively utilizing PCIe Gen3 technology.
- Individual server tests using E5 processors and PCIe Gen3 based network cards have shown stable network performance, reaching line rate at 39.6 Gbps.
- During SC11, the Fast Data Transfer (FDT) application achieved an aggregate disk write of 60 Gbps.
- The MonALISA intelligent monitoring software effectively recorded and displayed the traffic on the 40/100G and the other 10GE links.
22 Questions?