Latency Considerations for 10GBase-T PHYs




Latency Considerations for 10GBase-T PHYs
Shimon Muller, Sun Microsystems, Inc.
March 16, 2004, Orlando, FL

Outline
- Introduction
- Issues and non-issues
- PHY latency in the big picture
- Observations
- Summary and recommendations

Why is PHY Latency an Issue for 10GBase-T?
- In the past, PHY latency for Ethernet was driven by the bit-budget requirements of CSMA/CD (see the sketch after this list)
  - Determines the physical span of the Ethernet network
  - Latency requirements are very tight
- 10-Gigabit Ethernet is the first standard that does not support CSMA/CD
  - Latency requirements can be substantially relaxed
  - Allows for useful implementation trade-offs
- Caution needs to be exercised when selecting the maximum allowable latency for the PHY
  - Some network applications that run over Ethernet are latency-sensitive
  - They may suffer performance degradation if the PHY latency becomes significant
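To make the bit-budget point concrete, here is a minimal sketch of how the CSMA/CD slot time bounds the network span. The slot-time bit counts are the usual Fast and Gigabit Ethernet values; the propagation speed and the fixed round-trip PHY/MAC/repeater allowance are illustrative assumptions, not 802.3 clause figures.

# Why CSMA/CD kept PHY latency tight: the slot time must cover the worst-case
# round trip, so every nanosecond of PHY or repeater delay shrinks the span.
# Illustrative assumptions, not 802.3 clause values.

SLOT_BITS   = {100e6: 512, 1e9: 4096}  # slot time in bit times (Fast / Gigabit Ethernet)
NS_PER_M    = 5                        # ~5 ns/m signal propagation (assumption)
FIXED_RT_NS = 2_000                    # assumed round-trip PHY + MAC + repeater allowance

for rate_bps, slot_bits in SLOT_BITS.items():
    slot_ns  = slot_bits / rate_bps * 1e9    # slot time in nanoseconds
    cable_ns = slot_ns - FIXED_RT_NS         # round-trip budget left for the cable
    span_m   = cable_ns / (2 * NS_PER_M)     # one-way network span
    print(f"{rate_bps/1e6:>5.0f} Mb/s: slot {slot_ns:>5.0f} ns, max span ~{span_m:.0f} m")

With CSMA/CD gone in 10-Gigabit Ethernet, this constraint disappears, which is exactly why the PHY latency requirement can be relaxed.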

When is Latency Not a Problem?
- Support for Pause flow control
  - Rarely used; not a very popular (or useful) protocol
  - At 10 Gb/s speeds the flow-control buffers are already large (see the sizing sketch after this list)
  - If implemented, they are probably already off-chip
- Network applications that mostly use bulk data transfers
  - Backups, file serving, etc.
- Network applications (bulk data or transactional) that use many low-throughput connections
  - Web servers, some databases, etc.
- Pipelining and parallelism hide the latency for the above applications
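A back-of-the-envelope sizing shows why those PAUSE buffers grow with link speed and length. A minimal sketch follows; the ~5 ns/m propagation figure matches the 0.5 us per 100 m hop quoted later in these slides, while the PHY+MAC allowance and the per-end frame overhead are assumptions.

# Rough PAUSE buffer sizing: after a receiver sends a PAUSE frame it must still
# absorb everything already in flight on the link.

LINK_BPS      = 10e9     # 10 Gb/s
NS_PER_M      = 5        # ~5 ns/m propagation (matches 0.5 us per 100 m hop)
MAX_FRAME_B   = 1518     # a maximum-size frame may be serializing at each end
PHY_MAC_RT_NS = 2_000    # assumed round-trip PHY + MAC + PAUSE response allowance

def pause_buffer_bytes(cable_m: int) -> float:
    round_trip_ns = 2 * cable_m * NS_PER_M + PHY_MAC_RT_NS
    in_flight_b   = LINK_BPS * round_trip_ns * 1e-9 / 8
    return in_flight_b + 2 * MAX_FRAME_B

for length_m in (100, 10_000):   # short data-center run vs. a long optical span
    print(f"{length_m:>6} m link -> ~{pause_buffer_bytes(length_m) / 1024:.0f} KiB")

Short data-center runs need only a few KiB, while a 10 km span already requires over 100 KiB, consistent with the slide's point that, when implemented, the buffering tends to sit off-chip.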

When is Latency a Problem?
- Applications with a significant transactional network traffic profile
  - Message-based and/or request-response traffic patterns
  - Clustering, HPCC, OLTP, etc.
- High-throughput connections where bulk data transfers are typically preceded by message exchanges
  - Most databases (Oracle), etc.
- For the above applications, latency directly affects performance
  - Relatively few connections, which do not lend themselves well to pipelining

Additional Latency Requirements
- Ethernet has never been considered a low-latency interconnect
  - Mostly due to overheads incurred above the Ethernet sublayer
- However, physical layers tend to be leveraged across interconnect technologies
  - Fibre Channel, InfiniBand, PCI Express, etc.
- A low-latency PHY will have a broader market potential

Networked Systems' Latency Components
- Protocol stack and OS
  - In the low 10s of microseconds in each direction; end-to-end: ~2x
  - Will continue to come down in the future (used to be in the 100s of microseconds)
    - Processors are getting faster
    - More efficient network traffic processing in the OS
    - Hardware hooks to speed up packet processing
- Server memory and I/O subsystem
  - Up to several microseconds per packet (multiple accesses)
  - NUMA effects, I/O bridge latencies, etc.; end-to-end: 2x
  - Will get much better in the future
    - Modern systems are already capable of minimizing this latency
    - New NIC architectures will be able to hide most of it

Networked Systems' Latency Components (continued)
- NICs and switches
  - Up to 1.2 microseconds per hardware component; most implementations use store-and-forward
  - End-to-end: 3x
  - For latency-sensitive applications, cut-through is an option for both NICs and switches
- Cable delay
  - Up to 0.5 microseconds per hop; end-to-end: 2x
  - The vast majority of links in future data centers will be shorter than 100 m (blade and rack systems)
- These components are summed in the budget sketch below
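Summing the per-component figures above gives a sense of how much of the end-to-end budget a PHY would consume. A minimal sketch follows, assuming two 100 m hops through one store-and-forward switch, roughly 3 us per memory/I/O traversal, and one tx+rx PHY pair per hop; the 3 us figure and the per-hop PHY count are assumptions, the remaining numbers come from these slides.

# End-to-end latency budget built from the per-component numbers on the slides,
# sweeping the PHY latency to show what fraction of the total it represents.

def total_latency_us(phy_pair_us: float) -> float:
    stack  = 2 * 10.0         # protocol stack + OS, ~10 us per end (low 10s of us)
    mem_io = 2 * 3.0          # assumed ~3 us per memory / I/O subsystem traversal
    nic_sw = 3 * 1.2          # 2 NICs + 1 switch, 1.2 us each, store-and-forward
    cable  = 2 * 0.5          # two 100 m hops at 0.5 us each
    phy    = 2 * phy_pair_us  # one tx+rx PHY pair per hop (assumption)
    return stack + mem_io + nic_sw + cable + phy

for phy_us in (0.05, 0.5, 1.0):   # 50 ns, 500 ns and 1 us per PHY pair
    total = total_latency_us(phy_us)
    print(f"PHY {phy_us * 1000:>5.0f} ns -> end-to-end {total:4.1f} us, "
          f"PHY share {2 * phy_us / total:.1%}")

With a conventional TCP/IP stack even a 1 us PHY stays in the single-digit percent range, but as the stack and I/O components shrink, and for the non-Ethernet uses mentioned earlier, the same PHY becomes a proportionally larger slice; that is what the nanosecond-scale targets on the next slide guard against.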

Observations
- Goal: pick a number for PHY latency such that it is proportionally insignificant in the overall system for the foreseeable future
- Latency consideration space:
  - Ideally, the PHY latency should be on the order of 10s of nanoseconds
    - Will accommodate all Ethernet and non-Ethernet applications in the foreseeable future
  - PHY latency on the order of 100s of nanoseconds is acceptable
    - Will accommodate most Ethernet and some non-Ethernet applications
  - PHY latency should not exceed 1 microsecond
    - Beyond that it may start affecting Ethernet-over-TCP/IP application performance in the foreseeable future

Observations (Continued)
- Trade-offs:
  - From a system perspective, only end-to-end latency matters (Rx + Tx)
    - It can be budgeted asymmetrically
  - Given the choice between latency and complexity/power/cost, latency should take precedence
    - Moore's Law will eventually take care of the latter, but not of the former
  - Given the choice between latency and UTP cable length, cable length should take precedence
    - In the short term, support for installed cabling is more important

Summary and Recommendations
- Relaxing the latency requirements for the PHY does not come for free
  - Eventually it may start affecting some application performance
  - It may also reduce market potential
- Evaluate proposals in the context of the observations and trade-offs presented here
- Make the final determination based on the bang-for-the-buck trade-off

That's All, Folks!
shimon.muller@sun.com
March 16, 2004, Orlando, FL