25GE host/cable options

25GE host/cable options: summary of thinking after talking to a number of people
Mark Nowell & Gary Nicholl

From dudek_111914_25ge_adhoc: 6 configurations for Auto-Negotiation to deal with.
Historical note: Auto-Negotiation is used to advertise abilities only, not host information.
In this presentation: mid = 6.81 dB (consistent with 802.3bj), low = mid - 3 dB, high = mid + 3 dB.
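The three host-loss classes are defined relative to the 802.3bj value, so they are simple arithmetic; the sketch below is illustrative only (not from the presentation) and just spells out the resulting numbers.

```python
# Illustrative sketch of the host-loss classes defined on this slide.
# Only the "mid" value (6.81 dB, consistent with 802.3bj) is quoted in the
# presentation; "low" and "high" are defined as offsets from it.
MID_HOST_LOSS_DB = 6.81                      # 802.3bj-consistent host loss
LOW_HOST_LOSS_DB = MID_HOST_LOSS_DB - 3.0    # "low"  = mid - 3 dB -> 3.81 dB
HIGH_HOST_LOSS_DB = MID_HOST_LOSS_DB + 3.0   # "high" = mid + 3 dB -> 9.81 dB

print(LOW_HOST_LOSS_DB, MID_HOST_LOSS_DB, HIGH_HOST_LOSS_DB)  # 3.81 6.81 9.81
```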

But further discussion (on a smaller email thread) resulted in a larger list. Ten combinations start to get really ugly when considering manufacturing testing, compliance testing, usability issues, and economies of scale. What is reasonable to manage?

Back to the original motivations and objectives

5m objective: Define a single-lane 25 Gb/s PHY for operation over links consistent with copper twinaxial cables, with lengths up to at least 5m.
Why: Industry interest in supporting ports/reaches that were backwards compatible with 100GBASE-CR4 and would enable breakout configurations of 4x25GE using the same equipment.

3m objective: Define a single-lane 25 Gb/s PHY for operation over links consistent with copper twinaxial cables, with lengths up to at least 3m.
Why: A lower-power, lower-cost solution, optimized for intra-rack server-to-switch interconnect, with the lowest latency.

More motivations
- Plug and play: a channel that is guaranteed by definition to come up working, between transmitter, receiver, and channel.
- Avoid spec sprawl: identify a minimum-set, straightforward spec that limits implementation complexity and the multiplicity of test modes (ideally one spec for cable vendors to comply with).

This could be achieved with two configurations

Config | Host #1 loss | Cable (m) | Host #2 loss | FEC | Comments
1 | mid | 5m | mid | RS (CL91) | Meets 5m objective & industry goals
2 | mid | 3m | mid | No | Meets 3m objective & industry goals

Only one problem: configuration #2 doesn't initially appear to be technically feasible.
- A 3m cable reduces channel loss, but not enough to support no-FEC operation.
- The 35 dB starting channel budget (802.3bj with 5m) needs to reduce to 25 dB to support no FEC (is this verified?); see the rough arithmetic sketch below.

Questions:
- Are we able to meet MTTFPA without FEC?
- Are we absolutely sure we can't close the link with improvements in 3m cable assemblies or in COM model assumptions?
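As a rough illustration of why configuration #2 looks infeasible, the sketch below works through the budget arithmetic implied by the numbers on this slide (35 dB with RS FEC, roughly 25 dB assumed for no FEC, 6.81 dB of "mid" host loss at each end). The per-metre cable loss is not given here, so the sketch only shows how much loss would be left for the cable assembly in each case.

```python
# Rough budget arithmetic using only the numbers quoted on this slide.
# The ~25 dB no-FEC figure is the slide's own (unverified) assumption.
TOTAL_BUDGET_WITH_RS_FEC_DB = 35.0   # 802.3bj starting channel budget (5m, RS FEC)
TOTAL_BUDGET_NO_FEC_DB = 25.0        # assumed target for no-FEC operation
MID_HOST_LOSS_DB = 6.81              # per-end "mid" host loss

# Loss remaining for the cable assembly after subtracting both host boards.
cable_allowance_with_fec = TOTAL_BUDGET_WITH_RS_FEC_DB - 2 * MID_HOST_LOSS_DB
cable_allowance_no_fec = TOTAL_BUDGET_NO_FEC_DB - 2 * MID_HOST_LOSS_DB

print(f"cable allowance with RS FEC: {cable_allowance_with_fec:.2f} dB")  # ~21.38 dB
print(f"cable allowance with no FEC: {cable_allowance_no_fec:.2f} dB")    # ~11.38 dB
```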

Options to consider

Config | Host #1 loss | Cable (m) | Host #2 loss | FEC | Comments
1 | mid | 5m | mid | RS (CL91) | Meets 5m objective & industry goals
2 | mid | 3m | mid | No | Meets 3m objective & industry goals

We start exploring options:
- Option #1: Add a light FEC with low (but not zero) latency: KR (CL74).
- Option #2: Lower the channel loss spec to claw back dBs.

High-level option #1: Use KR (CL74) FEC

Config | Host #1 loss | Cable (m) | Host #2 loss | FEC | Comments
2a | mid | 3m | mid | KR (CL74) | Meets 3m objective & industry goals

- Interesting (but possibly not yet verified as feasible).
- Buys the necessary margin to allow a 3m cable, with hosts consistent with 802.3bj (aka "mid") loss.
- A KR (CL74) FEC implementation is easily available to anyone who has a 10G design.
- But it still isn't a solution for the ultimate desire of a no-FEC solution.

Question: What is the market impact of the extra 82 ns of latency?
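To put the latency question in concrete terms, the back-of-envelope sketch below (an illustration, not an analysis from the deck) doubles the quoted ~82 ns figure for a request/response that crosses the server-to-TOR link in both directions.

```python
# Back-of-envelope only: the slide quotes ~82 ns of added latency for KR (CL74) FEC.
# A request/response between server and TOR crosses the link once in each
# direction, so the added round-trip cost is roughly twice that figure.
KR_CL74_FEC_LATENCY_NS = 82.0

one_way_penalty_ns = KR_CL74_FEC_LATENCY_NS          # ~82 ns per link traversal
round_trip_penalty_ns = 2 * KR_CL74_FEC_LATENCY_NS   # ~164 ns per request/response

print(one_way_penalty_ns, round_trip_penalty_ns)
```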

High-level option #2: Reduce the overall channel budget further

Config | Host #1 loss | Cable (m) | Host #2 loss | FEC | Comments
2b | mid | 3m | low | No | Lower host loss at one end
2c | mid | <3m | mid | No | Further reduce cable length until no-FEC is achievable

2b) Lower the server-side host loss budget by some number of dB; justified since server boards are typically smaller than switch boards (see the budget sketch below).
- Do servers really have margin to spare? How much?
- Possibly conflicts with the desire to move to lower-cost board materials (with higher loss).
- If we reduce one end, then we could increase the host loss spec at the other end. This quickly creates a broadening set of configurations to address.

2c) Lowering the cable reach to <3m could eradicate the configuration explosion.
- How much lower than 3m would we need to go? Is it a usable length?
- Would this limit the broad market potential for server-to-TOR applications?
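As a hedged illustration of what option 2b buys, the sketch below reuses the slide deck's numbers (the assumed ~25 dB no-FEC budget, 6.81 dB "mid" and 3.81 dB "low" host loss) to show how much extra loss is freed up for the cable when one host end drops from mid to low.

```python
# Illustrative only: effect of option 2b (one "low" host end) on the cable
# allowance, using the assumed ~25 dB no-FEC budget from the earlier slide.
TOTAL_BUDGET_NO_FEC_DB = 25.0
MID_HOST_LOSS_DB = 6.81
LOW_HOST_LOSS_DB = MID_HOST_LOSS_DB - 3.0   # 3.81 dB

cable_mid_mid = TOTAL_BUDGET_NO_FEC_DB - 2 * MID_HOST_LOSS_DB                  # ~11.38 dB
cable_mid_low = TOTAL_BUDGET_NO_FEC_DB - MID_HOST_LOSS_DB - LOW_HOST_LOSS_DB   # ~14.38 dB

print(f"mid/mid hosts: {cable_mid_mid:.2f} dB left for the cable")
print(f"mid/low hosts: {cable_mid_low:.2f} dB left for the cable")  # +3 dB for the cable
```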

Beyond options 2a, 2b, 2c

The first-order options open up possibilities for other options, leading to the 10+ potential configurations discussed earlier. Some potentials and comments:

Option: Increased host loss
Justification:
- Take advantage of low-loss host implementations to fully use the available budget.
- Enable lower-cost materials.
- Potential to reduce the number of retimers in switch designs.
Issues to be mitigated:
- Breaks compatibility with 100GBASE-CR4 compliant products.
- Creates configurations that are not supported but easily implemented.
- How many high host loss variants above 6.81 dB?
- Is there significant impact to cost (savings)?

Increasing configuration options

Legend: Blue = difference from the row above; Red = unsupported option that is a possible user config.

Number of valid configs | Host-Cable-Host-FEC | Comments
2 | Mid-5m-Mid-RS; Mid-3m-Mid-(No/KR) | Meets all goals & objectives (assumes KR FEC is acceptable if no FEC is not feasible)
2 | Mid-5m-Mid-RS; Mid-<3m-Mid-No | Find the reach that supports the goals of low-latency operation and addresses the market need (might not be possible)
3 | Mid-5m-Mid-RS; Mid-3m-Mid-KR; Mid-3m-Low-No | Add a lower host loss variant. All subsets are valid and managed by AN
5 | Mid-5m-Mid-RS; Mid-3m-Mid-KR; Mid-3m-Low-No; High-5m-Low-RS; High-3m-High-RS; High-5m-High-RS; High-5m-Mid-RS | Add a higher host loss variant. Valid configs start to grow; we start to see invalid (but implementable) configs
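The enumeration below is a toy sketch (not from the deck) that makes the configuration-explosion point concrete: it counts the raw host/cable/FEC combinations that auto-negotiation, manufacturing testing, and compliance testing would have to contend with before any link-budget pruning.

```python
from itertools import product

# Toy enumeration of the raw configuration space once multiple host-loss
# classes, cable lengths, and FEC modes are all on the table. Host ends are
# treated as unordered (mid/low is the same link as low/mid). Which of these
# combinations are actually valid has to come from link-budget analysis,
# as in the table above.
host_loss_classes = ["low", "mid", "high"]
cable_lengths = ["<3m", "3m", "5m"]
fec_modes = ["No", "KR (CL74)", "RS (CL91)"]

raw_configs = {
    (tuple(sorted((h1, h2))), cable, fec)
    for h1, h2, cable, fec in product(host_loss_classes, host_loss_classes,
                                      cable_lengths, fec_modes)
}
print(len(raw_configs))  # 6 host pairings * 3 cable lengths * 3 FEC modes = 54
```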

Key takeaways / next steps
- It should be possible to minimize configurations.
- Higher host loss (than 802.3bj) brings a number of issues. Do we need it?
- There is interest in exploring ways to achieve a loss budget that might support no FEC: cable length, cable improvements, a lower host loss variant, COM model and/or parameter optimization.
- More 3m cable channel data contributions?
- A no-FEC MTTFPA analysis is needed.
- User feedback is needed on whether there is an acceptable minimum reach below 3m.