Global Networking Services




Global Foundation Services / Global Networking Services
Objectives to Support Cloud Scale Data Center Design
Brad Booth, Tom Issenhuth
IEEE 802.3 400Gb/s Ethernet Study Group
IEEE 802 November 2013 Plenary, Dallas, TX

Supporters

Andreas Bechtolsheim, Arista
Anshul Sadana, Arista
Chris Bergey, Luxtera
Brian Welch, Luxtera
Radha Nagarajan, Inphi
Sudeep Bhoja, Inphi
Jeff Maki, Juniper
Dave Chalupsky, Intel
Ryan Latchman, Mindspeed
Dan Dove, Dove Networking Solutions
David Warren, HP
Adit Narasimh, Molex
Scott Sommers, Molex
Joe Dambach, Molex
Kapil Shrikhande, Dell
Scott Schube, NeoPhotonics
Winston Way, NeoPhotonics
John Petrilla, Avago Technologies
Nathan Tracy, TE Connectivity
Piers Dawe, Mellanox Technologies
Arlon Martin, Mellanox Technologies

Cloud Scale Data Centers

There is no single design or size for a cloud data center:
- Topologies continue to evolve with technology advancements and cost structures
- Differences are driven by generation of design, location and scale
- While the overall traffic flow within different data centers is similar, the design differences drive different link requirements

Data center development/growth:
- Three phases: design, build-out and operational (often running simultaneously)
- Three-year refresh cycle; a new colo* may come online as an old one is being refreshed
- Infrastructure should last at least 4-6 generations of refresh (roughly 12-18 years at a three-year cycle)
- New data centers and colos are being added to meet growing demand

* Colo = colocation (8 MW deployment area)

Optical Ethernet Historical: Data Center*

Inside Data Center:
- 1 Gigabit Ethernet: 500 m MMF (SX), STD 550 m
- 10 Gigabit Ethernet: 300 m MMF (SR), OM4
- 40 Gigabit Ethernet: 150 m MMF (SR4)
- 100 Gigabit Ethernet: 150 m MMF (SR10)
- 100 Gigabit Ethernet: 100 m MMF (SR4)
- 400 Gigabit Ethernet: ?? m SMF*

Outside Data Center:
- 1 Gigabit Ethernet: 3 km SMF
- 10 Gigabit Ethernet: 2 km & 10 km SMF
- 40 Gigabit Ethernet: 10 km SMF
- 100 Gigabit Ethernet: 10 km SMF
- 400 Gigabit Ethernet: ?? km SMF*

Inside the Data Center is ≤ 500 m
* Objectives

Optical Ethernet Historical: Campus*

Inside Campus:
- 1 Gigabit Ethernet: 3 km SMF (LX), STD 5 km
- 10 Gigabit Ethernet: 2 & 10 km SMF (LR)
- 40 Gigabit Ethernet: 10 km SMF (LR4)
- 100 Gigabit Ethernet: 10 km SMF (LR4)
- 400 Gigabit Ethernet: ?? km SMF*

Outside Campus:
- 10 Gigabit Ethernet: 40 km SMF (ER)
- 40 Gigabit Ethernet: 40 km SMF (ER4)
- 100 Gigabit Ethernet: 40 km SMF (ER4)
- 400 Gigabit Ethernet: ?? km SMF*

* Objectives
Inside the Campus is ≤ 2 km

Why "Outside" Is Not Used Inside

A lot of bucks: the long-reach optics defined for outside links are far too expensive for the high-volume links inside the data center.

Examples of Missing the Market

- 40GBASE-SR4 is a good solution for a row, but its 150 m reach doesn't cross the data center; the industry created a 300 m version based off of 10GBASE-SR
- 100GBASE-SR10 is not cost effective (number of fibers required)
- 100GBASE-SR4 reach doesn't cross the data center, and 802.3bm was unable to adopt an SMF solution:
  - a 100GBASE-LR4 "Lite" solution was developed for 2 km
  - new 1300 nm optimized MMF for data center applications

The Study Group has an opportunity to develop a standard to meet the market need.

Cloud Data Center Campus Interconnections

[Diagram: campus interconnection topology. Within each colo, servers connect to TORs (3 m), TORs to LEAFs (20 m), and LEAFs to spines, with 500 m and 1,000 m links up to the colo's central DCR. The central DCRs of colos 1-4 interconnect with one another and reach the Metro/Core (DWDM) over 10-80 km and > 100 km links. The campus fiber links are highlighted as the target market for 400G.]

- Infrastructure designed to use a single data rate (X); server links are a subset of X
- Colo = colocation

Interconnection Volume

Four sections per colo and multiple colos (≥ 4) per data center. Volumes below are per section (except DCR to Metro).

A End    | Z End | Volume    | Reach (max) | Medium      | Cost Sensitivity
Server ‡ | TOR   | 10k-100k  | 3 m         | Copper      | Extreme
TOR      | LEAF  | 1k-10k    | 20 m        | Fiber (AOC) | High
LEAF     | SPINE | 1k-10k    | 400 m       | SMF         | High
SPINE    | DCR   | 100-1,000 | 1,000 m     | SMF         | Medium
DCR      | Metro | 100-300   | 10-80 km    | SMF         | Low

The slide also carries grouping labels: "Market Space" (against the LEAF-SPINE row), "Campus" (SPINE-DCR), "WAN" (DCR-Metro) and "LAN" (the shorter-reach rows).

‡ Server-TOR links may be served by breakout cables
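As a rough illustration of how these per-section volume bands scale to a whole data center, the sketch below multiplies them by the section and colo counts quoted on the slide (4 sections per colo, and the minimum of 4 colos). The arithmetic and the code are mine, not part of the presentation.

# Illustrative scaling of the per-section link volumes in the table above
# to a whole data center: 4 sections per colo, 4 colos per data center
# (the minimum colo count quoted on the slide).

SECTIONS_PER_COLO = 4
COLOS_PER_DC = 4  # slide says "multiple colos (>= 4)"

# Per-section volume bands (low, high) taken from the table.
PER_SECTION = {
    "Server-TOR": (10_000, 100_000),
    "TOR-LEAF": (1_000, 10_000),
    "LEAF-SPINE": (1_000, 10_000),
    "SPINE-DCR": (100, 1_000),
}

scale = SECTIONS_PER_COLO * COLOS_PER_DC  # 16 sections per data center
for link, (low, high) in PER_SECTION.items():
    print(f"{link}: {low * scale:,} to {high * scale:,} links per data center")

# DCR-Metro volumes (100-300) are already per data center, not per section.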

Technology Adoption

40G:
- Growing for server connectivity
- Strong in TOR to DC router links

100G:
- Starting in TOR to DC router links (non-standard)
- Missing components: low-cost 300-400 m optics, switch silicon

400G:
- Targeting TOR to DC router links
- 40G servers will increase the need to reduce over-subscription (see the sketch below)
- Needs to supply the components whose absence slowed 100G adoption
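To make the over-subscription point concrete, here is a small sketch using a hypothetical TOR configuration; the port counts and speeds (48 servers, 6 uplinks) are my own example assumptions, not figures from the presentation.

# Hypothetical TOR over-subscription arithmetic; port counts and speeds
# below are example assumptions, not figures from the presentation.

def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth at the TOR."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# 48 servers at 40 Gb/s behind 6 x 100 Gb/s uplinks -> 3.2:1
print(f"6 x 100G uplinks: {oversubscription(48, 40, 6, 100):.1f}:1")
# The same servers behind 6 x 400 Gb/s uplinks -> 0.8:1 (no over-subscription)
print(f"6 x 400G uplinks: {oversubscription(48, 40, 6, 400):.1f}:1")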

Technology Timing Considerations

[Chart: relative cost (0-7) versus year (1-10) comparing Technology A and Technology B against the point in time at which the cloud data center needs the capability. Annotation: early high-volume (cloud) adopters are not willing to pay a premium for Technology A. Timeline markers: 400G standard complete, 400G amendments, next-generation data rate, data center refresh.]

Cloud Data Center Reach Considerations

LAN links (≤ 500 m):
- Very cost sensitive due to the high volume of links in use
- Typically assume a 3-4 dB loss budget

Campus links (≤ 2 km):
- Decreased cost sensitivity due to lower volume and technical trade-offs
- Loss budget typically in the 4-6 dB range

Metro/Core is typically DWDM (outside of IEEE 802.3 scope)

Links ≤ 20 m:
- An MMF module is a possibility, but it needs to be cost competitive with AOCs
- Copper is still being used intra-rack; breakout is of interest
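As a sanity check of those loss budgets, the sketch below adds up the passive channel loss for a 500 m and a 2 km SMF link. The component values (fiber attenuation, connector and splice loss) and the connector counts are typical figures I am assuming for illustration; the presentation itself only quotes the 3-4 dB and 4-6 dB budgets.

# Rough channel-loss check against the budgets quoted above. The component
# loss values here are assumed, typical figures, not presentation numbers.

FIBER_LOSS_DB_PER_KM = 0.4   # SMF around 1310 nm, assumed
CONNECTOR_LOSS_DB = 0.5      # per mated connector pair, assumed
SPLICE_LOSS_DB = 0.1         # per fusion splice, assumed

def link_loss(length_km, connectors=2, splices=0):
    """Total passive channel loss in dB for a simple point-to-point link."""
    return (length_km * FIBER_LOSS_DB_PER_KM
            + connectors * CONNECTOR_LOSS_DB
            + splices * SPLICE_LOSS_DB)

# 500 m LAN link with 4 connector pairs (patch panels at both ends) -> 2.2 dB
print(f"500 m LAN link:   {link_loss(0.5, connectors=4):.1f} dB (budget 3-4 dB)")
# 2 km campus link with 4 connector pairs and 2 splices -> 3.0 dB
print(f"2 km campus link: {link_loss(2.0, connectors=4, splices=2):.1f} dB (budget 4-6 dB)")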

Recommendation

Adopt objectives to support the high-volume cloud data center reach requirement.ⱡ

Provide physical layer specifications which support 400 Gb/s operation over:
- At least 500 m of single-mode fiber
- At least 2 km of single-mode fiber

Electrical interface specification:
- The objective should enable AOC implementations
- Sufficient for direct-attach copper (DAC) implementations?

ⱡ The Task Force may decide that a single PMD can satisfy both objectives.
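As a quick cross-check of these two reach objectives against the link tiers in the interconnection-volume table earlier in the deck, here is a minimal sketch. The reach figures come from that table; the categorisation code itself is my own illustration, not part of the presentation.

# Check which link tiers from the interconnection-volume table would be
# covered by the recommended 500 m and 2 km SMF reach objectives.

TIERS = [
    # (A end, Z end, max reach in metres)
    ("Server", "TOR", 3),        # copper / breakout cables in practice
    ("TOR", "LEAF", 20),         # AOC in practice
    ("LEAF", "SPINE", 400),
    ("SPINE", "DCR", 1_000),
    ("DCR", "Metro", 80_000),
]

for a_end, z_end, reach_m in TIERS:
    if reach_m <= 500:
        verdict = "within the 500 m SMF objective"
    elif reach_m <= 2_000:
        verdict = "within the 2 km SMF objective"
    else:
        verdict = "metro/core territory (DWDM, outside IEEE 802.3 scope)"
    print(f"{a_end}-{z_end} ({reach_m:,} m): {verdict}")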

Thank you!