Optics Technology Trends in Data Centers



Marc A. Taubenblatt, IBM
OFC Market Watch 2014

Outline/Summary
- Data movement and aggregate bandwidth are increasing and increasingly important.
- Historically, per-channel data rate increases have underpinned gains in optical interconnect cost and power efficiency.
- It is getting harder to keep doing that.
- Proliferation of alternate technology choices.
- More customization (e.g. technology choice, EOE optimization, packaging).
- Market fragmentation, fight for volumes (and hence cost).
- Make the most of limited PHY resources higher up the stack (i.e. SDN) or with new networks (OCS?).

Data Movement Becoming Critical
- Compute density is increasing (multi-core, VMs), so bandwidth needs increase.
- East-west traffic is increasing* (distributed workloads, virtualization, big data, random workloads, e.g. graph analytics).
- 44% CAGR in data center traffic, 76% of it staying within the data center (Cisco 2012*).
- 80%+ of Google traffic is now internal facing (B. Koley, Google, OI Conf. 2012).
- Every 1 kb of external traffic entering the data center generates 930 kb of internal traffic (N. Farrington, Facebook, OI Conf. 2013).
- Flatter topologies for lower latency (higher radix, longer links).
- Resiliency (redundancy, path diversity, hot backup).
- Caveat: not all data centers have the same needs (e.g. mega-center vs. enterprise).
[Figure: interconnect volumes vs. network hierarchy (access, TOR/MOR, aggregation): volumes decrease up the hierarchy as networks flatten. Source: after B. Booth/Microsoft, Nov 2013, IEEE 400G study group.]
A back-of-the-envelope sketch of these traffic figures follows this slide.
*http://www.globalservicesmedia.com/global-services/analysis/154845/data-center-cloud-computingtrends-2013-beyond
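The minimal Python sketch below is illustrative and not from the talk; it simply encodes the two headline numbers above, the 930x internal-to-external traffic ratio and the 44% CAGR, with hypothetical helper names and a 5-year horizon chosen for the example.

```python
# Hypothetical back-of-the-envelope sketch of the slide's traffic figures:
# the 930x internal-to-external ratio (Farrington, Facebook, OI Conf. 2013)
# and 44% CAGR in data center traffic (Cisco, 2012). Numbers are illustrative.

EXTERNAL_TO_INTERNAL_RATIO = 930   # kb of internal traffic per kb entering the DC
DC_TRAFFIC_CAGR = 0.44             # 44% compound annual growth rate

def internal_traffic_kb(external_kb: float) -> float:
    """East-west traffic generated by a given amount of ingress traffic."""
    return external_kb * EXTERNAL_TO_INTERNAL_RATIO

def projected_traffic(base_traffic: float, years: int, cagr: float = DC_TRAFFIC_CAGR) -> float:
    """Compound growth: traffic after `years` of growth at `cagr`."""
    return base_traffic * (1.0 + cagr) ** years

if __name__ == "__main__":
    print(internal_traffic_kb(1.0))   # 1 kb in -> 930 kb of internal traffic
    print(projected_traffic(1.0, 5))  # ~6.2x traffic after 5 years at 44% CAGR
```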

Will Networks Dominate DC Cost and Power Concerns?
[Figures: amortized data center cost breakdowns, after H. Liu (Google, IEEE Summer Topicals 2013), L. A. Barroso (The Datacenter as a Computer, 2013), and A. Greenberg (Microsoft, ACM SIGCOMM 2009).]
- Growing concern over the power and cost of the network, but it is not the biggest piece today.
- Optics is becoming most of the network cost (H. Liu*), and it is growing faster than compute and memory.
- But DC efficiency is gated by the network: latency, bottlenecks, predictable latency.
- So it has been easy to exploit increased optics/network cost for performance efficiencies at the DC level.
- When does the leverage of these benefits level off?

Challenges for Physical-Layer Optical Interconnects: Cost and Power
- Historically, gains in $/Gbps and mW/Gbps have been driven by increased per-channel data rates.
- Aggregate data rates are easier to manage with fewer channels, e.g. 400GE is advantageous at 8x50 Gb/s.
- PCIe Gen5 is likely to be 32 Gb/s, and Gen6 could be 64 Gb/s; 64G Fibre Channel is expected to be 56.1 Gb/s; the OIF has defined three serial 56 Gb/s interfaces for future optical interconnects.
- But NRZ is increasingly difficult at these data rates and beyond: electrical packaging losses, driver and receiver performance, link budget.
- CMOS logic is not getting any faster, so mux/demux power increases.
[Figures: $/Gbps vs. year (1980-2020), and $/Gbps and power vs. aggregate raw data rate, for FC (1G-8G), Ethernet (10M-100G 10x10), AOC, and SNAP12 parallel modules (12x2.5G, 12x5G, 4x5G, 4x10G, 12x10G). Data courtesy of Ken Jackson; prices at volume production.]
The lane-count arithmetic behind the fewer-channels point is sketched after this slide.
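As a rough illustration of why higher per-lane rates matter, the Python sketch below (my own arithmetic, not from the talk) counts the lanes needed to reach a 400GE aggregate at several per-lane rates; `lanes_needed` is a hypothetical helper.

```python
# Minimal sketch (illustrative, not from the talk): how per-lane rate drives
# lane count, and hence module complexity, for a given aggregate Ethernet rate.
import math

def lanes_needed(aggregate_gbps: float, per_lane_gbps: float) -> int:
    """Number of electrical/optical lanes to carry the aggregate rate."""
    return math.ceil(aggregate_gbps / per_lane_gbps)

if __name__ == "__main__":
    for per_lane in (10, 25, 50, 100):
        print(f"400GE at {per_lane} Gb/s per lane: {lanes_needed(400, per_lane)} lanes")
    # 400GE: 40 lanes at 10G, 16 at 25G, 8 at 50G, 4 at 100G.
    # Fewer lanes means fewer lasers, fibers, and package escapes per module.
```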

What Are the Technology Choices to Stay on Trend?
- Multimode (MM) VCSELs: getting faster, but distance limited.
- Single-mode (SM) optics: expensive but getting cheaper with Si photonics; no distance issues, but link budget and cost (packaging!) are more challenging.
- SM VCSELs and MM Si photonics: best of both worlds, or worst of both worlds?
- WDM, PAM4, DMT: trade fiber cost for transceiver cost and power (e.g. 56G NRZ has a few dB better link budget than PAM4).
- Forward error correction (FEC) is likely needed; it adds some latency (e.g. ~100 ns for 4-6 dB of gain*).
- And don't forget copper: 25G is possible for a few meters, and it is cheaper than optics!
(A short NRZ vs. PAM4 symbol-rate sketch follows this slide.)
*T. Wang et al., Huawei, IEEE 400GE study group, Nov 2013
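The following minimal Python sketch (illustrative arithmetic, not from the talk) shows the basic NRZ vs. PAM4 trade mentioned above: PAM4 carries 2 bits per symbol, halving the symbol rate at a given bit rate, which is what buys bandwidth at the cost of eye opening and the FEC/link-budget penalties noted in the bullets.

```python
# Illustrative sketch (my own arithmetic, not from the talk): a multi-level
# PAM format carries log2(levels) bits per symbol, so PAM4 halves the symbol
# rate relative to NRZ (PAM2) at the same bit rate.
import math

def symbol_rate_gbaud(bit_rate_gbps: float, levels: int) -> float:
    """Symbol rate needed to carry bit_rate_gbps with the given PAM level count."""
    return bit_rate_gbps / math.log2(levels)

if __name__ == "__main__":
    print(symbol_rate_gbaud(56, 2))   # NRZ (PAM2): 56.0 GBaud
    print(symbol_rate_gbaud(56, 4))   # PAM4:       28.0 GBaud
```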

Competing Technology Space
[Figure: channel speed (Gbps, 1-100) vs. distance (m, 1-1000), showing the regions served by copper, MM optics, and SM optics.]
- OFC 2014 Th3C.2, D. Kuchta et al.: 64 Gb/s transmission over 57 m of MMF using an NRZ-modulated 850 nm VCSEL (also >250 m at 40G and >100 m at 60G).
- Beware that incumbent technologies are not standing still (e.g. VCSELs, copper).

Data Rate & Power: A Packaging Challenge
High data rates drive up the power of the electrical-to-optical link, pushing toward tighter packaging integration of the optics.
- Standard edge-of-drawer module: easy to use, replace, and cable, but the electrical link becomes costly and power-hungry at higher channel data rates (e.g. needs a SerDes in the module, PAM4, etc.).
- Mid-board optics: harder to cable and replace, but mitigates electrical link power and cost, with opportunities for EOE optimization.
- On-package optics: difficult to design, cable, and replace, but offers the best opportunity for lower power and cost; more amenable to HPC and highly custom needs.
[Diagrams: each option shown as logic (microprocessor, memory, switch, etc.) on a first-level package, with the optical module moving from the drawer edge, to mid-board (via connector and jumper), to on-package.]

Summary/Questions
- Staying on the cost and power trend is creating more technology choices and more possibilities for customization.
- Proliferation of alternate technology choices: volumes are key to lower costs, so could too many choices inhibit volume? Cause the market to take longer to adopt? Or create more opportunities for technology niches with reasonable volumes?
- More customization (e.g. technology choice, EOE optimization, packaging): different DCs will have different needs, cost tolerances, and focus areas; advanced packaging favors more sophisticated users and is harder to do with one-size-fits-all designs.
- Does market fragmentation dominate, or do open hardware standards?