Optical interconnection networks for data centers

Optical interconnection networks for data centers
The 17th International Conference on Optical Network Design and Modeling (ONDM 2013), Brest, France, April 2013
Christoforos Kachris and Ioannis Tomkos
Athens Information Technology (AIT)
email: kachris@ait.edu.gr, itom@ait.edu.gr

60 secs on the web

How Big are Data Centers?

Data Center Site          Sq. footage
Facebook (Santa Clara)         86,000
Google (South Carolina)       200,000
HP (Atlanta)                  200,000
IBM (Colorado)                300,000
Microsoft (Chicago)           700,000

For comparison, Wembley Stadium: 172,000 square ft
[Source: "How Clean is Your Cloud?", Greenpeace, 2011]

Cloud Computing Traffic
According to Cisco's Global Cloud Index, IP traffic over data center networks will reach 4.8 Zettabytes a year by 2015, and cloud computing will account for one third of it, or 1.6 Zettabytes.

Data Center Traffic
In 2010, 77% of traffic remained within the data center, and this share will decline only slightly, to 76%, by 2015.

Data Centers Power Consumption
Data centers consumed 330 billion kWh in 2007, and consumption is expected to reach 1012 billion kWh in 2020.

                2007 (billion kWh)    2020 (billion kWh)
Data Centers           330                  1012
Telecoms               293                   951
Total Cloud            623                  1963

[Source: "How Clean is Your Data Center", Greenpeace, 2012]

Carbon Emissions of ICT Sector
The ICT sector's footprint was 830 MtCO2e in 2007 (2% of total global emissions) and is expected to grow to 1430 MtCO2e by 2020. [Source: "SMART 2020: Enabling the Low Carbon Economy in the Information Age"]
Data centers are the fastest growing contributor to the ICT sector's carbon footprint: in 2002 the global data centre footprint, including equipment use and embodied carbon, was 76 MtCO2e, and this is expected to more than triple by 2020, to 259 MtCO2e.
The CO2 emissions of 10,000 Google searches equal those of a five-mile trip in the average U.S. automobile. [Source: Google, Inc.]

Power Consumption of Data Centers
The cost to power and cool data centers is approaching the CAPEX cost.
Data center networks (DCN) consume as much as 23% of the total power consumption.
Data center servers consume as much as 40% of the total power consumption in a data center.
[Source: "Where does power go?", www.greendataproject.org]

Data Center Networks
[Figure: typical data center network; the Internet feeds content switches and load balancers, then core switches, aggregate switches, and ToR switches connecting the servers in each rack, with 10 Gbps links between layers]
Data center networks are usually based on a fat-tree topology using commodity switches:
- Core layer: interconnects the aggregate switches (10 Gbps)
- Aggregate layer: interconnects the ToR switches (10 Gbps)
- Rack layer: Top-of-Rack (ToR) switches connect the servers in a rack (1 Gbps)
The taper from 1 Gbps server links to shared 10 Gbps uplinks makes such trees oversubscribed, as the sketch below illustrates.
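
To make the bandwidth hierarchy concrete, here is a minimal Python sketch that computes the oversubscription ratio of such a tree. The per-rack numbers (40 servers, one 10 Gbps uplink per ToR) are illustrative assumptions, not figures from the talk.

```python
# Minimal sketch: oversubscription in a fat-tree-style data center network.
# Assumed (illustrative) values -- not taken from the presentation.
SERVERS_PER_RACK = 40      # servers attached to one ToR switch
SERVER_LINK_GBPS = 1.0     # 1 Gbps server-to-ToR links (rack layer)
TOR_UPLINKS = 1            # uplinks from each ToR toward the aggregate layer
UPLINK_GBPS = 10.0         # 10 Gbps aggregate/core links

offered = SERVERS_PER_RACK * SERVER_LINK_GBPS   # traffic the servers can offer
capacity = TOR_UPLINKS * UPLINK_GBPS            # capacity toward the core

print(f"ToR oversubscription ratio: {offered / capacity:.1f}:1")  # -> 4.0:1
```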

Optical Interconnects Hierarchy
[Source: IBM, "Internal Optical Interconnects"]

From Opaque to Transparent
So far, optical interconnects have been used only for point-to-point links, mainly to achieve higher bandwidth and better density.

Opaque to Transparent Networks
[Figure: the evolution from opaque to transparent networks, in telecommunications and in data centers]
[Source: Altan Kocyigit, "All Optical Networking"]

How Green is All-Optical?
Current switches consume a large amount of energy in the E/O and O/E conversions, the buffers, and the switch fabrics.
[Source: R. Tucker, "Are Optical Networks Green?"]

Need for All-Optical Interconnects
We need high-bandwidth, low-latency, scalable, energy-efficient data center networks that can sustain the exponential increase of network traffic.
All-optical interconnects offer:
- No need for buffering
- Switching performed at the wavelength level using passive components (e.g. splitters, wavelength switching with AWGRs)
- Higher bandwidth (wavelength multiplexing)
- Lower latency
- Lower power consumption, footprint, and carbon emissions

Future Data Center Networks
[Figure: current data center networks, with high power consumption due to O/E and E/O conversions and electronic switches, evolving to future networks based on terabit optical interconnects and WDM links]
[Sources: Cisco; "Petabit Optical Switch for Data Center Networks"; "Scaling Networks in Large Data Centers", Facebook]
We need high-radix, scalable, energy-efficient data center networks that can sustain the exponential increase of network traffic.

The Optical Toolbox
- Coupler
- AWGR: wavelength switching
- Optical MEMS: spatial switching
- WSS: wavelength and spatial switching

A Survey on Optical Interconnects
In recent years, several research papers have proposed the use of optical interconnects for data center networks. They fall into two main categories:
- Hybrid schemes (commodity networks enhanced with optical circuits)
- All-optical schemes, further divided into:
  - Packet-based schemes
  - Circuit-based schemes

c-Through Architecture
c-Through was introduced in 2010 (Rice University, CMU, Intel). The ToR switches are connected both to an electrical packet-based network (i.e. Ethernet) and to an optical circuit-based network.
Pros: easy to upgrade current networks.
Cons: the circuit-switched network can only provide a matching on the graph of racks (see the sketch below); reconfiguration takes several milliseconds.
[Source: C. Kachris, "A Survey on Optical Interconnects for Data Centers", IEEE Communications Surveys and Tutorials]
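
Because each rack has a single optical port, configuring the circuit switch amounts to choosing a matching on the rack-to-rack traffic graph. Below is a minimal sketch of that idea using networkx's max_weight_matching as a stand-in for the scheduler; the traffic demands are invented for illustration, and this is not c-Through's actual control code.

```python
# Sketch: choosing optical circuits as a max-weight matching on the rack graph.
# Traffic demands are invented for illustration.
import networkx as nx

# Estimated inter-rack traffic: (rack pair) -> demand in Gbps
demand = {("r1", "r2"): 40, ("r1", "r3"): 5, ("r2", "r4"): 25, ("r3", "r4"): 30}

G = nx.Graph()
for (a, b), gbps in demand.items():
    G.add_edge(a, b, weight=gbps)

# Each rack has one optical port, so the circuit configuration is a matching:
# no rack may appear in more than one circuit.
circuits = nx.max_weight_matching(G)
print(sorted(tuple(sorted(e)) for e in circuits))
# -> [('r1', 'r2'), ('r3', 'r4')]  (the heaviest demands are served optically)
```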

Helios Architecture
Helios was introduced in 2010 by UCSD. It is similar to the c-Through architecture but is based on WDM links ("superlinks") that aggregate several wavelengths. These superlinks can carry up to w x 10 Gbps, where w is the number of wavelengths (from 1 to 32).
Pros: higher bandwidth per link.
Cons: the circuit-switched network can only provide a partial matching on the graph of racks; reconfiguration takes several milliseconds.

DOS Architecture
The DOS architecture was introduced in 2010 by UC Davis. Switching in DOS is based on an Arrayed Waveguide Grating Router (AWGR), which allows contention resolution in the wavelength domain.

DOS Architecture
The optical switch fabric consists of an array of tunable wavelength converters (one TWC per node), an AWGR, and a loopback shared buffer. Each node can reach any other node through the AWGR by configuring the transmitting wavelength of its TWC. The switch fabric is configured by the control plane, which controls the TWCs and the label extractors (LEs).
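
An AWGR routes cyclically: the output port a signal exits depends on both its input port and its wavelength, which is why tuning the TWC selects the destination. Here is a minimal sketch of that cyclic-routing rule, assuming the common (input + wavelength) mod N convention; the exact port/wavelength mapping of the DOS prototype may differ.

```python
# Sketch: cyclic wavelength routing in an N x N AWGR.
# Convention assumed here: a signal entering input port i on wavelength index w
# exits output port (i + w) % N. Real devices may use a different, but equally
# cyclic, mapping.
N = 8  # AWGR port count

def awgr_output(input_port: int, wavelength_index: int, n: int = N) -> int:
    """Output port reached from input_port when transmitting on wavelength_index."""
    return (input_port + wavelength_index) % n

def tune_twc(input_port: int, output_port: int, n: int = N) -> int:
    """Wavelength index the TWC must tune to so input_port reaches output_port."""
    return (output_port - input_port) % n

w = tune_twc(input_port=2, output_port=5)
print(w, awgr_output(2, w))  # -> 3 5
```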

DOS Architecture
Pros: fast switching; elimination of the aggregate switches.
Cons: complex contention resolution and arbitration; high power consumption due to multiple O/E and E/O conversions (in the SDRAM buffer and in the label extractors); limited scalability (number of ports in the AWGR).

Petabit Architecture
The Petabit architecture was introduced in 2011 by the Polytechnic Institute of New York. The Petabit switch fabric is based on AWGRs and tunable wavelength converters. It adopts a three-stage Clos network, and each stage consists of an array of AWGRs that are used for the passive routing of packets. The switch is combined efficiently with electronic buffering (in the line cards) and scheduling.
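
To see why a Clos arrangement of AWGRs scales toward petabit capacity, consider a symmetric three-stage Clos built from k x k modules: k first-stage modules, each with k external ports and one link to every middle-stage module, support k^2 external ports. The sizing sketch below is illustrative arithmetic under that assumption, not the exact configuration from the Petabit paper.

```python
# Sketch: port scaling of a three-stage Clos fabric built from k x k AWGRs.
# Illustrative sizing only; the Petabit paper's exact parameters may differ.
def clos_ports(k: int) -> int:
    """External ports of a symmetric three-stage Clos of k x k modules."""
    # k input modules, each exposing k external ports (all module outputs
    # go to the k middle-stage modules) -> k * k external ports in total.
    return k * k

for k in (32, 64, 128):
    ports = clos_ports(k)
    tbps = ports * 100 / 1000  # assuming 100 Gb/s per port
    print(f"{k}x{k} AWGRs -> {ports} ports -> {tbps:.1f} Tb/s")
```

With 128 x 128 modules and 100 Gb/s ports this reaches roughly 1.6 Pb/s, the scale the architecture's name suggests.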

Petabit Architecture
The main difference from the DOS architecture is that the Petabit switch does not use any buffers inside the switch fabric (thus avoiding the power-hungry E/O and O/E conversions). Instead, congestion management is performed using electronic buffers in the line cards and an efficient scheduling algorithm.
Pros: highly scalable; elimination of contention buffers.
Cons: high number of tunable wavelength converters.

Proteus Architecture
The Proteus architecture is an all-optical architecture based on Wavelength Selective Switch (WSS) modules and an optical switching matrix based on MEMS. The optical wavelengths are combined using a multiplexer and routed to a WSS. The WSS routes the wavelengths into up to k different groups, and each group is connected to a port on the MEMS optical switch.

Proteus Architecture
The switching configuration of the MEMS determines which ToRs are connected directly. When a ToR switch has to communicate with a ToR switch that is not directly connected, it uses hop-by-hop communication through intermediate ToRs. Proteus must therefore ensure that the entire ToR graph remains connected when the MEMS is reconfigured (see the sketch below).
Pros: lower power consumption due to the elimination of the aggregate switches (73 kW, compared to 160 kW for a reference design).
Cons: high reconfiguration time when the MEMS switch needs to be reconfigured; high cost due to the number of WSS modules required.
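
The connectivity invariant is easy to state as code: before committing a new MEMS configuration, check that the resulting ToR graph is one connected component. A minimal sketch using networkx follows; the candidate circuits are invented for illustration, and this is not Proteus's actual control logic.

```python
# Sketch: validating that a candidate MEMS configuration keeps the ToR graph
# connected. Candidate circuits are invented for illustration.
import networkx as nx

tors = ["t1", "t2", "t3", "t4", "t5"]
candidate_circuits = [("t1", "t2"), ("t2", "t3"), ("t3", "t4"), ("t4", "t5")]

G = nx.Graph()
G.add_nodes_from(tors)
G.add_edges_from(candidate_circuits)

if nx.is_connected(G):
    print("safe to reconfigure: every ToR can reach every other hop-by-hop")
else:
    print("reject configuration: some ToRs would be unreachable")
```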

OFDM-based Interconnects
Optical interconnects can be enhanced by the exploitation of OFDM subcarriers, which provide better spectral efficiency and fine-grain bandwidth allocation.
[Figure: conventional WDM with single channels vs. OFDM-based WDM, in which sub-carriers form super-channels]
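
The fine-grain allocation comes from assigning a demand only as many subcarriers as it needs, instead of a whole wavelength. A minimal sketch of that arithmetic, assuming illustrative granularities (10 Gbps wavelengths split into 1.25 Gbps subcarriers); these values are assumptions, not figures from the talk.

```python
# Sketch: fine-grain bandwidth allocation with OFDM subcarriers.
# Granularity values are illustrative assumptions.
import math

WAVELENGTH_GBPS = 10.0   # capacity of one WDM wavelength
SUBCARRIER_GBPS = 1.25   # capacity of one OFDM subcarrier (8 per wavelength)

def subcarriers_needed(demand_gbps: float) -> int:
    return math.ceil(demand_gbps / SUBCARRIER_GBPS)

demand = 3.75  # Gbps
used = subcarriers_needed(demand)
print(f"{demand} Gbps -> {used} subcarriers "
      f"({used * SUBCARRIER_GBPS:.2f} Gbps allocated, vs. a full "
      f"{WAVELENGTH_GBPS} Gbps wavelength without OFDM)")
```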

OFDM-based Interconnects
In the WSS-based scheme, each node consists of a rack that accommodates several servers, and each rack uses a Top-of-Rack (ToR) switch to communicate with the other racks. Each ToR switch has several optical transceivers (e.g. 1 Gbps SFP) to connect to the servers, and one or more optical transceivers facing the WSS-based optical switching matrix.
[Figure: WSS-based optical switching matrix; ToR switches connected through MUX, coupler, and WSS modules carrying wavelengths λ1-λ8]
[C. Kachris, I. Tomkos, "Optical OFDM-based Data Center Networks", OFC 2012]

OFDM-based Interconnects
Power reduction is achieved through the use of fewer transceivers, enhanced with OFDM for better spectral efficiency.
[Figure 5: power consumption (kW) of optical ToR switches, comparing WDM and WSS-based designs for 80 to 320 racks]

The Cascade Effect
1 Watt saved at the server or network level results in cumulative savings of about 2.84 Watts in total power consumption.
[Source: Berk-Tek, "The Choice for Data Center Cabling", September 2008]
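
At data center scale the multiplier adds up quickly. A minimal sketch of the arithmetic; the fleet size and per-server saving are invented for illustration.

```python
# Sketch: cumulative effect of per-device savings (cascade factor from the slide).
# Fleet size and per-device saving are invented for illustration.
CASCADE_FACTOR = 2.84   # total W saved per 1 W saved at the server/network level

servers = 10_000        # assumed fleet size
watts_saved_each = 5.0  # assumed per-server saving at the IT level

total_kw = servers * watts_saved_each * CASCADE_FACTOR / 1000
print(f"{total_kw:.0f} kW saved facility-wide")  # -> 142 kW
```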

Conclusions
Future data center networks will require high bandwidth to cope with the traffic of cloud computing. However, their power consumption will have to remain almost the same. Optical interconnects look like a promising solution for providing high-bandwidth, low-latency, and energy-efficient interconnects.

Questions?
C. Kachris, I. Tomkos, "A Survey on Optical Interconnects for Data Centers", IEEE Communications Surveys and Tutorials, 2012
C. Kachris, K. Bergman, I. Tomkos, "Optical Interconnects for Future Data Center Networks", Springer, end of 2012
Bibliography on optical interconnects for data centers: www.ait.gr/ait_web_site/researcher/kachris/optical_interconnect_bibliography.htm
Thank you!
Christoforos Kachris, kachris@ait.edu.gr