Optical interconnection networks for data centers The 17th International Conference on Optical Network Design and Modeling Brest, France, April 2013 Christoforos Kachris and Ioannis Tomkos Athens Information Technology (AIT) email: kachris@ait.edu.gr, itom@ait.edu.gr
60 secs on the web
How Big are Data Centers?
Data Center Site          Sq. footage
Facebook (Santa Clara)     86,000
Google (South Carolina)   200,000
HP (Atlanta)              200,000
IBM (Colorado)            300,000
Microsoft (Chicago)       700,000
For comparison, Wembley Stadium: 172,000 sq. ft.
[Source: How Clean is Your Cloud?, Greenpeace, 2011]
Cloud Computing Traffic According to Cisco's Global Cloud Index, IP traffic over data center networks will reach 4.8 Zettabytes a year by 2015, and cloud computing will account for one third of it, or 1.6 Zettabytes.
Data Center Traffic In 2010, 77% of traffic remained within the data center; this share will decline only slightly, to 76%, by 2015.
Data Centers Power Consumption Data centers consumed 330 billion kWh in 2007, a figure expected to reach 1,012 billion kWh by 2020.
               2007 (billion kWh)   2020 (billion kWh)
Data Centers        330                 1,012
Telecoms            293                   951
Total Cloud         623                 1,963
[Source: How Clean is Your Data Center, Greenpeace, 2012]
Carbon Emissions of the ICT Sector 830 MtCO2e (2% of total emissions), projected to grow to 1,430 MtCO2e [Source: SMART 2020: Enabling the low carbon economy in the information age, Greenpeace] Data centers are the fastest-growing contributor to the ICT sector's carbon footprint. (In 2002, the global data center footprint, including equipment use and embodied carbon, was 76 MtCO2e, and this is expected to more than triple by 2020, to 259 MtCO2e.) The CO2 emissions of 10,000 Google searches equal those of a five-mile trip in the average U.S. automobile [Source: Google, Inc.]
Power Consumption of Data Centers The cost to power and cool data centers is approaching the CAPEX cost. Data center networks (DCNs) consume as much as 23% of the total power; data center servers consume as much as 40% of the total power in a data center. [Source: Where does power go?, www.greendataproject.org]
Data Center Networks
[Figure: fat-tree data center network — Internet, content switches & load balancers, core switches, aggregate switches, ToR switches, and servers in racks; 10 Gbps links between layers]
Data center networks are usually based on a fat-tree topology built from commodity switches:
Core layer: interconnects the aggregate switches (10 Gbps)
Aggregate layer: interconnects the ToR switches (10 Gbps)
Rack layer: Top-of-Rack (ToR) switches connect the servers in a rack (1 Gbps)
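The fat-tree layer counts above can be made concrete with a short sketch. This assumes the standard k-ary fat-tree construction from the literature (k-port commodity switches throughout); the slide does not commit to a specific parameterization.

```python
# Sizing a k-ary fat tree built entirely from k-port commodity switches.
# Assumption: the standard k-ary construction (k pods, k/2 edge and k/2
# aggregation switches per pod, (k/2)^2 core switches), not a vendor design.

def fat_tree_size(k: int) -> dict:
    """Switch and host counts for a k-ary fat tree."""
    assert k % 2 == 0, "k must be even"
    core = (k // 2) ** 2        # core switches interconnecting the pods
    aggregation = k * (k // 2)  # k/2 aggregation switches in each of k pods
    edge = k * (k // 2)         # k/2 edge (ToR) switches in each of k pods
    hosts = k ** 3 // 4         # each edge switch serves k/2 hosts
    return {"core": core, "aggregation": aggregation,
            "edge": edge, "hosts": hosts}

print(fat_tree_size(48))  # 48-port switches support 27,648 hosts
```

With 48-port switches the topology scales to tens of thousands of servers, which is why commodity fat trees became the baseline that the optical proposals below compare against.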
Optics Interconnects Hierarchy [Source: IBM, Internal Optical Interconnects ]
From Opaque to Transparent So far, optical interconnects have been used only for point-to-point links, mainly for higher bandwidth and better density.
Opaque to Transparent Networks Telecommunications vs. Data Centers [Source: Altan Kocyigit, All Optical Networking]
How Green is All-Optical? Current switches consume a high amount of energy for E/O and O/E conversion, buffers, and switch fabrics. [Source: R. Tucker, Are Optical Networks Green?]
Need for All-Optical Interconnects We need high-bandwidth, low-latency, scalable, energy-efficient data center networks that can sustain the exponential increase in network traffic. All-optical interconnects offer:
No need for buffering
Switching performed by passive components at the wavelength level (e.g., splitters, wavelength-switching AWGRs)
Higher bandwidth (wavelength multiplexing)
Lower latency
Lower power consumption, footprint, and carbon emissions
Future Data Center Networks Current networks suffer high power consumption due to O/E and E/O conversions and electronic switches; future networks target terabit optical interconnects over WDM links. We need high-radix, scalable, energy-efficient data center networks that can sustain the exponential increase in network traffic. [Sources: Cisco, Petabit Optical Switch for Data Center Networks; Scaling Networks in Large Data Centers, Facebook]
The Optical Toolbox
Coupler
AWGR: wavelength switching
Optical MEMS: spatial switching
WSS: wavelength and spatial switching
A Survey on Optical Interconnects In recent years, several research papers have proposed the use of optical interconnects for data center networks. They fall into two main categories:
Hybrid schemes (commodity networks enhanced with optical circuits)
All-optical schemes, which are either packet-based or circuit-based
c-Through Architecture c-Through was introduced in 2010 (Rice, CMU, Intel). The ToR switches are connected both to an electrical packet-based network (i.e., Ethernet) and to an optical circuit-based network. Pros: easy to upgrade current networks. Cons: the circuit-switched network can only provide a matching on the graph of racks; reconfiguration takes several ms. [Source: A Survey on Optical Interconnects for Data Centers, C. Kachris, IEEE Communications Surveys & Tutorials]
Helios Architecture Helios was introduced in 2010 by UCSD. Helios is similar to the c-Through architecture but is based on WDM links (superlinks) that aggregate several wavelengths. These superlinks can carry up to w x 10 Gbps (where w is the number of wavelengths, from 1 to 32). Pros: higher bandwidth per link. Cons: the circuit-switched network can only provide a partial matching on the graph of racks; reconfiguration takes several ms.
DOS Architecture The DOS architecture was introduced in 2010 by UC Davis The switching in the DOS architecture is based on Arrayed Waveguide Grating Router (AWGR) that allows contention resolution in the wavelength domain.
DOS Architecture The optical switch fabric consists of an array of tunable wavelength converters (one TWC for each node), an AWGR and a loopback shared buffer. Each node can access any other node through the AWGR by configuring the transmitting wavelength of the TWC. The switch fabric is configured by the control plane that controls the TWC and the label extractors (LEs).
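The wavelength-routing step above can be sketched in a few lines. This assumes the common cyclic-routing convention for an N x N AWGR, in which input port i on wavelength index w exits output port (i + w) mod N; real devices may use a different sign or offset convention, so treat this as illustrative of DOS-style TWC tuning, not as the paper's exact model.

```python
# Cyclic AWGR routing sketch (assumed convention: input i on wavelength
# index w exits output (i + w) mod N). The TWC at each node picks the
# wavelength that steers its packet to the desired output port.

def awgr_output(n_ports: int, input_port: int, wavelength: int) -> int:
    """Output port reached from `input_port` on wavelength index `wavelength`."""
    return (input_port + wavelength) % n_ports

def twc_wavelength(n_ports: int, src: int, dst: int) -> int:
    """Wavelength index the TWC at `src` must tune to in order to reach `dst`."""
    return (dst - src) % n_ports

N = 8
w = twc_wavelength(N, src=2, dst=5)
assert awgr_output(N, input_port=2, wavelength=w) == 5
print(f"node 2 reaches node 5 on wavelength index {w}")  # index 3
```

Because every (src, dst) pair maps to a distinct wavelength at each input, two inputs can reach different outputs simultaneously without blocking, which is the contention-resolution-in-the-wavelength-domain property the slide refers to.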
DOS Architecture Pros: Fast switching Elimination of aggregate switches Cons: Complex contention resolution and arbitration High power consumption due to multiple O/E and E/O converters (in the SDRAM and in the label extractors) Scalability (number of ports in the AWGR)
Petabit Architecture The Petabit architecture was introduced in 2011 by the Polytechnic Institute of New York. The Petabit switch fabric is based on AWGRs and tunable wavelength converters. It adopts a three-stage Clos network, and each stage consists of an array of AWGRs that are used for the passive routing of packets. The switch is combined efficiently with electronic buffering (in the line cards) and scheduling.
Petabit Architecture The main difference compared to the DOS architecture is that Petabit switch does not use any buffers inside the switch fabric (thus avoiding the power hungry E/O and O/E conversion). Instead, the congestion management is performed using electronic buffers in the Line cards and an efficient scheduling algorithm. Pros: Highly scalable Elimination of contention buffers Cons: High number of tunable wavelength converters
Proteus Architecture The Proteus architecture is an all-optical architecture based on Wavelength Selective Switch (WSS) modules and a MEMS-based optical switching matrix. The optical wavelengths are combined using a multiplexer and routed to a WSS. The WSS demultiplexes the wavelengths into up to k different groups, and each group is connected to a port of the MEMS optical switch.
Proteus Architecture The switching configuration of the MEMS determines which ToRs are connected directly. If a ToR switch has to communicate with a ToR switch that is not directly connected, it uses hop-by-hop communication. Proteus must ensure that the entire ToR graph remains connected when performing a MEMS reconfiguration. Pros: Lower power consumption due to the elimination of the aggregate switches (73 kW compared to 160 kW for a reference design) Cons: High reconfiguration time when the MEMS switch needs to be reconfigured High cost due to the number of WSS modules required
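The connectivity requirement above can be checked with a standard graph traversal before committing a new MEMS configuration. This is a minimal sketch using BFS; Proteus's actual reconfiguration algorithm is not described in the slides, so the check below is illustrative only.

```python
# Sketch: verify that a proposed set of MEMS circuits leaves the ToR graph
# connected, so hop-by-hop forwarding can still reach every rack.
# Illustrative only; not Proteus's actual reconfiguration algorithm.
from collections import deque

def is_connected(num_tors: int, links: list[tuple[int, int]]) -> bool:
    """BFS from ToR 0 over the circuit links; True if every ToR is reachable."""
    adj = {t: [] for t in range(num_tors)}
    for a, b in links:
        adj[a].append(b)
        adj[b].append(a)
    seen = {0}
    queue = deque([0])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == num_tors

# A ring of 4 ToRs is connected; two disjoint pairs are not.
assert is_connected(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert not is_connected(4, [(0, 1), (2, 3)])
```

A configuration that fails this check would strand some racks until the next reconfiguration, which is why the check must run before, not after, the (slow) MEMS switchover.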
OFDM-based Interconnects Optical interconnects can be enhanced by exploiting OFDM sub-carriers, which provide better spectral efficiency and fine-grained bandwidth allocation. Conventional WDM uses single channels; WDM with OFDM sub-carriers forms super-channels.
OFDM-based Interconnects In the WSS-based scheme, each node consists of a rack that accommodates several servers, and each rack uses a Top-of-Rack (ToR) switch to communicate with other racks. Each ToR switch has several optical transceivers (e.g., 1 Gbps SFP) to connect to the servers and one or more optical transceivers facing the optical switching matrix.
[Figure: WSS-based optical switching — ToR switches, MUX, coupler, WSS modules, and optical switching matrix; wavelengths λ1–λ8]
[C. Kachris, I. Tomkos, Optical OFDM-based Data Center Networks, OFC 2012]
OFDM-based Interconnects Power reduction is achieved through the use of fewer transceivers, enhanced with OFDM for better spectral efficiency.
[Figure 5: Power consumption (kW) of optical ToR switches — WDM vs. WSS vs. WSS+OFDM — as the number of racks grows from 80 to 320]
The Cascade Effect Saving 1 Watt at the server or network level results in cumulative savings of about 2.84 Watts in total power consumption. [Source: Berk-Tek: The Choice for Data Center Cabling, September 2008]
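The 2.84x figure compounds because a Watt saved at the server also avoids the upstream losses needed to deliver and cool it. The stage-by-stage multipliers below are illustrative assumptions (PSU, power distribution, and cooling overheads); only the overall ~2.84 factor comes from the cited source.

```python
# Worked example of the cascade effect: losses compound multiplicatively
# through the power chain. Individual multipliers are assumed for
# illustration; only the ~2.84x total is from the Berk-Tek source.

saved_at_server_w = 1.0
multipliers = {
    "power supply loss":  1.18,  # assumed PSU efficiency ~85%
    "power distribution": 1.12,  # assumed UPS/PDU losses
    "cooling overhead":   2.15,  # assumed cooling energy per delivered Watt
}

total = saved_at_server_w
for stage, factor in multipliers.items():
    total *= factor
    print(f"after {stage:<20} cumulative saving = {total:.2f} W")

print(f"total facility saving: {total:.2f} W")  # ~2.84 W
```

This is also why the optical architectures above target the network layer: every Watt removed there is multiplied by the whole facility overhead.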
Conclusions Future data center networks will require high bandwidth to handle cloud computing traffic, yet their power consumption will have to remain almost flat. Optical interconnects look like a promising way to provide high-bandwidth, low-latency, and energy-efficient interconnects.
Questions? C. Kachris, I. Tomkos, A Survey on Optical Interconnects for Data Centers, IEEE Communications Surveys & Tutorials, 2012 C. Kachris, K. Bergman, I. Tomkos, Optical Interconnects for Future Data Center Networks, Springer, expected end of 2012 Bibliography on optical interconnects for data centers: www.ait.gr/ait_web_site/researcher/kachris/optical_interconnect_bibliography.htm Thank you! Christoforos Kachris kachris@ait.edu.gr