Control Plane architectures for Photonic Packet/Circuit Switching-based Large Scale Data Centres
Salvatore Spadaro
Optical Communications group, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
spadaro@tsc.upc.edu
Acknowledgment: LIGHTNESS project partners
Symposium on Next Generation Data Center: Paving the way for the Zettabyte Era, ECOC 2013, London, September 24th, 2013
Table of contents
- The FP7 EU LIGHTNESS project
- Some drivers/use cases for data centre network evolution
- LIGHTNESS approach
- Data Plane architecture (TOR, OCS and OPS)
- Control Plane architecture (SDN)
- Overall architecture
- Conclusions
The LIGHTNESS EU FP7 project
Low latency and high Throughput dynamic NEtwork infrastructures for high performance datacentre interconnects (grant 318606, November 2012 to October 2015)
Consortium (industry and academia):
- Interoute (Coordinator)
- Technical University of Eindhoven (TU/e)
- Nextworks
- Barcelona Supercomputing Centre (BSC)
- University of Bristol (Technical leader)
- Universitat Politècnica de Catalunya (UPC)
- Infinera
- University of California, Davis
Website: www.ict-lightness.eu
The LIGHTNESS EU FP7 project
Main objectives:
- Design and prototype high-capacity transport networks for intra-data centre interconnects
- Design and prototype a highly scalable, strictly non-blocking OPS node
- Design and prototype a hybrid Top of the Rack (TOR) switch
- Design and prototype a unified network control plane for dynamic, on-demand provisioning of high-bandwidth network services
Data Centres: Drivers for evolution
- DCs are growing in size to accommodate the ever-increasing demand for cloud services
- Data centre traffic is expected to quadruple in the next few years, and is mainly east-west traffic (within data centres)
- DCs are required to provide more powerful IT capabilities, more storage space and more capacity
- Flexibility, power efficiency, QoS guarantees, cost-effectiveness, lower management complexity
- Lower time to market for new services to be deployed
Use case 1: DCN self-optimisation
- Service constraints (QoS) and characteristics must remain invariant during the service lifetime
[Figure: two snapshots of the Data Centre Network Fabric interconnecting TORs. After a performance degradation is discovered, the network service for customer A's application is reconfigured, while a new network service is provisioned for customer B's application; each customer's applications reach their own storage resources]
Use case 2: Service recovery
- Service constraints (QoS) and characteristics must remain invariant during the service lifetime
[Figure: after a network failure in the Data Centre Network Fabric, the network service connecting the customer's application to its storage resource is reconfigured to recover from the failure condition]
Use case 3: Service scheduling
- Scheduled content replication for high availability
- Advance reservation support
[Figure: a scheduled network service relocates the customer's critical data from the primary storage to the backup storage across the Data Centre Network Fabric]
Data Centres: Current architecture
- Multi-tier hierarchical architecture (core, aggregation, access, TOR), based on commodity Ethernet/InfiniBand switch fabrics
Main drawbacks:
- Scalability, latency, power consumption
- Current management and control functions are almost static and require manual intervention
- More flexible, automated and programmable solutions are needed
Optics in DCN solutions
- Flat optical DCN architecture, based on the combination of OPS and OCS switching technologies and a novel TOR design
- Ultra-high bandwidth provided by WDM technology
- Enhanced scalability and transmission rate, lower power consumption

Architecture    | Year | Elect./Opt. | Circuit/Packet | Scalability | Cap. limit | Prototype
C-Through       | 2010 | Hybrid      | Hybrid         | Low         | Tx/Rx      | Yes
Helios          | 2010 | Hybrid      | Hybrid         | Low         | Tx/Rx      | Yes
Proteus         | 2010 | All-optical | Circuit        | Medium      | Tx/Rx      | Yes
LIONS           | 2012 | All-optical | Packet         | Medium      | TWC, AWGR  | Yes
Petabit, IRIS   | 2010 | All-optical | Packet         | Medium      | TWC, AWGR  | No
Cascaded AWGRs  | 2013 | All-optical | Circuit        | High        | TWC, AWGR  | Yes
MIS             | 2004 | All-optical | Packet         | Low         | SOA        | Yes
Data Vortex     | 2006 | All-optical | Packet         | Low         | SOA        | Yes
LIGHTNESS DCN architecture
Overall architecture:
- OCS technology to handle long-lived traffic flows (no strict latency requirements)
- OPS technology to handle short-lived traffic flows (strict latency requirements)
- The TOR classifies flows and maps them to OCS/OPS
- Control plane to automate the network service provisioning
[Figure: hybrid OPS/OCS ToRs with CP Agents, an OPS node and an OCS node, all under a unified LIGHTNESS Control Plane; a Northbound Interface towards DC Management & Orchestration and Network Applications, a Southbound Interface towards the CP Agents, and an inter-DC CP interface]
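The TOR's flow mapping described above can be sketched in a few lines. This is a minimal illustration, not the project's actual classification logic; the byte threshold and function names are assumptions.

```python
# Hypothetical sketch of the TOR's flow mapping: long-lived flows go to
# OCS, short-lived / latency-sensitive flows go to OPS. The threshold is
# an illustrative assumption, not a LIGHTNESS parameter.
LONG_LIVED_BYTES = 10 * 1024 * 1024  # assumed cutoff: 10 MB observed so far

def map_flow(bytes_seen: int, latency_sensitive: bool) -> str:
    """Return the switching technology a flow should be mapped to."""
    if latency_sensitive or bytes_seen < LONG_LIVED_BYTES:
        return "OPS"  # short-lived or latency-critical traffic
    return "OCS"      # long-lived bulk traffic without strict latency needs
```

In practice such a decision would be taken in the TOR's FPGA data path, but the policy itself is as simple as the sketch suggests.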
DCN Data Plane: TOR design
- High-speed FPGA platforms and hybrid opto-electronic transceivers
- Traffic from servers is parsed and mapped to OCS/OPS
- Interfaces with the unified Control Plane for configuration and monitoring
[Figure: electronic server-to-ToR connectivity feeds an electronic data parsing/switching stage with mapping/scheduling onto the OCS/OPS transport; Rx/Tx blocks towards OPS switching and 40/100G transceivers towards OCS switching]
DCN Data Plane: OPS switch fabric
- WDM OPS: Spanke-type modular switch with distributed control
- Input ports: N = F x M (where F is the number of input fibres, each carrying M wavelengths)
- Modular WDM OPS architecture based on 1xF photonic switches
- Contention resolution blocks at each output, based on fixed wavelength conversion
- Highly distributed control, low latency*
W. Miao, J. Luo, S. Di Lucente, H. Dorren, N. Calabretta, "Novel flat data center network architecture based on scalable and flow-controlled optical switch system", PD4.F.3, PDs session on Thursday
* S. Di Lucente et al., "Scaling Low-Latency Optical Packet Switches to a Thousand Ports", IEEE/OSA JOCN, 2012
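The port-count relation N = F x M above is worth making concrete; a one-line helper with illustrative numbers (the 32 x 32 configuration is an assumption, chosen only to show the thousand-port scale):

```python
def ops_input_ports(fibres: int, wavelengths_per_fibre: int) -> int:
    """Total OPS input ports N = F x M, as defined above."""
    return fibres * wavelengths_per_fibre

# e.g. F = 32 input fibres, each carrying M = 32 wavelengths,
# gives N = 1024 input ports
```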
DCN CP requirements/features
- Simplify the management of the applications running inside data centres
- Provision connectivity services quickly, to meet the SLA requirements of dynamic intra-data centre traffic patterns
- Handle failure conditions: manage alarm notifications coming from the underlying data plane devices and recover the affected services
- Extensibility: easily extended to cope with additional features and technologies
- Isolation of logical resources, secure accessibility and traffic segregation for multi-tenancy
- Expose network resource information to both data centre management (orchestration) and applications
- Support for multiple optical technologies (OCS/OPS)
- Support for inter-DC connectivity
SDN-based control plane
- Centralised SDN controller
- Integrated control plane supporting heterogeneous optical technologies
- Data plane abstraction
- Cooperation with other controllers
- Creation and management of multiple co-existing and independent network slices (virtual networks)
Interfaces:
- Northbound: DC management, orchestration, applications
- Southbound: e.g., extended OpenFlow protocol
- East-west: cooperation with other controllers
[Figure: the centralized SDN controller sits between the Northbound interface (towards DC management, orchestration and applications) and the Southbound interface (towards the CP Agents of the OPS/OCS ToRs, the OPS node and the OCS node); an East/West interface connects peer controllers, plus an intra/inter-DC interface]
SDN-based control plane solution
The SDN controller implements:
- Network service provisioning
- Path/flow computation
- Monitoring functions
- Topology discovery
- Resource abstraction
Additional network functions can be programmed through the open Northbound APIs: dynamic network service optimization, etc.
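As an illustration of the path/flow computation listed above, a minimum-hop breadth-first search over an abstracted topology; the adjacency-dict representation and node names are hypothetical, a stand-in for the controller's internal topology model.

```python
from collections import deque

def compute_path(topology: dict, src: str, dst: str):
    """Breadth-first search returning a minimum-hop path, as a simple
    stand-in for the controller's path/flow computation over the
    abstracted DCN topology (adjacency dict: node -> list of neighbours)."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:   # walk predecessors back to src
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbour in topology.get(node, []):
            if neighbour not in prev:
                prev[neighbour] = node
                queue.append(neighbour)
    return None  # destination unreachable

# Toy abstracted DCN: two ToRs reachable through an OPS node
topology = {"ToR1": ["OPS"], "OPS": ["ToR1", "ToR2"], "ToR2": ["OPS"]}
```

A production controller would weight edges by load or latency (e.g., Dijkstra over monitored link metrics) rather than hop count, but the structure is the same.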
SDN-based control plane solution
SDN controller features:
- Abstraction: resource abstraction at the Southbound interface
- Programmability: additional and enhanced network functions can be programmed through the open Northbound APIs
- Interoperability: network applications on top of the SDN controller for enhanced routing functions (e.g., PCE)
- Open Northbound APIs enable a potential interaction with Cloud Management Systems (e.g., OpenStack)
Interfaces (I)
Northbound:
- Provides APIs, procedures and mechanisms to provision on-demand and flexible network services in the data centre
- Both resource requirements (bandwidth, computing, storage, etc.) and network performance constraints (latency, etc.) can be conveyed
- Abstracted DCN topology information is exposed for monitoring and data centre orchestration purposes
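A northbound service request carrying both resource requirements and performance constraints might look like the following sketch; every field name here is a hypothetical assumption for illustration, not an actual LIGHTNESS API.

```python
import json

# Hypothetical northbound request body: field names are assumptions and
# do not reflect an actual LIGHTNESS interface definition.
service_request = {
    "service_id": "customer-a-storage-sync",
    "endpoints": ["ToR-3", "ToR-7"],
    "bandwidth_gbps": 40,    # resource requirement
    "max_latency_us": 50,    # performance constraint
    "schedule": None,        # advance-reservation slot, if any
}

print(json.dumps(service_request, indent=2))
```

The latency constraint is what lets the controller steer the service onto OPS rather than OCS, per the flow-mapping policy described earlier.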
Interfaces (II)
Southbound:
- Communication with the network devices deployed in the DCN (TOR, OCS, OPS)
- Implements procedures to discover, configure and monitor their status
- OpenFlow protocol extensions: expertise and demonstrations*; information model still to be designed for OPS
East-West: multiple controllers (scalability)
* M. Channegowda talk, SDN workshop
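Conceptually, the OpenFlow extensions could augment a flow entry with circuit-oriented fields alongside the packet fields. The shape below is a hypothetical sketch of such an extended entry, not the protocol actually defined in the project.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OpticalFlowEntry:
    """Hypothetical extended flow entry for the optical southbound:
    standard port fields augmented with a wavelength field for OCS
    cross-connects and a label field for OPS packet flows."""
    in_port: int
    out_port: int
    wavelength_nm: Optional[float] = None  # set for OCS cross-connects
    ops_label: Optional[int] = None        # set for OPS packet flows

# An OCS cross-connect on 1550.12 nm between ports 1 and 4
xc = OpticalFlowEntry(in_port=1, out_port=4, wavelength_nm=1550.12)
```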
Conclusions
- Hybrid optical switching technologies provide transport services tailored to application requirements (latency, throughput)
- Unified control plane based on a centralised SDN controller
- Efficient management of data centre resources
- Abstraction models for the hybrid optical technologies
- OF extensions for OCS and OPS switching capabilities