Prospects for Software Defined Networking & Network Function Virtualization in Media and Broadcast
John Ellerton, Head of Media Futures, British Telecom
john.ellerton@bt.com
Agenda
- The Promise of SDN and NFV
- A Reality Check
- Research Lab Design Considerations
- Lab Functional Components
- Lab Phases
- Findings & Challenges
- Conclusions
The Promise of SDN & NFV
Design goals for future broadcast and media networks:
- Minimize management
- Converged IP infrastructure
- Increasing endpoint mobility
- Need for elasticity
The trinity: programmability, virtualization, centralization
Defining NFV
- We use a variety of proprietary appliances to provide media and broadcast functions
- The concept of virtualization is well known and not new:
  - Operating system virtualization (virtual machines)
  - Computational and application resource virtualization (cloud computing)
- Network Function Virtualization virtualizes the class of network function:
  - Replaces specialist hardware with instances of virtual services provided on service nodes in the network
  - Enables high-volume services and functions on generic platforms
SDN & NFV: A Reality Check
Why Software Defined Networking and Network Functions Virtualization?
- SDN is not a new protocol
- You do not buy off-the-shelf SDN
- Building out infrastructure using NFV will take time
Bandwidth, Compute & Storage
- UHD production will increase bandwidth demand (see the back-of-envelope sketch below):
  - 1080p, 8-bit 4:2:2, 50 fps uncompressed @ 3 Gb/s
  - 2160p, 12-bit 4:2:2, 50 fps uncompressed @ 24 Gb/s
  - 4320p, 12-bit 4:2:2, 50 fps uncompressed @ 96 Gb/s
- We can now begin to virtualize broadcast functions:
  - JPEG 2K encode/decode
  - Uncompressed-to-IP encapsulation
  - Packet generation and analysis
- Production needs massive, flexible storage on demand:
  - Petabyte-scale, high-performance storage nodes
  - Real-time QoS
  - Automated setup and teardown of nodes
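As a rough illustration of why UHD strains network capacity, a back-of-envelope calculation of raw active-picture data rates (a hypothetical sketch: it counts picture samples only, whereas the per-format figures quoted above correspond to serial interface or higher-frame-rate configurations that also carry blanking and ancillary data, so the numbers will not match exactly):

```python
# Back-of-envelope uncompressed video bandwidth (active picture only).
# Note: rates computed here are raw active-picture figures and will not
# necessarily match interface link rates, which include framing overhead.

def active_picture_gbps(width, height, fps, bit_depth, chroma="4:2:2"):
    """Return active-picture data rate in Gb/s for a Y'CbCr format."""
    # Average samples per pixel: 4:4:4 -> 3, 4:2:2 -> 2, 4:2:0 -> 1.5
    samples = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[chroma]
    return width * height * fps * bit_depth * samples / 1e9

formats = [
    ("1080p50  8-bit 4:2:2", 1920, 1080, 50, 8),
    ("2160p50 12-bit 4:2:2", 3840, 2160, 50, 12),
    ("4320p50 12-bit 4:2:2", 7680, 4320, 50, 12),
]
for name, w, h, fps, depth in formats:
    print(f"{name}: {active_picture_gbps(w, h, fps, depth):5.1f} Gb/s")
```

Even on this conservative basis, each step up in resolution roughly quadruples the data rate, which is what drives the demand for elastic network and storage capacity.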
Research Lab Design Considerations
- Merchant silicon: general-purpose, commodity off-the-shelf Ethernet platforms
- Reduce network complexity: replace distributed control planes with a centralized controller
- Optical transport: flexible optical networking concept
- Open Application Program Interfaces (APIs): push/pull configuration and information directly to and from each layer
Lab Functional Components
- Applications: video service scheduler, load balancer
- Packet controller: OpenDaylight 1.0, an open-source, modular, pluggable, flexible SDN controller platform (a sketch of driving its northbound API follows this list)
- Media gateways: physical and virtual solutions
- Optical switching: optical ROADMs
- Optical network hypervisor: abstracts the underlying transport network from client SDN controllers
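For illustration, a minimal sketch of programming a forwarding rule through OpenDaylight's RESTCONF northbound interface (a hypothetical example: the controller address, credentials, node ID, port and multicast address are placeholders, and datastore paths and ports vary between ODL releases):

```python
# Push a simple forwarding rule into an OpenFlow switch via OpenDaylight's
# RESTCONF config datastore. Paths/fields follow the Hydrogen/Helium-era
# openflowplugin model; adjust for the controller release actually deployed.
import requests

ODL = "http://192.0.2.10:8181"      # placeholder controller address
AUTH = ("admin", "admin")            # default ODL credentials (change!)
NODE, TABLE, FLOW_ID = "openflow:1", 0, "video-feed-1"

flow = {
    "flow": [{
        "id": FLOW_ID,
        "table_id": TABLE,
        "priority": 500,
        "match": {
            "ethernet-match": {"ethernet-type": {"type": 0x0800}},
            "ipv4-destination": "233.252.0.1/32"  # example multicast group
        },
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{
                "order": 0,
                "output-action": {"output-node-connector": "2"}
            }]}
        }]}
    }]
}

url = (f"{ODL}/restconf/config/opendaylight-inventory:nodes/"
       f"node/{NODE}/table/{TABLE}/flow/{FLOW_ID}")
resp = requests.put(url, json=flow, auth=AUTH)
resp.raise_for_status()
print("Flow installed:", resp.status_code)
```

In the lab, the video service scheduler sits above calls like this one, translating a booked media service into per-switch flow entries.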
Phase 1
Phase 1 Findings
- It worked (eventually): automated scheduling, setup and teardown of broadcast services across multiple layers (IP, over OpenFlow, over optical)
- Whitebox software problems: equipment was plagued with incompatibility problems, requiring numerous upgrades and bug workarounds
- Hardware-based video encoding: powerful but inflexible
- Optical transport layer abstraction: a limited API meant limited control automation
Phase 2
[Architecture diagram: ScheduALL (XML open API) and the ADVA Video Orchestrator (REST APIs) sit above two open controllers, OpenDaylight Helium (ODL 1.3, OpenFlow 1.3) and OpenDaylight Hydrogen (ODL 1.0, OpenFlow 1.0), which in turn control Aperi whitebox media gateways (Broadcom OF-DPA) and the ADVA optical hypervisor on the path from source to destination.]
Phase 2 Findings
- Packet controller: OpenDaylight Helium, with OpenFlow 1.3 and Open vSwitch support
- Media functions virtualization: reprogrammable FPGA-based cards
  - JPEG 2K encode/decode
  - Uncompressed-to-IP encapsulation
  - Hitless switching
  - Packet generation and analysis
- Service and network resiliency: testing underlined the need for hitless service switching; setup/teardown of services was impacted by single failures of key components (see the fast-failover sketch below)
- Maintenance and stability of whitebox switches: firmware loading could vary from a few seconds to several minutes
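One OpenFlow 1.3 mechanism relevant to the resiliency findings is the fast-failover group, which lets the data plane switch to a backup port without a controller round trip. A minimal sketch driving Open vSwitch from Python (bridge name, port numbers and multicast address are hypothetical):

```python
# Configure an OpenFlow 1.3 fast-failover group on an Open vSwitch bridge:
# traffic normally leaves via port 1; if its liveness fails, the switch
# falls back to port 2 locally, without waiting for the controller.
import subprocess

BRIDGE = "br0"  # hypothetical bridge name

def ofctl(*args):
    subprocess.run(["ovs-ofctl", "-O", "OpenFlow13", *args], check=True)

# Fast-failover group: the first bucket whose watch port is live wins.
ofctl("add-group", BRIDGE,
      "group_id=1,type=ff,"
      "bucket=watch_port:1,output:1,"
      "bucket=watch_port:2,output:2")

# Steer the video flow (example multicast destination) into the group.
ofctl("add-flow", BRIDGE,
      "priority=500,ip,nw_dst=233.252.0.1,actions=group:1")
```

Local failover of this kind narrows, but does not by itself close, the gap to truly hitless switching, which also needs frame-accurate alignment of the redundant feeds.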
Phase 3: There's More to Be Done
- Optical domain flexibility & bandwidth: the optical domain should be as flexible as the IP/Ethernet layers; Elastic Optical Networks (EONs) utilize the recent ITU-T flexi-grid (flexible bit rates to beyond 100 Gb/s)
- Virtualised Infrastructure Management (VIM): OpenStack for virtual media encoders, caching nodes and file storage (see the sketch below); OpenStack does not appear to meet important SDN & NFV requirements, such as distribution, networking, operational optimization, and data plane optimization
- Architecture, interfaces and models: the ETSI NFV Reference Architectural Framework is a candidate architecture for Phase 3
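As a sketch of using OpenStack as the VIM, booting a virtual media encoder with the openstacksdk Python client (the cloud name, image, flavor and network below are hypothetical placeholders, not names from the lab):

```python
# Boot a virtual media-encoder instance via the OpenStack API, standing in
# for the kind of on-demand media function the lab would spin up.
import openstack

conn = openstack.connect(cloud="media-lab")  # hypothetical clouds.yaml entry

image = conn.compute.find_image("jpeg2k-encoder-v1")    # placeholder image
flavor = conn.compute.find_flavor("m1.xlarge")          # placeholder flavor
network = conn.network.find_network("media-transport")  # placeholder net

server = conn.compute.create_server(
    name="encoder-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
# Block until the encoder VM is ACTIVE, after which it can be added to the
# scheduler's pool of available media functions.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

Lifecycle calls like this cover compute placement, but they do not by themselves address the data plane and networking optimizations noted above, which is why the ETSI NFV framework is being considered for Phase 3.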
Current Challenges
- Underlay network abstraction: an abstracted representation of each server layer (optical and Ethernet) and client layer (IP) is an important goal
- Role of standards and open source: limited engagement and participation from Standards Development Organizations; Open Source communities were easier to engage with, offering immediate access to software platforms and an active, willing support community
- Integration of whitebox switching into legacy OSS/BSS: we have yet to see viable management platforms for very large numbers of whitebox switches that allow integration with existing Operations and Business Support Systems
Conclusions
- It is possible to build high-performance broadcast networks using SDN today
- Rapid path switching of uncompressed video signals works using OpenFlow and low-cost commodity Ethernet switches
- Provisioning of virtualized broadcast functions works well with an OpenFlow network, allowing cost-effective repurposing of hardware
- OpenFlow can be used with optical networks to provision capacity on demand
- Elastic Optical Networks may provide even more flexible optical capacity in the future
- There is more work to do on robustness, scaling, interoperability and standards
Acknowledgements & Questions
John Ellerton, Andrew Lord, Paul Gunning, Kristan Farrow, Paul Wright (British Telecom, United Kingdom)
Daniel King, David Hutchison (Lancaster University, United Kingdom)
Contact: john.ellerton@bt.com
We thank our hardware and software partners for their ongoing support, including ScheduALL, Nevion, Aperi, ADVA Optical Networking, and Intel Labs. Special thanks to Telefonica, Verizon, Orange and the BBC, who were willing to share their thoughts and ideas with us. Finally, we would like to acknowledge and thank our university partners and the EPSRC-funded project Towards Ultimate Convergence of All Networks (TOUCAN).