1 SDN & NFV: Hoffnung oder Hype? (Hope or Hype?) Martin Dräxler, Holger Karl, Matthias Keller, Sevil Mehraghdam, Arne Schwabe, Philip Wette, University of Paderborn
2 Overview. Software-defined networking: technical context, issues, research examples. Network function virtualization: technical context, research examples.
3 SDN technological context: switches, routers. A typical switch or router combines two structures: a high-performance data plane with simple functionality (per-packet: lookup, switch, buffer; the hardware datapath) and complex decision logic (router software; control and management: CLI, SNMP; routing protocols: OSPF, IS-IS, BGP).
4 Insight: Control interface exists. In typical router/switch configurations, there is a control interface for the actual packet forwarding functionality. It is fairly simple: it talks in terms of matching header fields and simple actions to perform.
5 Simple network? Router software: millions of lines of source code, 7279 RFCs, a barrier to entry. Hardware datapath: 500M gates, 10 GB RAM; bloated, power hungry. Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, ...
6 SDN core idea. Use the existing control interface to the forwarding fabric, but pull the decision logic out of the switches/routers (many of those, distributed, no single view on network status) and replace it by a centralized instance. Centralize control logic!
7 Issues. About what to talk to the brain? When to talk to the brain? How to structure the brain? How many brains, where?
8 About what to talk to the brain: Flows?! What is a flow? An application flow, all HTTP, all shuffle traffic of one node, Jim's traffic, all packets to Canada, ... Types of action: allow/deny flow, route & re-route flow, isolate flow, make flow private, remove flow.
9 Packet-switching substrate in SDN. A packet: Ethernet (DA, SA, etc.), IP (DA, SA, etc.), TCP (DP, SP, etc.), payload. The substrate: a collection of bits to plumb flows (of different granularities) between end points.
10 Popular SDN today: OpenFlow. An OpenFlow controller speaks the OpenFlow protocol (over SSL/TCP) to the control path of an OpenFlow switch, which programs the data path (hardware).
11 OpenFlow protocol v1.0. A flow table entry consists of Rule + Action + Stats (packet and byte counters). The rule matches on header fields, plus a mask selecting which fields to match: switch port, MAC src, MAC dst, Eth type, VLAN ID, IP src, IP dst, IP proto, TCP sport, TCP dport. Actions: 1. forward packet to port(s); 2. encapsulate and forward to controller; 3. drop packet; 4. send to normal processing pipeline.
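The match-plus-action semantics above can be sketched in a few lines of Python. This is a minimal illustration, not real OpenFlow: the field names and the "send_to_controller" miss behavior are assumptions for the sketch; a field absent from a rule's match acts as a wildcard.

```python
# Minimal sketch of an OpenFlow-1.0-style flow table (illustration only).
# Each entry: (priority, match dict, action). Absent fields are wildcards.

class FlowTable:
    def __init__(self):
        self.entries = []  # kept sorted, highest priority first

    def add(self, priority, match, action):
        self.entries.append((priority, match, action))
        self.entries.sort(key=lambda e: -e[0])

    def lookup(self, packet):
        for prio, match, action in self.entries:
            if all(packet.get(f) == v for f, v in match.items()):
                return action
        return "send_to_controller"  # table miss

table = FlowTable()
table.add(2, {"ip_dst": "10.0.0.1"}, "forward:2")
table.add(1, {}, "forward:1")  # catch-all wildcard default

print(table.lookup({"ip_dst": "10.0.0.1"}))  # forward:2
print(table.lookup({"ip_dst": "10.0.0.9"}))  # forward:1
```

With the wildcard default installed proactively, a table miss (and thus a round trip to the controller) becomes the exception rather than the rule.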
12 A detour: Rings a bell? An SDN switch performs an exact match on some restricted header fields and performs some simple actions (rewrite, forward, drop); decisions are made at a central point. We have seen that before: a switch performs an exact match on a label and performs some simple actions (rewrite, forward, drop, push/pop label); decisions made at a central point: MPLS, GMPLS, ... Main differences: no need for label edge switches; wildcarding.
13 A detour: Three standard architectures. Compare three standard architectures (IP, MPLS, SDN) from the perspective of the main three interfaces that exist in any network architecture: Host-to-Network, Packet-to-Switch, Operator-to-Network.

        Host-to-Network             Packet-to-Switch            Operator-to-Network
  IP    Header fields in IP packet  Header fields in IP packet  None
  MPLS  Same                        Label                       Somewhat (e.g., PCE)
  SDN   Depends                     Depends                     Main focus

Idea from , 4WARD papers.
14 Issues. About what to talk to the brain? SDN, OpenFlow: flows. Other schemes (e.g., PCE in MPLS): virtual circuits, ... When to talk to the brain? How to structure the brain? How many brains, where?
15 When to talk to the brain? Whenever a non-matchable flow arrives at a switch; what else is the switch supposed to do? Should this be the rule, or the exception? The rule: this happens a lot, so the brain needs to be extremely fast, with very short latencies, to react properly. The exception: happens rarely. How to achieve that? Reasonable defaults! Preconfiguring the network is imperative, with reasonable timeouts.
16 Issues. About what to talk to the brain? SDN, OpenFlow: flows. Other schemes (e.g., PCE in MPLS): virtual circuits, ... When to talk to the brain? Proactively! Reactive only as fallback. How to structure the brain? How many brains, where?
17 How to structure the brain? Or: what is a controller? Goal: compute flow mods, from many concurrent requests, with lots of repetitive tasks, but also with lots of creative aspects. (Switch to controller: "Unknown flow!" Controller to switch: "Here is a FLOWMOD: match + action.")
18 Controller structure to compute flow mods. Split controllers into a reusable part (deals with concurrency, parsing, security, handling the OpenFlow protocol engine, ...) and a dedicated part (takes the actual decisions, e.g., a fancy multi-path routing scheme, load balancing, ...). Distinguish between: controller framework (just the reusable part), control application (the dedicated part), controller (the two together).
19 Controller frameworks. Many controller frameworks exist; the core difference is the API to connect the control application. Implementation: many options (separate processes or threads, libraries bound together). Examples: Beacon, NOX/POX, Ryu, OpenDaylight, ONOS, ... Some are quite simple and straightforward, some very complex; some come with a complete programming philosophy. [Figures: a monolithic controller (app with control flow and data structures on a controller platform, switch API (OpenFlow), switches) vs. the Pyretic platform (apps: monitor, route, FW, LB; programmer API (Pyretic); runtime; switch API (OpenFlow); switches).]
20 Example network. Internet, an SDN switch with labeled ports, and servers: port 1 towards the Internet, ports 2 and 3 towards servers A and B.
21 A simple OpenFlow router. Rules (priority: pattern -> action):
2: dstip=A -> fwd(2)
2: dstip=B -> fwd(3)
1: * -> fwd(1)
Traffic to A leaves on port 2, traffic to B on port 3; everything else (dstip!=A, dstip!=B) goes out port 1 towards the Internet.
22 Router turns load balancer. Suppose we turn the router into a load balancer for servers A and B. Which header fields could you use? Which rules are needed? Which priorities? Load balancer (public IP p, rewriting the destination):
srcip=0*, dstip=p -> mod(dstip=A)
srcip=1*, dstip=p -> mod(dstip=B)
Then: can we combine the router program and the load balancer program? And THAT is the challenge of SDN!
23 Addressing this challenge: Pyretic (one approach). Controller application for access control, blocking one host:
def access_control():
    return ~(match(srcip=...) & match(dstip=...))
Access control, then flood: access_control() >> flood(). And many more, with simple operators, embedded in Python.
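The composition operators above can be sketched in plain Python. This is an assumed, simplified model of Pyretic-style semantics, not Pyretic itself: a policy maps a packet (a dict) to a list of output packets; sequential composition (Pyretic's >>) feeds outputs onward, negation drops matching packets.

```python
# Simplified sketch of Pyretic-style policy composition (not the real library).

def match(**fields):
    def policy(pkt):
        return [pkt] if all(pkt.get(k) == v for k, v in fields.items()) else []
    return policy

def negate(p):  # Pyretic's ~
    return lambda pkt: [] if p(pkt) else [pkt]

def seq(p1, p2):  # Pyretic's >>
    return lambda pkt: [out for mid in p1(pkt) for out in p2(mid)]

def fwd(port):
    return lambda pkt: [{**pkt, "out_port": port}]

# block one (illustrative) host, forward everything else to port 1
access_control = negate(match(srcip="10.0.0.66"))
policy = seq(access_control, fwd(1))

print(policy({"srcip": "10.0.0.66"}))  # []
print(policy({"srcip": "10.0.0.1"}))   # [{'srcip': '10.0.0.1', 'out_port': 1}]
```

The point of the abstraction: control applications compose as functions, and the runtime compiles the composition down to flow table rules.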
24 Issues. About what to talk to the brain? SDN, OpenFlow: flows. Other schemes (e.g., PCE in MPLS): virtual circuits, ... When to talk to the brain? Proactively! Reactive only as fallback. How to structure the brain? Separate controller framework and control applications; provide means to compose control applications; incorporate into other infrastructure (e.g., Neutron for OpenStack). How many brains, where?
25 A single controller? Dependability jeopardized. Multiple controllers: which does what? Separate roles (master and backup); separate regions (borders? hierarchies? with repartitioning?). Performance jeopardized: local controllers working for regional controllers; low latency! A facility location problem. Classic distributed systems & optimization problems!

Figure 1: Optimal placements for 1 and 5 controllers (k = 1, k = 5) in the Internet2 OS3E deployment, comparing locations in the average-latency-optimized and the worst-case-latency-optimized placement.

Worst-case latency: an alternative metric is the maximum node-to-controller propagation delay:

L_wc(S') = max_{v in V} min_{s in S'} d(v, s)    (2)

where again we seek the minimizing S' of size k. The related optimization problem is minimum k-center. Nodes within a latency bound: rather than minimizing the average or worst case, we might place controllers to maximize the number of nodes within a latency bound; the general version of this problem on arbitrary overlapping sets is called maximum cover. An instance includes a number k and a collection of sets S = {S_1, S_2, ..., S_m}, where S_i is a subset of {v_1, v_2, ..., v_n}. The objective is to find a subset S' of S such that the union of the chosen S_i is maximized and |S'| = k. Each set S_i comprises all nodes within a latency bound from a single node.
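For small instances, the two placement objectives (average latency, i.e., k-median, and worst-case latency per equation (2), i.e., k-center) can simply be brute-forced. A minimal sketch on a toy path topology with precomputed shortest-path distances d:

```python
# Brute-force controller placement: average vs. worst-case latency objectives.
from itertools import combinations

def avg_latency(d, nodes, S):
    return sum(min(d[v][s] for s in S) for v in nodes) / len(nodes)

def worst_latency(d, nodes, S):
    return max(min(d[v][s] for s in S) for v in nodes)

def best_placement(d, nodes, k, objective):
    # try every k-subset of nodes as the controller set S'
    return min(combinations(nodes, k), key=lambda S: objective(d, nodes, S))

# toy path topology 0 - 1 - 2 - 3 - 4 with unit-length links
nodes = range(5)
d = {v: {s: abs(v - s) for s in nodes} for v in nodes}

print(best_placement(d, nodes, 1, avg_latency))    # (2,)
print(best_placement(d, nodes, 1, worst_latency))  # (2,)
```

On a symmetric path both objectives agree; on real topologies such as OS3E they diverge, which is exactly the point of Figure 1.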
26 Issues. About what to talk to the brain? SDN, OpenFlow: flows. Other schemes (e.g., PCE in MPLS): virtual circuits, ... When to talk to the brain? Proactively! Reactive only as fallback. How to structure the brain? Separate controller framework and control applications; provide means to compose control applications; incorporate into other infrastructure (e.g., Neutron for OpenStack). How many brains, where? Complex decision problem; highly depends on scenario.
27 Research example: MaxiNet: Distributed Emulation of Software-Defined Networks. https://www.cs.upb.de/?id=maxinet
28 How to emulate a data center? Data centers have a high number of switches and servers, high-speed links (10 Gbps), and high link utilization (core, pods, ToRs, racks). Evaluate SDN ideas: use the Mininet emulator, which runs many machines/switches as processes in Linux network namespaces. Key: time dilation; emulate one second of a 10G link by 10 seconds of a 1G link.
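The time dilation idea is simple arithmetic: slow down virtual time until no emulated resource exceeds its physical counterpart. A back-of-the-envelope sketch (the function and its parameters are illustrative assumptions, not MaxiNet's actual calibration):

```python
# Back-of-the-envelope time dilation factor (illustrative sketch).

def dilation_factor(emulated_link_gbps, physical_link_gbps,
                    emulated_cpu_load, physical_cpu_capacity):
    # each ratio says how many seconds of real time are needed to
    # faithfully reproduce one second of emulated time on that resource
    link_ratio = emulated_link_gbps / physical_link_gbps
    cpu_ratio = emulated_cpu_load / physical_cpu_capacity
    return max(1.0, link_ratio, cpu_ratio)

# a 10G link emulated on 1G hardware needs at least a factor of 10
print(dilation_factor(10, 1, 1, 1))  # 10.0
```

In practice the bottleneck is often CPU rather than link speed, which is why the experiments later use a much larger factor than the link ratio alone would suggest.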
29 MaxiNet: Distributing Mininet to multiple machines. MaxiNet is a framework for distributing Mininet emulations onto multiple workers. [Figure: the virtual network (core, pods, ToRs, racks) mapped onto a cluster of workers.]
30 MaxiNet at a glance. MaxiNet partitions the virtual topology into N parts; switches must not be split in half; goal: minimize the edge cut. From each partition a new topology is built and emulated using Mininet on a dedicated worker.
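The partitioning objective (minimize the edge cut) can be illustrated with a tiny brute force. This is only a sketch of the objective, not the heuristic MaxiNet actually uses; real topologies need a graph partitioner such as METIS.

```python
# Brute-force minimum edge cut for a bipartition (illustration of the
# objective only; not scalable, and not MaxiNet's actual algorithm).
from itertools import product

def edge_cut(edges, assignment):
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def best_bipartition(nodes, edges):
    best = None
    for bits in product([0, 1], repeat=len(nodes)):
        if len(set(bits)) < 2:  # both parts must be non-empty
            continue
        assignment = dict(zip(nodes, bits))
        cut = edge_cut(edges, assignment)
        if best is None or cut < best[0]:
            best = (cut, assignment)
    return best

# two triangles joined by a single link: the optimal cut is that one link
nodes = ["a", "b", "c", "x", "y", "z"]
edges = [("a","b"), ("b","c"), ("a","c"),
         ("x","y"), ("y","z"), ("x","z"), ("c","x")]
cut, parts = best_bipartition(nodes, edges)
print(cut)  # 1
```

Each resulting part then becomes one Mininet instance on one worker; only the cut edges need tunnels between workers, which is why minimizing the cut matters.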
31 Test setup. Generated 60 seconds of TCP traffic for the data center. Clos-like topology: 20 servers per rack, 160 racks; 8 ToR switches form a pod with 2 pod switches each; the core layer consists of 7 switches. We emulated 207 switches and 3600 servers, using a time dilation factor of 200. The MaxiNet cluster consisted of 12 machines: Intel Xeon E5506 CPUs (2x 2.16 GHz quad-core), 12 GB RAM, 1 Gbit/s NICs wired to a Cisco Catalyst 2960G-24TC-L switch. Implemented ECMP on top of the Beacon controller; the controller was placed out-of-band, directly connected to the Cisco Catalyst 2960G-24TC-L.
32 Result: Load at the OpenFlow controller. [Figure: CPU utilization [%] and RX/TX data rates [Mbit/s] at the controller over time [s].] On average 4% CPU utilization and 5 Mbit/s of traffic (time dilation factor: 200). => Using our ECMP implementation in a real data center, the controller would have to be at least 8x faster than our lab machine.
33 Research example: DCT²Gen: A Versatile TCP Traffic Generator for Data Centers
34 Where to get input traffic for emulation? Want: Layer-4 TCP traffic; not easily available for large data centers. Available: some observations of Layer-2 traffic. Conceivable workflow: (1) analyze observed L2 traces into observed L2 traffic distributions; (2) abstract these into inferred L4 traffic distributions; (3) generate L4 traffic; (4) schedule it; (5) emulate, yielding generated L2 traces; (6) analyze those into generated L2 traffic distributions and check whether they match the observed ones. Steps 2 through 4 are part of DCT²Gen.
35 Challenges. Payload and ACK traffic, traffic matrices, rack awareness, ... Example: deconvolving payload/ACK traffic. [Figure: the observed Layer-2 flow-size distribution (PDF/CDF) is determined by the Layer-4 payload-size distribution together with the ACK-size distribution it implies; recovering the payload-size distribution means inverting this convolution.]
36 DCT²Gen: Practical use case. Download our TCP traffic traces. Obtain your own L2 traffic and use DCT²Gen to produce your own TCP traces. Change the L2 input distributions to obtain synthetic TCP traces. Feed the TCP traces to an evaluation tool. DCT²Gen combines particularly well with MaxiNet, but also with Mininet, network simulators, ... Let's start a collection! SNDlib for data centers!
37 Research example: MAC Addresses as Efficient Routing Labels in Data Centers
38 Problem statement: Forwarding in data centers. [Figure: a switch forwarding table mapping destination MACs to ports, e.g., 0a:3f:92 -> port 1, 2a:77:b4 -> port 2, a6:d7:f9 -> port 3, 2c:1c:66 -> port 1, af:e2:8f -> port 1, matching flows such as 00:aa:cc -> 0a:3f:92.] These tables are very large (one entry per host) and MAC addresses cannot be aggregated.
39 Using labels and wildcards. Instead of one table entry per destination MAC, rewrite destinations to labels (e.g., ffa00001, ffa00002, ...) and forward on a handful of wildcard rules: ffa????1 -> port 1, ffa????2 -> port 2, ffa????3 -> port 3, ffa????4 -> port 4.
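The wildcard forwarding idea can be sketched directly. A small illustrative model (labels as strings, '?' matching any single character, as in the slide's ffa????1-style rules; the concrete labels are made up for the example):

```python
# Sketch of wildcard label forwarding: '?' matches any single character.

def matches(pattern, label):
    return len(pattern) == len(label) and all(
        p == "?" or p == c for p, c in zip(pattern, label))

# four wildcard rules replace thousands of per-host entries
rules = [("ffa????1", 1), ("ffa????2", 2), ("ffa????3", 3), ("ffa????4", 4)]

def forward(label):
    for pattern, port in rules:
        if matches(pattern, label):
            return port
    return None  # table miss -> ask the controller

print(forward("ffa00001"))  # 1
print(forward("ffa09882"))  # 2
```

The table size now depends on the number of output ports, not on the number of hosts; the hard part (finding a good label assignment) is the NP-complete problem mentioned on the next slides.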
40 Adding labels. [Figure: destination MACs (e.g., 0a:3f:92) are mapped to label addresses (e.g., ff:a0:01); switches then need only wildcard rules such as ff:a?:?1 -> port 1, ff:??:?? -> port 1, ee:??:?? -> port 3, cc:??:?? -> port 12.] The labels can be set up via ARP by an SDN controller.
41 Challenges. Objective: find labels that minimize table sizes, with no operating system changes. Finding such labels is NP-complete; a good heuristic is available.
42 Intermediate summary: SDN. Software-defined networks are an archetypical example of a hype: lots of attention for a concept that existed (under a different name) for a long time already, but now with better marketing. Huge industry interest (and politics): standardization fora (Open Networking Foundation, IRTF SDNRG); an industry consortium for a controller platform: OpenDaylight (Cisco vs. Big Switch, ...). Coexistence of controller platforms: open question. SDN in isolation: advantages, but not a cure-all. Claim: it gets really useful when combined with application knowledge.
43 Overview. Software-defined networking: technical context, issues, research examples. Network function virtualization: technical context, research examples.
44 Network Function Virtualization in ISP networks. ISP networks operate many functions on the actual flows (and not just on signalling data, as in SDN). Examples: firewalls, deep packet inspection, load balancers, signal processing (e.g., CoMP in mobile access networks). Conventional approach: one function, one box; expensive, slow rollout, ... Virtualize! => Virtualized network functions: commodity boxes operate on packet flows; rollout means installing/activating software on a box and adapting the routing (=> relationship to SDN).
45 NFV: Current developments. Heavily pushed by ISPs; still lots of research as well as development going on. Big EU projects: T-NOVA, UNIFY, ... Key questions: architecture, interfaces, orchestration approaches. Initial standardization efforts: ETSI MANO working group.
46 ETSI Network Function Virtualization components. [Figure: an end-to-end network service between two network service end points is a VNF forwarding graph (aka service chain) of logical links between VNF instances; VNF = Virtualized Network Function. The VNFs run as software instances on the NFV Infrastructure (NFVI): virtual compute, virtual storage, and virtual network resources, provided by a virtualization layer on top of physical compute, storage, and network hardware.]
47 ETSI MANO: Management and Orchestration architecture. [Figure: NFV-MANO comprises the NFV Orchestrator (NFVO) with NS catalog, VNF catalog, NFV instances, and NFVI resources repositories; the VNF Manager (VNFM); and the Virtualised Infrastructure Manager (VIM). On the execution side: OSS/BSS, EMS, VNF, and NFVI. Main reference points: Os-Nfvo, Or-Vnfm, Or-Vi, Vnfm-Vi, VeEn-Vnfm, VeNf-Vnfm, Nf-Vi, and the execution reference point Vn-Nf.] Source: ETSI NFV MANO WI document (ongoing work)
48 Research example: Local heuristics for individual flow processor placement
49 Where to process which flow? A similar problem to SDN controller placement, but now flows have to pass through the processing nodes (in SDN, the controller is usually not on the data path). Formally: a facility location problem combined with a multi-commodity flow problem. Variants: just process a flow (possibly combining multiple flows): mobile backhaul networks; process a flow and return an answer: distributed cloud computing (DCC). Server or network function: does not really matter. Here: a local heuristic.
50 NFV/DCC: Local placement heuristic. Idea: assign customers to nearby facilities until a facility is full, then keep looking; open a facility when a user arrives. Essentially: expanding ring search.
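The expanding ring search can be sketched in a few lines. This is an illustrative reading of the heuristic, not the paper's exact algorithm: users arrive one by one, and each is assigned to the nearest facility with spare capacity, searched in rings of growing radius.

```python
# Sketch of expanding ring search for flow processor placement.
# d[user][facility] holds precomputed distances (hop counts here).

def place(user, facilities, capacity, load, d, max_radius):
    for radius in range(max_radius + 1):
        # facilities within the current ring that still have capacity
        ring = [f for f in facilities
                if d[user][f] <= radius and load[f] < capacity[f]]
        if ring:
            f = min(ring, key=lambda f: d[user][f])  # nearest wins
            load[f] += 1
            return f
    return None  # no capacity anywhere in range

# toy line topology 0..4: facilities at nodes 0 and 4, capacity 1 each
facilities = [0, 4]
capacity = {0: 1, 4: 1}
load = {0: 0, 4: 0}
d = {u: {f: abs(u - f) for f in facilities} for u in range(5)}

print(place(1, facilities, capacity, load, d, 4))  # 0 (nearest, has room)
print(place(1, facilities, capacity, load, d, 4))  # 4 (0 is now full)
```

The heuristic needs only local state per facility, which is the point: no global optimization run per arriving user.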
51 Research example: Specifying and Placing Chains of Virtual Network Functions
52 From individual network functions to chains. Usually more than one network function is needed to process a flow: network functions form chains (ETSI MANO: Virtual Network Function Forwarding Graph). How to specify a function chain? Simple: fully ordered, but each step limits the flexibility of placement. Flexible: leave concurrency in the specification; order does not always matter! Placement can then trade off processing against data rate, delay, ... [Figure: alternative orderings of functions f1, f2, f3 with a load balancer LB.]
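A flexible chain specification is essentially a partial order on the functions; any topological order is a valid concrete chain. A minimal sketch (the function names f1, f2, f3 echo the figure; the enumeration-by-permutation approach is only for illustration):

```python
# Sketch: a flexible chain spec as a partial order; every linearization
# (topological order) is a valid concrete placement order.
from itertools import permutations

def linearizations(functions, before):
    """All total orders consistent with the 'before' constraints."""
    valid = []
    for perm in permutations(functions):
        pos = {f: i for i, f in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in before):
            valid.append(perm)
    return valid

# f1 and f2 may run in either order, but both must precede f3
orders = linearizations(["f1", "f2", "f3"],
                        before=[("f1", "f3"), ("f2", "f3")])
print(orders)  # [('f1', 'f2', 'f3'), ('f2', 'f1', 'f3')]
```

A placement algorithm can pick whichever linearization best fits the substrate, which is exactly the flexibility a fully ordered chain throws away.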
53 Extension: Template embedding. Current work: templates for NFV chains. Do not fix the chain structure, but rather specify the relative performance and capabilities of each stage (e.g., one database can serve 10 web servers). Then embed the template, adapting it to the current load situation, using the relationships encoded in the template.
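The "one database serves 10 web servers" relationship is a scaling rule that embedding can apply to the current load. A minimal sketch (the per-web-server request capacity of 200 requests/s is an invented illustrative number; the database ratio follows the slide):

```python
# Sketch of template scaling: relative capacities in the template are
# turned into instance counts for a given load.
from math import ceil

def scale_template(requests_per_s, web_capacity=200, web_per_db=10):
    web = ceil(requests_per_s / web_capacity)  # enough web servers for the load
    db = ceil(web / web_per_db)                # one database per 10 web servers
    return {"web": web, "db": db}

print(scale_template(2500))  # {'web': 13, 'db': 2}
```

The template itself stays load-independent; only the embedding step instantiates concrete counts.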
54 Intermediate summary: NFV. Heavy original push from industry, rather than from academia; addresses a concrete, real-world problem, but perhaps with less sparkle than SDN. First standardization is under way, yet considerable open issues remain: scalability, non-trivial network functions, ...
56 SFB 901: On-the-fly computing. Not just software-defined networks, but also software-defined infrastructure. Not just software-defined infrastructure, but also software-defined software. Actually: software is created at usage time, using request information; software components are configured automatically.
57 Conclusions. Our infrastructures will become more flexible, more adaptive; not only networking: servers in any case, and storage is getting there. There is hope that this will simplify infrastructures. There is unquestionably incredible hype. Do we have the knowledge to exploit that? Do we have the foresight to exploit that?
59 References
1. Various talks by Nick McKeown on SDN.
2. B. Heller et al., "Tutorial: SDN for Engineers," Open Networking Summit, Santa Clara, April.
3. J. Reich, "Modular SDN Programming with Pyretic," Princeton. lang.org/pyretic
4. M. Casado, T. Koponen, S. Shenker, and A. Tootoonchian, "Fabric: a retrospective on evolving SDN," in Proc. First Workshop on Hot Topics in Software Defined Networks (HotSDN '12), 2012.
5. B. Heller, R. Sherwood, and N. McKeown, "The controller placement problem," ACM SIGCOMM Computer Communication Review, vol. 42, no. 4, 2012.
6. See for a very good list of papers!
60 Backup slides
61 Balance then route (in sequence)
Balance:
srcip=0*, dstip=p -> mod(dstip=A)
srcip=1*, dstip=p -> mod(dstip=B)
Route:
dstip=A -> fwd(2)
dstip=B -> fwd(3)
* -> fwd(1)
Combined rules? (only one match per packet)
srcip=0*, dstip=p -> mod(dstip=A), fwd(2)
srcip=1*, dstip=p -> mod(dstip=B), fwd(3)
* -> fwd(1)
Naive pairings go wrong: some candidate combinations balance without forwarding, others forward without balancing; the rewritten packet must be threaded through the routing rules.
62 Route and monitor (in parallel)
Route:
dstip=A -> fwd(2)
dstip=B -> fwd(3)
* -> fwd(1)
Monitor:
srcip=X -> count
Combined rules installed on the switch? Naive merges either forward but do not count, or count but do not forward.
63 Requires a cross product [ICFP '11, POPL '12]
Route:
dstip=A -> fwd(2)
dstip=B -> fwd(3)
* -> fwd(1)
Monitor:
srcip=X -> count
Cross product:
srcip=X, dstip=A -> fwd(2), count
srcip=X, dstip=B -> fwd(3), count
srcip=X -> fwd(1), count
dstip=A -> fwd(2)
dstip=B -> fwd(3)
* -> fwd(1)
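The cross-product construction for parallel composition can be sketched directly: intersect every pair of matches from the two policies and union their actions, dropping contradictory pairs. A simplified illustration (rule priorities and action ordering are glossed over):

```python
# Sketch of the cross-product construction for parallel rule composition.
# A policy is a list of (match dict, action list); absent fields = wildcards.

def intersect(m1, m2):
    out = dict(m1)
    for k, v in m2.items():
        if k in out and out[k] != v:
            return None  # contradictory match, pair drops out
        out[k] = v
    return out

def cross_product(policy1, policy2):
    combined = []
    for m1, a1 in policy1:
        for m2, a2 in policy2:
            m = intersect(m1, m2)
            if m is not None:
                combined.append((m, a1 + a2))
    return combined

route = [({"dstip": "A"}, ["fwd(2)"]), ({}, ["fwd(1)"])]
monitor = [({"srcip": "X"}, ["count"]), ({}, [])]

rules = cross_product(route, monitor)
for rule in rules:
    print(rule)
```

Every packet now hits exactly one combined rule that both forwards and counts where appropriate, which is what the naive merges on the previous slide failed to achieve.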