Towards Software-Defined Networks. Network Infrastructures. Tommaso Melodia E-mail: tmelodia@eng.buffalo.edu



Based on slides from Nick McKeown, Scott Shenker, Kurose-Ross, and Tim Hinrichs.

Outline. A brief review of how current networks work; the software-defined networking paradigm; network virtualization and FlowVisor; network operating systems; debugging through software-defined networks.

Network Layer. Transports segments from the sending to the receiving host. On the sending side it encapsulates segments into datagrams; on the receiving side it delivers segments to the transport layer. Network-layer protocols run in every host and router; a router examines the header fields of all IP datagrams passing through it.

Two key network-layer functions. Forwarding: move packets from a router's input to the appropriate router output. Routing: determine the route taken by packets from source to destination (routing algorithms). Analogy: routing is the process of planning a trip from source to destination; forwarding is the process of getting through a single interchange.

Interplay between routing and forwarding. The routing algorithm determines the end-to-end path through the network; the local forwarding table determines forwarding at each router. The value in an arriving packet's header is looked up in the table (header value -> output link: 0100 -> 3, 0101 -> 2, 0111 -> 2, 1001 -> 1), so a packet arriving with header 0111 is sent out link 2.

Datagram networks. No call setup at the network layer. Routers keep no state about end-to-end connections; there is no network-level concept of a connection. Packets are forwarded using the destination host address: the sender simply sends datagrams, and the receiver receives them.

Datagram forwarding table. The routing algorithm produces a local forwarding table mapping destination addresses to output links. With 4 billion IP addresses, rather than listing individual destination addresses, the table lists ranges of addresses (aggregated table entries); the IP destination address in an arriving packet's header selects the entry.

Destination Address Range                    Link Interface
11001000 00010111 00010000 00000000 through
11001000 00010111 00010111 11111111          0
11001000 00010111 00011000 00000000 through
11001000 00010111 00011000 11111111          1
11001000 00010111 00011001 00000000 through
11001000 00010111 00011111 11111111          2
otherwise                                    3
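The aggregated ranges above are really prefixes, and a router resolves overlaps by longest-prefix matching. The following sketch (illustrative Python, not actual router code) encodes the table above as binary prefixes, with "otherwise" as the empty prefix:

```python
# Forwarding table from the example above, as (binary prefix, link) pairs.
# Range 0: third octet 00010xxx; range 1: exactly 00011000;
# range 2: 00011xxx (with the longer range-1 prefix taking precedence).
FORWARDING_TABLE = [
    ("110010000001011100010",    0),
    ("110010000001011100011000", 1),
    ("110010000001011100011",    2),
    ("",                         3),  # "otherwise": matches everything
]

def lookup(dest_bits: str) -> int:
    """Return the link interface of the longest matching prefix."""
    best_len, best_link = -1, None
    for prefix, link in FORWARDING_TABLE:
        if dest_bits.startswith(prefix) and len(prefix) > best_len:
            best_len, best_link = len(prefix), link
    return best_link

print(lookup("11001000000101110001100010101010"))  # 1 (range-1 prefix wins)
```

Note how the destination with third octet 00011000 matches both the 21-bit and the 24-bit prefix; longest-prefix matching picks link 1, which is exactly why range 2 can be written as a short prefix despite excluding that octet.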

Router architecture overview. Two key router functions: running routing algorithms/protocols (RIP, OSPF, BGP) and forwarding datagrams from incoming to outgoing links. Forwarding tables are computed by the routing processor (routing and management: the control plane) and pushed to the input ports; forwarding happens in the data plane (hardware), across a high-speed switching fabric connecting the router's input ports to its output ports.

Input port functions. Line termination (physical layer: bit-level reception), link-layer protocol receive (data link layer, e.g., Ethernet), then lookup, forwarding, and queueing before the switch fabric. Decentralized switching: given the datagram destination, look up the output port using the forwarding table held in input-port memory ("match plus action"). The goal is to complete input-port processing at line speed; queueing occurs if datagrams arrive faster than the forwarding rate into the switch fabric.

Switching fabrics. Transfer packets from an input buffer to the appropriate output buffer. Switching rate: the rate at which packets can be transferred from inputs to outputs, often measured as a multiple of the input/output line rate; with N inputs, a switching rate of N times the line rate is desirable. Three types of switching fabrics: memory, bus, and crossbar.

Switching via memory. First-generation routers were traditional computers with switching under the direct control of the CPU: each packet was copied to the system's memory on its way from input port to output port over the system bus, so speed is limited by memory bandwidth (two bus crossings per datagram).

Switching via a bus. The datagram moves from input-port memory to output-port memory via a shared bus. Bus contention: switching speed is limited by the bus bandwidth. A 32 Gbps bus (Cisco 5600) is sufficient for access and enterprise routers.

Switching via an interconnection network. Overcomes bus bandwidth limitations. Banyan networks, crossbars, and other interconnection networks were initially developed to connect processors in multiprocessors. An advanced design fragments datagrams into fixed-length cells and switches the cells through the fabric. The Cisco 12000 switches 60 Gbps through its interconnection network.

Output ports. After the switch fabric: datagram buffering and queueing, the link-layer protocol (send), and line termination. Buffering is required when datagrams arrive from the fabric faster than the transmission rate; a scheduling discipline chooses among queued datagrams for transmission. Datagrams (packets) can be lost due to congestion and lack of buffers, and priority scheduling raises the question of who gets the best performance (network neutrality).

Output port queueing. When packets arrive from the fabric faster than the output line speed, they must be buffered; queueing (delay) and loss then follow from output-port buffer overflow.

How much buffering? The RFC 3439 rule of thumb: average buffering equal to a typical RTT (say 250 msec) times the link capacity C. E.g., for a C = 10 Gbps link: a 2.5 Gbit buffer. A more recent recommendation: with N flows, buffering equal to RTT * C / sqrt(N).

The Internet network layer. Host and router network-layer functions sit below the transport layer (TCP, UDP) and above the link and physical layers: routing protocols for path selection (RIP, OSPF, BGP) producing the forwarding table; the IP protocol (addressing conventions, datagram format, packet-handling conventions); and the ICMP protocol (error reporting, router signaling).
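The two sizing rules above are easy to check numerically. A small sketch (illustrative arithmetic only; units are bits and seconds):

```python
import math

def rule_of_thumb(rtt_s: float, capacity_bps: float) -> float:
    """RFC 3439 rule of thumb: buffer = RTT * C."""
    return rtt_s * capacity_bps

def small_buffers_rule(rtt_s: float, capacity_bps: float, n_flows: int) -> float:
    """More recent recommendation: buffer = RTT * C / sqrt(N)."""
    return rtt_s * capacity_bps / math.sqrt(n_flows)

# 250 ms RTT on a 10 Gbps link: 2.5 Gbit under the old rule.
print(rule_of_thumb(0.250, 10e9))            # 2.5e9 bits
# With 10,000 flows, the newer rule needs 100x less buffering.
print(small_buffers_rule(0.250, 10e9, 10_000))  # 2.5e7 bits
```

The sqrt(N) divisor is why core routers carrying many flows can get away with far smaller buffers than the classical rule suggests.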

Inside a router: routing engine, input ports, packet-forwarding fabric, output ports.

Two key definitions. Data plane: the processing and delivery of packets, based on state in routers and endpoints (e.g., IP, TCP, Ethernet); it operates on fast timescales (per packet). Control plane: establishing the state in routers; it determines how and where packets are forwarded (routing, traffic engineering, firewall state, ...) and operates on slow timescales (per control event).

We have lost our way. Routing, management, mobility management, access control, VPNs, ... run as apps on an operating system atop specialized packet-forwarding hardware. Millions of lines of source code and 5,400 RFCs create a barrier to entry; 500M gates and 10 GB of RAM make the hardware bloated and power hungry.

IPsec, firewalls, routers, OSPF-TE, RSVP-TE, HELLO exchanges: software control sits above a hardware datapath, with many complex functions baked into the infrastructure (OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, ...). An industry with a mainframe mentality.

Software Defined Networks: The Future of Networking and the Past of Protocols.

Key to the Internet's success: layers. Applications are built on reliable (or unreliable) transport, built on best-effort global packet delivery, built on best-effort local packet delivery, built on the physical transfer of bits.

Why is layering so important? It decomposed delivery into fundamental components and allowed independent but compatible innovation at each layer. A practical success of unprecedented proportions, but an academic failure.

We built an artifact, not a discipline. Other fields in systems (operating systems, databases, distributed systems) teach basic principles, are easily managed, and continue to evolve. Networking teaches a big bag of protocols, is notoriously difficult to manage, and evolves very slowly.

Why does networking lag behind? Networks used to be simple: Ethernet, IP, TCP. New control requirements led to great complexity: isolation (VLANs, ACLs), traffic engineering (MPLS, ECMP, weights), packet processing (firewalls, NATs, middleboxes), payload analysis (deep packet inspection, DPI), and so on. These mechanisms were designed and deployed independently, yielding a complicated control-plane design with primitive functionality, in stark contrast to the elegantly modular data plane.

The infrastructure still works! But only because of our ability to master complexity, and that ability is both a blessing and a curse.

A simple story about complexity. Around 1985, Don Norman visited Xerox PARC and talked about user interfaces and stick shifts.

What was his point? The ability to master complexity is not the same as the ability to extract simplicity. When first getting systems to work, focus on mastering complexity; when making systems easy to use and understand, focus on extracting simplicity. You will never succeed in extracting simplicity if you don't recognize that it is different from mastering complexity.

Take-home message. Networking is still focused on mastering complexity, with little emphasis on extracting simplicity from the control plane and no recognition that there is a difference. Extracting simplicity builds the intellectual foundations necessary for creating a discipline. That is why networking lags behind.

Reality. Each vendor ships a closed stack: apps on a proprietary operating system on specialized packet-forwarding hardware. Lack of competition means glacial innovation; a closed architecture means blurry, closed interfaces.

The glacial process of innovation is made worse by a captive standards process: idea, standardize, wait 10 years, deployment. Driven by vendors, with consumers largely locked out, it delivers lowest-common-denominator features and glacial innovation.

A better example: programming. Machine languages had no abstractions, so mastering complexity was crucial. Higher-level languages added OS and other abstractions: file systems, virtual memory, abstract data types, ... Modern languages add even more: object orientation, garbage collection, ... Abstractions are the key to extracting simplicity.

Why are abstractions/interfaces useful? Interfaces are instantiations of abstractions. Interfaces shield programs from low-level details and allow freedom of implementation on both sides, which leads to modular program structure. They don't remove complexity, they merely hide it: someone deals with the complexity once, and everyone else leverages that work.

The power of abstraction. "Modularity based on abstraction is the way things get done" (Barbara Liskov). Abstractions yield interfaces, which yield modularity.

What abstractions do we have in networking? Layers are great abstractions, but layers only deal with the data plane: IP's best-effort delivery, TCP's reliable byte-stream. We have no powerful control-plane abstractions and no sophisticated management/control building blocks, so new control requirements cause increased complexity. How do we find those control-plane abstractions? Two steps: define the problem, then decompose it.

The network control problem. Compute the configuration of each physical device (e.g., forwarding tables, ACLs, ...), operate without communication guarantees, and operate within a given network-level protocol. Only people who love complexity would find this a reasonable request.

Programming analogy. What if programmers had to specify where each bit was stored, explicitly deal with all internal communication errors, and work within a programming language of limited expressibility? Programmers would redefine the problem: define a higher-level abstraction for memory, build on reliable communication abstractions, and use a more general language. Abstractions divide the problem into tractable pieces and make the programmer's task easier.

From requirements to abstractions. (1) Operate without communication guarantees: we need an abstraction for distributed state, since the communication in control programs is all directed towards collecting, disseminating, or computing on distributed state. (2) Compute the configuration of each physical device: we need an abstraction that simplifies configuration. (3) Operate within a given network-level protocol: we need an abstraction for a general forwarding model. Once these abstractions are in place, the control mechanism has a much easier job.

SDN in one sentence: SDN is defined precisely by these three abstractions: distribution, forwarding, and configuration. SDN is not just a random good idea; it has fundamental validity and general applicability, and it may help us create a discipline, since abstractions enable reasoning about system behavior and provide an environment where formalism can take hold. OK, but what are these abstractions?

1. Distributed state abstraction. Shield control mechanisms from state distribution while still allowing access to that state. The natural abstraction is a global network view: an annotated network graph provided through an API, implemented by a Network Operating System. The control mechanism is then a program using that API: no longer a distributed protocol, just a graph algorithm (e.g., use Dijkstra rather than Bellman-Ford).

Software Defined Network (SDN): instead of traditional control mechanisms (a distributed algorithm running between neighboring switches and/or routers), a control program (e.g., routing, access control) operates on the global network view exposed by the Network OS.
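To make the "just a graph algorithm" point concrete, here is a toy sketch: once the Network OS exposes the global view as an annotated graph, shortest-path routing is plain Dijkstra rather than a distributed protocol. The topology and switch names are invented for illustration:

```python
import heapq

# Global network view as an adjacency map: node -> {neighbor: link weight}.
view = {
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 1, "s4": 5},
    "s3": {"s1": 4, "s2": 1, "s4": 1},
    "s4": {"s2": 5, "s3": 1},
}

def shortest_path(view, src, dst):
    """Dijkstra over the global view; returns (cost, path)."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in view[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

print(shortest_path(view, "s1", "s4"))  # (3, ['s1', 's2', 's3', 's4'])
```

The control program never exchanges distance vectors with neighbors; it simply computes over the graph the NOS hands it.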

A major change in paradigm. We are no longer designing distributed control protocols: we design one distributed system (the NOS) and use it for all control functions. Control becomes a centralized function: configuration = function(view).

Key task of the network controller. The protocol is largely deltas: switch-to-controller messages report changes of network state, and controller-to-switch messages push changes of configuration. This is a natural way to write control logic.
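The "protocol is deltas" idea can be sketched as a small event loop: switch events update the view, and each update triggers a recomputation whose result is pushed back down. All names and the stand-in policy below are invented for illustration:

```python
# Minimal sketch of configuration = function(view), assuming invented
# event and config formats (not any real controller's API).
class Controller:
    def __init__(self):
        self.view = {"links": set()}   # global network view
        self.pushed = []               # config deltas sent to switches

    def on_switch_event(self, event):
        """Switch -> controller: a change of network state."""
        kind, link = event
        if kind == "link_up":
            self.view["links"].add(link)
        elif kind == "link_down":
            self.view["links"].discard(link)
        self.recompute()

    def recompute(self):
        """Controller -> switch: configuration = function(view)."""
        config = sorted(self.view["links"])  # stand-in for real control logic
        self.pushed.append(config)

ctl = Controller()
ctl.on_switch_event(("link_up", ("s1", "s2")))
ctl.on_switch_event(("link_down", ("s1", "s2")))
print(ctl.pushed)  # [[('s1', 's2')], []]
```

Note that the controller only reacts to state deltas; it never polls the whole network, which is what keeps the design scalable.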

2. Specification abstraction. The control program should express desired behavior; it should not be responsible for implementing that behavior on the physical network infrastructure. The natural abstraction is a simplified model of the network: a simple model with only enough detail to specify goals. This requires a new shared control layer that maps the abstract configuration to the physical configuration; this is network virtualization.

Simple example: access control. The operator specifies the "what" on an abstract network model; the "how" is derived on the global network view.

Software Defined Network, take 2: a control program configures an abstract network model; network virtualization maps it to the global network view; the Network OS maps that view onto the physical network.

What does this picture mean? Write a simple program to configure a simple model; configuration is merely a way to specify what you want. Examples: access control lists (who can talk to whom); isolation (who can hear my broadcasts); routing (specify routing only to the degree you care, e.g., some flows over satellite, others over landline); traffic engineering (specify in terms of quality of service, not routes). The virtualization layer compiles these requirements into a suitable configuration of the actual network devices, and the Network Operating System then transmits those settings to the physical boxes.

In this picture, the control program specifies behavior on the abstract network model; network virtualization compiles it onto the topology (the global network view); and the Network OS transmits the result to the switches.

Two example uses. Scale-out router: the abstract view is a single router, while the physical network is a collection of interconnected switches; this allows routers to scale out rather than up, with standard routing protocols running on top. Multi-tenant networks: each tenant has control over their private network, and the network virtualization layer compiles all of these individual control requests into a single physical configuration. Hard to do without SDN, easy (in principle) with SDN.

3. Forwarding abstraction. Switches have two brains: a management CPU (smart but slow) and a forwarding ASIC (fast but dumb), and we need a forwarding abstraction for both. The CPU abstraction can be almost anything; the ASIC abstraction is much more subtle: control the switch by inserting <header; action> entries, which essentially gives the NOS remote access to the forwarding table. This is instantiated in Open vSwitch.

Does SDN work? Is it scalable? Yes. Is it less responsive? No. Does it create a single point of failure? No. Is it inherently less secure? No. Is it incrementally deployable? Yes.

SDN: a clean separation of concerns. The control program specifies behavior on the abstract model, driven by operator requirements. Network virtualization maps the abstract model to the global view, driven by the specification abstraction. The Network Operating System maps the global view to the physical switches, with its API driven by the distributed-state abstraction. The switch/fabric interface is driven by the forwarding abstraction.

We have achieved modularity! Modularity enables independent innovation and gives rise to a thriving ecosystem. Innovation is the true value proposition of SDN: SDN doesn't allow you to do the impossible, it just allows you to do the possible much more easily. This is why SDN is the future of networking.

Slicing the physical network: is change likely?

Change is happening in non-traditional markets: alongside the traditional vertically integrated stacks (apps on an operating system on specialized packet-forwarding hardware), apps now run on a network operating system that spans many boxes of specialized packet-forwarding hardware.

The software-defined network: (1) an open interface to the hardware (simple packet-forwarding hardware); (2) at least one good network operating system, extensible and possibly open source; (3) a well-defined open API for apps.

Isolated slices allow many operating systems, or many versions: multiple network operating systems, each with its own apps, run on top of a virtualization or slicing layer, which in turn uses the open interface to the simple packet-forwarding hardware.

Consequences. More innovation in network services: owners, operators, third-party developers, and researchers can improve the network (e.g., energy management, data center management, policy routing, access control, denial-of-service defense, mobility). Lower barrier to entry for competition: a healthier marketplace and new players.

The change has already started. In a nutshell: driven by cost and control, it started in data centers and may spread. The trend is towards an open-source, software-defined network, with growing interest from cellular and telecom networks.

Example: a new data center. Cost: 200,000 servers with a fanout of 20 means 10,000 switches; at $5k per commercial switch that is $50M, versus $10M at $1k per custom-built switch. Savings across 10 data centers: $400M. Control: (1) optimize for the features needed; (2) customize for services and apps; (3) quickly improve and innovate. Large data center operators are moving towards defining their own network in software.

Trend: the network industry is following the computer industry. Just as apps run on Windows, Linux, or Mac OS over a virtualization layer on x86 hardware, network apps run on controllers (network operating systems such as NOX) over a network virtualization or slicing layer.
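The back-of-the-envelope numbers on the slide check out:

```python
# The slide's data center arithmetic, verified step by step.
servers, fanout = 200_000, 20
switches = servers // fanout                 # 10,000 switches
commercial_cost = switches * 5_000           # $50M per data center
custom_cost = switches * 1_000               # $10M per data center
savings_10_dcs = (commercial_cost - custom_cost) * 10
print(switches, commercial_cost, custom_cost, savings_10_dcs)
# 10000 50000000 10000000 400000000
```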

Software-defined wireless networks. Apps such as a mobility manager, AAA, billing, MVNO, and wireless service provider functions run on a network operating system controlling simple packet-forwarding hardware across WiFi, WiMAX, and LTE.

The SDN stack (overview).

The SDN stack. Applications: Simple Switch, CloudNaaS, Stratos. Controllers: NOX, Beacon, Trema, Maestro. Slicing software: FlowVisor, FlowVisor Console. Switches: commercial switches (HP, NEC, Pronto, Juniper, and many more) and software switches (Reference Switch, OpenWRT, NetFPGA, PCEngine WiFi AP, Broadcom Reference Switch, Open vSwitch).

How does OpenFlow work? Start from an Ethernet switch: a control path (software) sits above a data path (hardware).

OpenFlow separates the two: the control path moves to an external controller, which speaks the OpenFlow protocol (over SSL/TCP) to the data path (hardware) remaining in the switch.

Example. The switch has a software layer holding a flow table above a hardware layer with ports 1-4. A flow table entry matches on header fields (MAC src, MAC dst, IP src, IP dst, TCP sport, TCP dport) and specifies an action: e.g., the entry (*, *, *, 5.6.7.8, *, *) -> port 1 forwards all traffic destined to IP 5.6.7.8 (such as packets from client 1.2.3.4) out port 1.
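A minimal sketch of the flow-table semantics in this example: each entry matches on the six header fields above, with wildcards represented here by simply omitting a field, and yields an action. This is an illustration of the idea, not any real switch's data structures:

```python
# Flow table as a list of (match, action) pairs; fields absent from a
# match act as wildcards. Field names follow the slide: mac_src,
# mac_dst, ip_src, ip_dst, tcp_sport, tcp_dport.
flow_table = [
    ({"ip_dst": "5.6.7.8"}, "port 1"),   # (*, *, *, 5.6.7.8, *, *) -> port 1
]

def apply(flow_table, packet):
    """Return the action of the first matching entry, else None."""
    for match, action in flow_table:
        if all(packet.get(f) == v for f, v in match.items()):
            return action
    return None   # table miss: in OpenFlow, punt to the controller

pkt = {"mac_src": "aa:bb", "mac_dst": "cc:dd",
       "ip_src": "1.2.3.4", "ip_dst": "5.6.7.8",
       "tcp_sport": 4000, "tcp_dport": 80}
print(apply(flow_table, pkt))  # port 1
```

A packet to any other destination falls through the table, which is exactly the case the controller handles.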

OpenFlow progression. OF v1.0 (released end of 2009): into the campus. OF v1.1 (released March 1, 2011): into the WAN; multiple tables (leverage additional tables), tags and tunnels (MPLS, VLAN, virtual ports), multipath forwarding (ECMP, groups). OF v1.2 (approved December 8, 2011): an extensible protocol; extensible match, extensible actions, IPv6, multiple controllers.

Hardware switches.

Vendor         Models                 Virtualize?               Notes
HP ProCurve    5400zl, 6600, ...      1 OF instance per VLAN    LACP, VLAN, and STP processing before OpenFlow; wildcard rules and non-IP packets processed in software; header rewriting in software; CPU protects management during loops
Pronto/Pica8   3290, 3780, 3920, ...  1 OF instance per switch  No legacy protocols (like VLAN and STP); most actions processed in hardware; MAC header rewriting in hardware

Software switches.

Name           Lang       Platform(s)                    Original author          Notes
Reference      C          Linux                          Stanford/Nicira          Not designed for extensibility
Open vSwitch   C/Python   Linux/BSD                      Ben Pfaff/Nicira         In Linux kernel 3.3+
Indigo         C/Lua      Linux-based hardware switches  Dan Talayco/Big Switch   Bare switch

Controllers.

Name         Lang         Original author            Notes
Reference    C            Stanford/Nicira            Not designed for extensibility
NOX          Python, C++  Nicira                     Actively developed
Beacon       Java         David Erickson (Stanford)  Runtime modular; web UI framework; regression test framework
Maestro      Java         Zheng Cai (Rice)
Trema        Ruby, C      NEC                        Includes emulator, regression test framework
RouteFlow    ?            CPqD (Brazil)              Virtual IP routing as a service
POX          Python
Floodlight   Java         Big Switch                 Based on Beacon

Too many to easily keep track of: http://yuba.stanford.edu/~casado/of-sw.html

FlowVisor creates virtual networks. Applications such as Simple Switch, CloudNaaS, and Stratos each run in an isolated slice of the network: FlowVisor sits between the controllers and the switches, speaking the OpenFlow protocol on both sides and enforcing resource reservations. FlowVisor slices networks, creating multiple isolated and programmable logical networks on the same physical topology.

Example SDN applications. Wisconsin projects: Stratos, CloudNaaS, OpenSAFE, ECOS. Stanford demos: wireless mobility, VM mobility/migration, network virtualization, power management, load balancing, traffic engineering. Videos: openflow.org/videos

OpenFlow and Network Virtualization.

In a nutshell: a revolution is just starting in networking, driven by cost and control. It started in data centers and is spreading. The trend is towards an open-source, software-defined network. The new opportunity to innovate will bring about the need to try new ideas, hence virtualization (or slicing); what follows outlines one way to do it.

Who wants a software-defined network? (1) Data centers: cost and control. (2) Network and cellular operators: bit-pipe avoidance; cost and control; security and mobility. (3) Researchers: GENI, FIRE, ...

What form might it take?

Applications run on an OS, which runs on a computer: the OS abstracts the hardware substrate, enabling innovation in applications.

Applications run on Windows, Linux, or Mac OS over x86: a simple, common, stable hardware substrate below, plus programmability, plus competition, enabled innovation in operating systems and applications.

Add virtualization between the operating systems and x86: a simple, common, stable hardware substrate below, plus programmability, plus a strong isolation model, plus competition above, enables innovation in the infrastructure itself.

A simple, stable, common substrate: (1) allows applications to flourish (Internet: stable IPv4 led to the web); (2) allows the infrastructure on top to be defined in software (Internet: routing protocols, management, ...); (3) allows rapid innovation of the infrastructure itself (Internet: er...?). What's missing? What is the substrate?

(Statement of the obvious.) In networking, despite several attempts, we have never agreed upon a clean separation between (1) a simple common hardware substrate and (2) an open programming environment on top.

A prediction: (1) a clean separation between the substrate and an open programming environment; (2) a simple, low-cost hardware substrate that generalizes, subsumes, and simplifies the current substrate; (3) very few preconceived ideas about how the substrate will be programmed; (4) strong isolation among features. But most of all: open source will play a large role.

Owners, operators, administrators, developers, and researchers will want to improve, update, fix, experiment with, share, build upon, and version their network. Therefore, the software-defined network must allow simple ways to program and version. One way to do this is virtualizing (slicing) the network substrate.

OpenFlow as a simple, sliceable substrate below
Apps run on controllers (Controller 1, Controller 2, ...), which run on a virtualization layer (FlowVisor) over the hardware, just as apps run on Windows, Linux, or Mac OS over x86 virtualization. A simple, common, stable hardware substrate below, plus programmability, a strong isolation model, and competition above, yields faster innovation.

Step 1: Separate intelligence from the datapath
Operators, users, 3rd-party developers, and researchers can then add new functions.

Step 2: Cache decisions in a minimal flow-based datapath (the flow table)
If header = x, send to port 4
If header = y, overwrite header with z, send to ports 5 and 6
If header = ?, send to me (the controller)
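The flow-table caching in Step 2 can be sketched in a few lines. This is a toy model, not OpenFlow itself: `make_datapath`, `install`, and `handle` are hypothetical names, and headers are reduced to opaque strings.

```python
# Minimal sketch of a flow-based datapath: cached decisions are applied
# locally; a table miss is sent to the controller ("If header = ?, send to me").
def make_datapath(send_to_controller):
    flow_table = {}  # header -> action function

    def install(header, action):
        flow_table[header] = action

    def handle(header):
        action = flow_table.get(header)
        if action is None:
            return send_to_controller(header)  # table miss
        return action(header)                  # cached decision

    return install, handle

# The slide's example decisions:
install, handle = make_datapath(lambda h: ("controller", h))
install("x", lambda h: ("forward", [4], h))        # header = x -> port 4
install("y", lambda h: ("forward", [5, 6], "z"))   # header = y -> rewrite to z, ports 5 and 6
```

A packet with header "x" is forwarded out port 4 without involving the controller; an unknown header goes up to the controller.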

Packet-switching substrate
A packet (Ethernet DA, SA, etc. | IP DA, SA, etc. | TCP DP, SP, etc. | payload) is a collection of bits used to plumb flows of different granularities between end points.

Properties of a flow-based substrate
We need flexible definitions of a flow: unicast, multicast, waypoints, load-balancing, and different aggregations. We need direct control over flows: a flow is an entity we program, to route, to make private, to move. And we should exploit the benefits of packet switching: it works, it is universally deployed, and it is efficient when kept simple.

Substrate: flowspace
Today's header (Ethernet DA, SA | IP DA, SA | TCP DP, SP) over the payload is replaced by a "Header 2.0" over the payload: a user-defined flowspace.

Flowspace: simple examples (picture a plane with axes IP SA and IP DA)
A single flow is a point; all flows from host A form a line; all flows between two subnets form a rectangular region.

Flowspace: generalization
With n header fields as axes (Field 1, Field 2, ..., Field n), a single flow is a point and a set of flows is a region in the n-dimensional space.

Properties of flowspace
Backwards compatible: current layers are a special case, and no end points need to change. Easily implemented in hardware, e.g. a TCAM flow table in each switch. Strong isolation of flows: a simple geometric construction lets us prove which flows can and cannot communicate.
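The geometric isolation argument can be sketched by treating a flowspace as a tuple of per-field constraints and checking whether the intersection is empty. Field values are treated as atomic here (no prefix or range matching), and the names are illustrative.

```python
# Sketch of flowspace isolation: None is a wildcard; two flowspaces are
# isolated iff some field's intersection is empty, i.e. no packet can lie
# in both regions of the n-dimensional space.
def intersect_field(a, b):
    if a is None:
        return b
    if b is None:
        return a
    return a if a == b else ()  # () marks an empty field intersection

def flowspaces_isolated(fs1, fs2):
    """True if no packet can belong to both flowspaces."""
    return any(intersect_field(a, b) == () for a, b in zip(fs1, fs2))

# Fields: (ip_src, ip_dst, tcp_dport); subnet labels are illustrative.
alice = ("10.0.1.0/24", None, None)   # all flows from Alice's subnet
bob   = ("10.0.2.0/24", None, None)   # all flows from Bob's subnet
web   = (None, None, 80)              # all HTTP flows
```

Because the check is a simple geometric construction over fields, it can serve as a proof that two slices cannot communicate.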

Slicing flowspace

Approach 1: slicing using VLANs
A sliced switch maps legacy VLANs (VLANs A, B, C) to separate flow tables, each owned by its own controller (Controller A, B, C), alongside normal L2/L3 processing. Some prototype switches do this.

Approach 2: FlowVisor (Rob Sherwood*, rob.sherwood@stanford.edu; * Deutsche Telekom, T-Labs)
FlowVisor sits between the controllers and the switches, speaking the OpenFlow protocol on both sides: Alice's and Bob's controllers connect to FlowVisor, which connects to the switches. The same mechanism can slice among services such as broadcast, multicast, HTTP, or a load-balancer.

FlowVisors can be cascaded: a network administrator's FlowVisor can delegate flowspace to Bob's FlowVisor, Alice's FlowVisor, and a GENI FlowVisor (driven by the GENI aggregate manager), supporting experiments such as WiMAX-WiFi handover, tricast, lossless handover, learning switches, mobile VMs, and new BGP, alongside the production network.
(Tommaso Melodia, Software Defined Networks)

FlowVisor is a proxy between switch and guest controller. It parses and rewrites OpenFlow messages as they pass, ensures that one experiment does not affect another, and allows rich virtual network boundaries: by port, by IP, by flow, by time, etc. Virtualization rules are defined in software.

FlowVisor goals
Transparency: unmodified guest controllers and unmodified switches. Strong resource isolation: link bandwidth, switch CPU, etc. Rich network slicing via a virtualization policy module: flowspace decides who gets each message.

Slicing example

Trend
Computer industry: App / Windows, Linux, or Mac OS / virtualization layer / x86 (computer).
Network industry: App / controllers such as NOX (a network OS) / virtualization or slicing / the switch substrate.
A simple, common, stable hardware substrate below, plus programmability, a strong isolation model, and competition above, results in faster innovation.

What is OpenFlow?

Short story: OpenFlow is an API
It controls how packets are forwarded and is implementable on COTS hardware. It makes deployed networks programmable, not just configurable, and makes innovation easier. The result: increased control through custom forwarding, and reduced cost because an open API increases competition.

Ethernet switch/router

Conventionally, the control path (software) and data path (hardware) live in the same box. OpenFlow moves the control path to an external controller, which speaks the OpenFlow protocol (over SSL/TCP) to the data path hardware.

Flow table abstraction
The switch's software layer runs OpenFlow firmware around a flow table; the hardware layer forwards between ports (port 1 ... port 4). An example entry, installed by a controller running on a PC:
MAC src = *, MAC dst = *, IP src = *, IP dst = 5.6.7.8, TCP sport = *, TCP dport = * -> forward to port 1
so traffic from 1.2.3.4 to 5.6.7.8 is switched out port 1.

OpenFlow basics: flow table entries
Each entry is a Rule, an Action, and Stats (packet and byte counters).
Actions: 1. Forward packet to port(s) 2. Encapsulate and forward to controller 3. Drop packet 4. Send to normal processing pipeline 5. Modify fields
The rule matches (with a mask selecting which fields to match): switch port, VLAN ID, MAC src, MAC dst, Eth type, IP src, IP dst, IP prot, TCP sport, TCP dport.
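A minimal sketch of how a switch might match a packet against such wildcard entries and update per-entry counters. The entry layout, field names, and lookup function are assumptions for illustration, not the OpenFlow wire format.

```python
# Sketch of flow-entry lookup: '*' wildcards a field; entries are kept in
# priority order; a table miss is sent to the controller.
FIELDS = ("mac_src", "mac_dst", "ip_src", "ip_dst", "tcp_sport", "tcp_dport")

def matches(entry, packet):
    return all(entry[f] in ("*", packet[f]) for f in FIELDS)

def lookup(flow_table, packet):
    for entry in flow_table:        # first match wins
        if matches(entry, packet):
            entry["packets"] += 1   # per-entry packet counter (Stats)
            return entry["action"]
    return ("controller",)          # miss: encapsulate and forward to controller

# The slide's example entry: anything to 5.6.7.8 goes out port 1.
table = [{"mac_src": "*", "mac_dst": "*", "ip_src": "*", "ip_dst": "5.6.7.8",
          "tcp_sport": "*", "tcp_dport": "*",
          "action": ("forward", 1), "packets": 0}]
pkt = {"mac_src": "00:20..", "mac_dst": "00:1f..", "ip_src": "1.2.3.4",
       "ip_dst": "5.6.7.8", "tcp_sport": 17264, "tcp_dport": 80}
```

With one wildcard entry, the packet from 1.2.3.4 to 5.6.7.8 is forwarded to port 1 and the entry's counter increments.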

Examples
Switching: match MAC dst = 00:1f:.., all other fields wildcarded -> forward to port 6.
Flow switching: match the full tuple (switch port 3, MAC src 00:20.., MAC dst 00:1f.., Eth type 0800, VLAN vlan1, IP src 1.2.3.4, IP dst 5.6.7.8, IP prot 4, TCP sport 17264, TCP dport 80) -> forward to port 6.
Firewall: match TCP dport = 22, all other fields wildcarded -> drop.
Routing: match IP dst = 5.6.7.8, all other fields wildcarded -> forward to port 6.
VLAN switching: match MAC dst = 00:1f.. and VLAN ID = vlan1 -> forward to ports 6, 7, and 9.

OpenFlow usage: a dedicated OpenFlow network
A controller PC runs custom code (e.g., Aaron's code) and uses the OpenFlow protocol to program the Rule/Action/Statistics tables in each switch. (OpenFlowSwitch.org)

Network design decisions
Forwarding logic (of course); centralized vs. distributed control; fine- vs. coarse-grained rules; reactive vs. proactive rule creation; and likely more: this is an open research area.

Centralized vs. distributed control
Both are possible: a single controller managing all switches, or several controllers each managing a subset.

Flow routing vs. aggregation
Both models are possible with OpenFlow.
Flow-based: every flow is individually set up by the controller, with exact-match flow entries; the flow table contains one entry per flow. Good for fine-grained control, e.g. campus networks.
Aggregated: one wildcard flow entry covers a large group of flows; the flow table contains one entry per category of flows. Good for large numbers of flows, e.g. backbones.

Reactive vs. proactive rule creation
Both models are possible with OpenFlow.
Reactive: the first packet of a flow triggers the controller to insert flow entries. This uses the flow table efficiently, but every flow incurs a small additional setup time, and if the control connection is lost the switch has limited utility.
Proactive: the controller pre-populates the flow table in the switch. There is zero additional flow setup time, and loss of the control connection does not disrupt traffic, but this essentially requires aggregated (wildcard) rules.

Application: network slicing
Divide the production network into logical slices:
o each slice/service controls its own packet forwarding
o users pick which slice controls their traffic (opt-in)
o existing production services run in their own slice, e.g. spanning tree, OSPF/BGP
Enforce strong isolation between slices: actions in one slice do not affect another. This allows the (logical) testbed to mirror the production network: real hardware, performance, topologies, scale, and users. Prototype implementation: FlowVisor.

Add a slicing layer between planes
Slice controllers (Slice 1, 2, 3) sit above a slicing layer governed by slice policies; rules flow down to the data plane over the control/data protocol, and exceptions flow back up.

Network slicing architecture
A network slice is a collection of sliced switches/routers. The data plane is unmodified: packets are forwarded with no performance penalty, and slicing works with existing ASICs. The slicing layer is transparent: each slice believes it owns the data path. It enforces isolation between slices, i.e. it rewrites or drops rules to adhere to the slice policy, and it forwards exceptions to the correct slice(s).

Slicing policies
The policy specifies resource limits for each slice: link bandwidth, maximum number of forwarding rules, topology, and fraction of switch/router CPU. FlowSpace determines which packets the slice controls.

Flowspace
Flowspace is a way of thinking about classes of packets. Each slice has forwarding control of a specific set of packets, as specified by packet header fields; all packets in a given flow are controlled by the same slice, and each flow is controlled by exactly one slice (ignoring monitoring slices). In practice, flowspaces are described using ordered ACL-like rules.
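Those ordered ACL-like rules can be sketched as a first-match classifier mapping a packet to the slice that controls it. The rule format, slice names, and the default slice are hypothetical.

```python
# Sketch of ordered ACL-like flowspace rules: the first rule whose fields
# all match decides which slice controls the packet.
def classify(rules, packet):
    for fields, slice_name in rules:
        if all(packet.get(k) == v for k, v in fields.items()):
            return slice_name
    return "production"  # default: the production slice

# Policy matching the opt-in example on a nearby slide:
rules = [
    ({"tcp_dport": 80}, "slice-1"),     # "Slice 1 will handle my HTTP traffic"
    ({"udp_dport": 5060}, "slice-2"),   # "Slice 2 will handle my VoIP traffic"
]
```

Every packet lands in exactly one slice: the first matching rule, or the production default, mirroring the "exactly one slice" property above.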

FlowSpace maps packets to slices.

Real user traffic: opt-in
Allow users to opt in to services in real time:
o users can delegate control of individual flows to slices
o opting in adds new FlowSpace to each slice's policy
Example: "Slice 1 will handle my HTTP traffic", "Slice 2 will handle my VoIP traffic", "Slice 3 will handle everything else". This creates incentives for building high-quality services.

FlowVisor is implemented on a server
Guest controllers run their custom control planes on servers; FlowVisor, a stub control plane, sits between them and the switch/router firmware, speaking the OpenFlow protocol; the data path in each switch/router is untouched.

FlowVisor message handling
When a controller (Alice's, Bob's, or Cathy's) sends a rule, FlowVisor policy-checks it: is this rule allowed? When the data path raises an exception, FlowVisor policy-checks it: who controls this packet? Matched traffic is forwarded at full line rate.
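The "is this rule allowed?" check can be sketched as intersecting a guest controller's rule with its slice's flowspace: an empty intersection means the rule strays outside the slice and is rejected, otherwise the narrowed rule is installed. This is a simplified model of FlowVisor's behavior, with None as a wildcard and atomic field values.

```python
# Sketch of a FlowVisor-style rule policy check.
def intersect(rule, flowspace):
    out = {}
    for field in set(rule) | set(flowspace):
        a, b = rule.get(field), flowspace.get(field)
        if a is None or b is None or a == b:
            out[field] = a if a is not None else b  # keep the tighter constraint
        else:
            return None  # conflicting values: empty intersection
    return out

def policy_check(rule, flowspace):
    narrowed = intersect(rule, flowspace)
    if narrowed is None:
        return ("reject", None)
    return ("install", narrowed)

alice_space = {"vlan": 10}  # hypothetical: Alice's slice owns VLAN 10
```

A rule for HTTP traffic is rewritten to apply only within VLAN 10; a rule naming another VLAN is rejected, so Alice cannot affect other slices.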

Deployments
OpenFlow has been prototyped on:
o Ethernet switches: HP, Cisco, NEC, Quanta, + more underway
o IP routers: Cisco, Juniper, NEC
o Switching chips: Broadcom, Marvell
o Transport switches: Ciena, Fujitsu
o WiFi APs and WiMAX basestations
Most (all?) hardware switches are now based on Open vSwitch.

Deployment: Stanford
A real, production network: 15 switches, 35 APs, 25+ users, 1+ year of use. The same physical network hosts Stanford demos: 7 different demos.

Deployments: GENI

(Public) industry interest
Google has been a main proponent of new OpenFlow 1.1 WAN features: ECMP and MPLS-label matching; it presented an MPLS LDP-speaking router at NANOG50. NEC has announced commercial products, initially for data centers, and is talking to providers. Ericsson presented "MPLS Openflow and the Split Router Architecture: A Research Approach" at MPLS2010.

Conclusions
Current networks are complicated. OpenFlow is an API. Interesting apps include network slicing. Nationwide academic trials are underway. OpenFlow has potential for service providers: custom control for traffic engineering.

NOX: a bit of history
NOX was the first SDN controller. Released under the GPL in 2008, it was extensively used in research and is now maintained by the research community.

NOX highlights
Runs on Linux; written in C++ and Python. Component system and event-based programming model. Applications: forwarding (reactive), topology discovery, host tracking, ...

NOX offers a centralized programming model with high-level abstractions.

Programming interface
Events, a namespace, and libraries: routing, packet classification, DNS, network filtering.

NOX architecture
Applications (App 1, App 2, App 3) run on the NOX controller PC, which maintains a network view of the OpenFlow switches (wired and wireless) connecting off-the-shelf hosts.

Switch abstraction
The OpenFlow switch abstraction is a flow table. Each flow table entry takes the form <header : counters, actions>. The switch executes the actions corresponding to the highest-priority matching header in the table.

Operation
Switch: 1. Packet p reaches the switch. 2. If p matches a flow entry, apply the corresponding actions; else forward p to the controller.
Controller: 1. Packet p reaches the controller. 2. Update the view of network state. 3. Decide the route for the packet and inform the relevant switches of that route.

Application I/O
Observation granularity: switch-level topology; locations of users, hosts, and middleboxes; services offered, e.g. HTTP or NFS; bindings between names and addresses. NOT the entire packet/flow state.
Control granularity: flows. Decisions about one packet are applied to all subsequent packets in the flow.
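The switch/controller operation above can be sketched as a reactive setup: a flow-table miss triggers the controller, which updates its view and installs entries along the route. Class and field names are illustrative, not the NOX API.

```python
# Sketch of reactive operation: miss -> controller -> install route -> forward.
class Switch:
    def __init__(self, name, controller):
        self.name, self.controller, self.flow_table = name, controller, {}

    def receive(self, packet):
        key = packet["dst"]
        if key in self.flow_table:                      # cached flow entry
            return ("forward", self.flow_table[key])
        return self.controller.packet_in(self, packet)  # miss: ask controller

class Controller:
    def __init__(self):
        self.routes = {}   # global view: dst -> [(switch, out_port), ...]
        self.seen = []     # network-state observations

    def packet_in(self, switch, packet):
        self.seen.append((switch.name, packet["dst"]))  # update network view
        for sw, port in self.routes[packet["dst"]]:     # install along the route
            sw.flow_table[packet["dst"]] = port
        return ("forward", switch.flow_table[packet["dst"]])

ctrl = Controller()
s1, s2 = Switch("s1", ctrl), Switch("s2", ctrl)
ctrl.routes["h2"] = [(s1, 2), (s2, 1)]  # hypothetical path to host h2
```

The first packet to h2 goes via the controller; subsequent packets of the flow hit the cached entries on every switch along the path.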

Programmatic interface: events
NOX exposes network events to applications: switch join, switch leave, user authenticated, flow initiated. Applications consist of code fragments that respond to these events.

Example: access control
function handle_flow_initialize(packet)
    usersrc = nox.resolve_user_src(packet)
    hostsrc = nox.resolve_host_src(packet)
    usertgt = nox.resolve_user_tgt(packet)
    hosttgt = nox.resolve_host_tgt(packet)
    prot    = nox.resolve_ap_prot(packet)
    if deny(usersrc, hostsrc, usertgt, hosttgt, prot) then
        nox.drop(packet)
    else
        nox.installpath(packet, nox.computepath(packet))

function deny(usersrc, hostsrc, usertgt, hosttgt, prot)
    ...

Scalability
Events per second: packet arrivals (~10^6) are handled by switches; flow initiations (~10^5) by the controller; view changes (~10) by the controller. The controller can be replicated; the only global data structure is the view. One controller currently handles 10^5 flow initiations per second.

Related work
4D project (2005): provide a global view of the network via a centralized controller.
SANE/Ethane (2007): extends 4D by adding users/nodes to the namespace and capturing flow initiation.
NOX (2008): extends SANE/Ethane with scaling for large networks and general programmatic control of the network.
Maestro (2008): a network OS focused on controlling interactions between applications.
Industry: deep-packet inspection, firewalls, etc. are appliances and can be leveraged by NOX; also functionality similar to Ethane.

POX
A new platform in pure Python with clean dependencies. It takes the good things from NOX and targets Linux, Mac OS, and Windows. Goal: good for research. Non-goal: performance.

Network debugging: how do other industries do it?

The computer industry moved from vertically integrated (specialized applications, specialized operating system, specialized hardware: closed, proprietary, slow innovation, small industry) to horizontal (apps above an open interface to Windows, Linux, or Mac OS, above an open interface to the microprocessor: open interfaces, rapid innovation, huge industry).

Networking is making the same move: from specialized features, a specialized control plane, and specialized hardware (vertically integrated, closed, proprietary, slow innovation) to control planes above an open interface to merchant switching chips (horizontal, open interfaces, rapid innovation).

New research areas
With SDN we can:
1. Formally verify that our networks are behaving correctly
2. Identify bugs, then systematically track down their root cause

Software Defined Network (SDN)
Control programs operate on an abstract network view provided by network virtualization; a network OS maintains the global network view over the packet-forwarding elements.

In this SDN stack, a control program such as firewall.c can be as simple as:
    if (pkt->tcp->dport == 22)
        droppacket(pkt);
The network OS compiles such programs down to <Match, Action> entries in each packet-forwarding element's flow table.

Making ASICs work
Specification -> functional description (RTL) -> functional verification (testbench & vectors) -> logical synthesis -> static timing -> place & route -> design rule checking (DRC) -> layout vs. schematic (LVS) -> layout parasitic extraction (LPE) -> manufacture & validate. A $10B tool business supports a $250B chip industry, with 100s of books, >10,000 papers, and 10s of classes.

Making software work
Specification -> functional description (code) -> testbench, static code analysis, run-time checkers, invariant checkers, interactive debuggers, model checking. A $10B tool business supports a $300B software industry, with 100s of books, >100,000 papers, and 10s of classes.

Making networks work (today)
traceroute, ping, tcpdump, SNMP, NetFlow... er, that's about it.

Why debugging networks is hard
Complex interaction: between multiple protocols on a switch/router, between state on different switches/routers, and among multiple uncoordinated writers of state. Operators can't observe all state or control all state.

Networks are kept working by "masters of complexity": a handful of books, almost no papers, no classes.

Philosophy of making networks work: YoYo ("You're On Your Own"), or Yo Yo Ma ("You're On Your Own, Mate").

With SDN we can:
1. Formally verify that our networks are behaving correctly
2. Identify bugs, then systematically track down their root cause

Software Defined Network (SDN): control programs (e.g., firewall.c with "if (pkt->tcp->dport == 22) droppacket(pkt);") sit on an abstract network view, above network virtualization, the global network view, and the network OS, which programs the <Match, Action> flow tables of the packet-forwarding elements.

Two SDN projects
1. Static checking: independently checking correctness
2. Header space analysis: is the datapath behaving correctly?

1. Static checking: independently checking correctness

Motivations
In today's networks, simple questions are hard to answer: Can host A talk to host B? What are all the packet headers from A that can reach B? Are there any loops in the network? Is group X provably isolated from group Y? What happens if I remove a line in the config file?

The static checker reads the global network view (the <Match, Action> tables of every forwarding element) from the network OS and checks it against a policy such as "A can talk to B" or "Guests can't reach PatientRecords".

How it works: header space analysis

Header space analysis
Model the network as boxes with numbered ports (1, 2, 3, 4); a packet header is a point in header space, and each box transforms sets of headers between its ports.

Can A talk to B?
Inject the set of all possible headers at A's port and propagate it through each box's transfer function; what emerges at B is exactly the set of all packets from A that can reach B.

Header space analysis: consequences
An abstract forwarding model that is protocol independent. It finds all packets from A that can reach B, finds loops regardless of protocol or layer, can prove that two groups are isolated, and can verify whether the network adheres to policy.
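The reachability computation can be sketched by composing per-box transfer functions. Real header space analysis works on wildcard bit-vectors over whole headers; this toy reduces a header to a single TCP destination-port field, and the box definitions are hypothetical.

```python
# Sketch of header space analysis along a path: each box maps an input
# header to a set of output headers (empty set = dropped; rewrites allowed).
# Pushing A's whole header space through yields all packets that reach B.
def reachable_headers(path, headers):
    """All headers that survive traversing the boxes in `path`."""
    hs = set(headers)
    for box in path:
        hs = {out for h in hs for out in box(h)}
    return hs

# Hypothetical boxes between A and B (header = TCP destination port):
firewall = lambda h: [] if h == 22 else [h]      # drops SSH traffic
rewriter = lambda h: [8080] if h == 80 else [h]  # rewrites the HTTP port
```

Starting from {22, 80, 443} at A, only {8080, 443} arrives at B, which answers both "can A talk to B?" and "with which headers?"; an empty result proves isolation.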