Network Virtualization History

Network Virtualization and Data Center Networks
263-3825-00: SDN: Network Virtualization
Qin Yin, Fall Semester 2013
Reference: The Past, Present, and Future of Software Defined Networking. Nick Feamster, Jennifer Rexford, and Ellen Zegura. http://gtnoise.net/papers/drafts/sdn-cacm-2013-aug22.pdf

Network Virtualization History
- Dedicated overlays for incremental deployment: Mbone (multicast) and 6bone (IPv6)
- Multi-service networks: Tempest project for ATM networks
- Overlays for improving the network: Resilient Overlay Networks (RON)
- Shared experimental testbeds: PlanetLab, Emulab, Orbit, ...
- Virtualizing the infrastructure: "Overcoming the Internet Impasse through Virtualization"; later testbeds like GENI, VINI, ...
- Virtualization in SDN: Open vSwitch, Mininet, FlowVisor, Nicira NVP, ...

Extending Networking into the Virtualization Layer
Ben Pfaff, Justin Pettit, Teemu Koponen, Keith Amidon, Martin Casado, Scott Shenker
HotNets-VIII, 2009
Reference: Network Virtualization, Ben Pfaff, Nicira Networks, Inc. http://benpfaff.org/~blp/network-virt-lecture.pdf

Data Center Network Design with VMs

Problem: Isolation
- All VMs can talk to each other by default. You don't want someone in engineering screwing up the finance network. You don't want a break-in to your production website to allow stealing human resources data.
- Some switches have security features, but: you bought the cheap ones instead, and there are hundreds of switches to set up.
[Figure: one rack of machines (Machine 1, Machine 2, ..., Machine 40, up to 128 VMs each) attached to a virtual switch (= vswitch), which connects to other ToRs and other aggregation switches]
Problem: Connectivity
- The VMs in a data center can name each other by their MAC addresses (L2 addresses). This only works within a data center.
- To access machines or VMs in another data center, IP addresses (L3 addresses) must be used. And those IP addresses have to be globally routable.
[Figure: one rack of machines (up to 128 VMs each) behind a vswitch; other ToRs and aggregation switches are reached at L2, the Internet at L3]

Non-Solution: VLANs
- A VLAN partitions a physical Ethernet network into isolated virtual Ethernet networks. [Header stack: Ethernet (L2) | VLAN (L2) | IP (L3) | TCP (L4)]
- The Internet is an L3 network. When a packet crosses the Internet, it loses all its L2 headers, including the VLAN tag. You lose all the isolation when your traffic crosses the Internet.
- Other problems: limited number of VLANs, static allocation.

Solution: Network Virtualization

  Virtualization layering  | Network virtualization
  -------------------------+-----------------------
  Virtual resource         | Virtual Ethernet
  Virtualization layer     | Tunnel
  Physical resource        | Physical Ethernet

Tunneling: Separating Virtual and Physical Networks
- The virtual packet [Ethernet | IP | TCP] is carried inside physical headers: [Ethernet | IP | GRE] wrapped around [Ethernet | IP | TCP]. The outer headers are the physical ones; the inner headers are the virtual ones (see the Scapy sketch after the packet-path slides).

Path of a Packet (No Tunnel)
- A packet from one VM to another passes through a number of switches along the way.
- Each switch only looks at the destination MAC address to decide where the packet should go.

Path of a Packet (Via Tunnel)
[Figure: a packet travels from a VM on a machine in Data Center 1 across the Internet to a VM in Data Center 2; the physical headers (Ethernet | IP | GRE) are routed at L3 and switched at L2, while the virtual headers (Ethernet | IP | TCP) ride inside unchanged]
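To make the header layering concrete, here is a minimal sketch using Scapy (our choice of tool, not from the slides); all MAC and IP addresses are made-up examples. It builds the inner (virtual) frame the VMs see and then the outer (physical) headers the underlay forwards.

```python
# Minimal sketch with Scapy (not from the slides): GRE tunneling as
# physical headers wrapped around the untouched virtual frame.
# All addresses below are made-up examples.
from scapy.all import Ether, IP, GRE, TCP

# Virtual headers: what the sending VM thinks it puts on the wire.
virtual_frame = (
    Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") /
    IP(src="10.0.0.1", dst="10.0.0.2") /
    TCP(sport=49152, dport=80)
)

# Physical headers: what the underlay actually routes and switches.
# The vswitch prepends Ethernet/IP/GRE and leaves the inner frame alone.
tunneled = (
    Ether(src="aa:bb:cc:00:00:01", dst="aa:bb:cc:00:00:02") /
    IP(src="192.0.2.10", dst="198.51.100.20") /   # tunnel endpoints
    GRE() /
    virtual_frame
)

tunneled.show()  # outer Ether/IP/GRE, then the inner Ether/IP/TCP
```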
Challenges
- Setting up the tunnels: after VM startup, after VM shutdown, after VM migration
- Handling failures
- Monitoring
- Administration
Use a central controller to set up the tunnels.

A Network Virtualization Distributed System
[Figure: a controller speaks control protocols to OVS instances running on Machine 1 through Machine 4, spread across Data Center 1 and Data Center 2 and wired through the Internet]

Controller Duties
- Monitor: physical network locations, VM states, all packets on virtual and physical networks
- Control: tunnel setup, virtual/physical mapping
- Tells the OVS instances running everywhere else what to do

Open vSwitch
- Ethernet switch implemented in software
- Can be remotely controlled
- Tunnels (GRE and others)
- Integrates with VMMs, e.g., XenServer, KVM
- Free and open source: openvswitch.org

OpenFlow Protocol
- Manages the forwarding behavior of the fast path of the Ethernet switch
- Flow table = ordered list of if-then rules: "If this packet comes from VM A and is going to VM B, then send it out via tunnel 42." (No matching rule: send the packet to the controller.)

OVSDB Protocol
- Used to manage Open vSwitch instances
- Management protocol for less time-critical configuration: create many virtual switch instances, attach interfaces to virtual switches, tunnel setup, set QoS policies on interfaces
- Further reading about the OVSDB protocol: http://networkheresy.com/tag/ovsdb/

OpenFlow in the Data Center (One Possibility)
1. A VM sends a packet.
2. Open vSwitch checks its flow table: no match. It sends the packet to the controller.
3. The controller tells OVS to set up a tunnel to the destination and send the packet on that tunnel.
4. OVS sends the packet on the new tunnel.
5. Normal switching and routing carry the packet to its destination in the usual way.
The same process repeats on the other end to send the reply back. This is done at most on a per-flow basis, and other optimizations keep it from happening too frequently. (A flow-table sketch in Python follows below.)
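The flow-table idea can be sketched in a few lines of Python. This is an illustration of the concept, not the OpenFlow wire format: the table is an ordered list of (match, action) pairs, the first match wins, and a miss punts the packet to the controller.

```python
# Minimal sketch (not the OpenFlow wire format) of a flow table as an
# ordered list of if-then rules; a miss sends the packet to the controller.

FLOW_TABLE = [
    # (match, action): the first matching rule wins.
    ({"src": "VM-A", "dst": "VM-B"}, ("output", "tunnel 42")),
    ({"dst": "VM-C"},                ("output", "tunnel 7")),
]

def matches(match, packet):
    """A packet matches if it agrees on every field the rule names."""
    return all(packet.get(field) == value for field, value in match.items())

def forward(packet):
    for match, action in FLOW_TABLE:
        if matches(match, packet):
            return action
    return ("send to controller", packet)  # no rule: punt to the controller

print(forward({"src": "VM-A", "dst": "VM-B"}))  # ('output', 'tunnel 42')
print(forward({"src": "VM-X", "dst": "VM-Y"}))  # punted to the controller
```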
Open vSwitch: Design Overview / Design Details
[Figure: a hypervisor physical machine hosts virtual machines 1, 2, and 3, each with a VNIC attached to Open vSwitch inside the host operating system; ovs-vswitchd runs in user space above the OVS kernel module and talks to a controller and to administrative CLI/GUI tools, alongside the other elements of the hypervisor; physical NICs connect the host to the network]

Open vSwitch is Fast
- Bandwidth: as fast as the Linux bridge, with the same CPU usage
  - Kernel module: > 1 Gbps
  - ovs-vswitchd: 100 Mbps
  - Controller: 10 Mbps
- Latency:
  - Kernel module: < 1 μs
  - ovs-vswitchd: < 1 ms
  - Controller: < 10 ms
(A sketch of the fast-path/slow-path split behind these numbers follows below.)
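The performance gap between the three layers comes from where each packet is handled. The following Python sketch (our illustration, not actual OVS code) shows the idea: an exact-match flow cache, standing in for the kernel module, answers most packets; a miss falls back to a slow-path lookup, standing in for ovs-vswitchd and possibly the controller, which installs a cache entry so the rest of the flow stays on the fast path.

```python
# Sketch of the fast-path/slow-path split (illustration, not OVS code).
flow_cache = {}  # stands in for the kernel module's exact-match cache

def slow_path_lookup(flow_key):
    """Stands in for ovs-vswitchd: consult the full flow table,
    possibly asking the controller on a miss there as well."""
    return ("output", "tunnel 42")  # the decision made for this flow

def handle_packet(flow_key):
    action = flow_cache.get(flow_key)
    if action is None:                  # cache miss: leave the fast path
        action = slow_path_lookup(flow_key)
        flow_cache[flow_key] = action   # install so later packets stay fast
    return action

key = ("10.0.0.1", "10.0.0.2", 6, 49152, 80)  # 5-tuple of one flow
handle_packet(key)  # first packet: slow path, then cache install
handle_packet(key)  # every later packet: kernel-speed cache hit
```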
Conclusion
- Companies spread VMs across data centers.
- Ordinary networking exposes differences between VMs in the same data center and VMs in different data centers.
- Tunnels can hide the differences.
- A controller and OpenFlow switches at the edge of the network can set up and maintain the tunnels.

Can the Production Network Be the Testbed?
Rob Sherwood, Glen Gibb, Kok-Kiong Yap, Guido Appenzeller, Martin Casado, Nick McKeown, and Guru Parulkar
OSDI, 2010

Problem
- Evaluating new network services is hard: experimenters want to control the behavior of their network.
- New services may require changes to switch software, and they also require access to real-world traffic.
- Good ideas rarely get deployed.

Solution Overview: Network Slicing
- Divide the production network into logical slices
  - Each slice/service controls its own packet forwarding
  - Users pick which slice controls their traffic: opt-in
  - Existing production services run in their own slice, e.g., spanning tree, OSPF/BGP
- Enforce strong isolation between slices: actions in one slice do not affect another
- Allows the (logical) testbed to mirror the production network: real hardware, performance, topologies, scale, users

Network Slicing Architecture
- A slice is a collection of sliced switches/routers
- The data plane is unmodified: packets are forwarded with no performance penalty; slicing works with existing ASICs
- Transparent slicing layer:
  - Each slice believes it owns the data path
  - Enforces isolation between slices, i.e., rewrites or drops rules to adhere to slice policy
  - Forwards exceptions to the correct slice(s)

Slicing Policies
- The policy specifies resource limits for each slice:
  - Link bandwidth
  - Maximum number of forwarding rules
  - Fraction of switch/router CPU (based on the control traffic a particular slice controller can generate)
  - FlowSpace: which packets does the slice control?

FlowSpace: Maps Packets to Slices
- FlowSpace is basically the set of all possible header values defined by the OpenFlow tuple
- Only one controller can ever control a particular flowspace
- Priority solves the flowspace overlapping problem

Real User Traffic: Opt-In
- Allow users to opt in to services in real time
- Individual flows can be delegated to a slice by a user
- Admins can add policy to a slice dynamically
[Figure: FlowVisor splits user traffic among a Web slice, a VoIP slice, a Video slice, and "all the rest"]

FlowVisor
- Implemented on OpenFlow
- Sits between switches and controllers; speaks OpenFlow up and down, acting like a proxy to switches and controllers
- Datapaths and controllers run unmodified
- Creates incentives for building high-quality services

How Does This Work? Message Handling - PacketIn
[Flowchart, reconstructed:] On a PacketIn from the datapath:
1. If it is LLDP, send it to the appropriate slice.
2. Otherwise, extract the match structure and match it against FlowSpace.
3. No match: insert a drop rule.
4. Match: determine which slice controls this packet; drop if that slice's controller is not connected.
5. Check whether the slice's actions are allowed by its policy; if not, log an exception.
6. Send the PacketIn to the slice.
(A sketch of this decision logic follows below.)
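A compact Python sketch of this decision logic (our illustration, not FlowVisor code; field names, priorities, and slice names are made up, and LLDP handling and rule policing are elided): packets are matched against FlowSpace entries, priority resolves overlaps, and the winning slice only receives the PacketIn if its controller is connected.

```python
# Sketch of FlowVisor's PacketIn decision (illustration, not FlowVisor
# code; field names, priorities, and slice names are made up).

FLOWSPACE = [
    # (priority, match, slice): on overlap, the highest priority wins.
    (100, {"dport": 80},       "web-slice"),
    (50,  {"src": "10.0.0.1"}, "experiment-slice"),
]

CONNECTED = {"web-slice"}  # slices whose controllers are currently up

def owning_slice(packet):
    hits = [(priority, slice_name)
            for priority, match, slice_name in FLOWSPACE
            if all(packet.get(f) == v for f, v in match.items())]
    return max(hits)[1] if hits else None

def handle_packet_in(packet):
    slice_name = owning_slice(packet)
    if slice_name is None:
        return "insert drop rule"       # nobody controls this flowspace
    if slice_name not in CONNECTED:
        return "drop: controller not connected"
    return "send PacketIn to " + slice_name

# This packet overlaps both entries; priority 100 assigns it to web-slice.
print(handle_packet_in({"src": "10.0.0.1", "dport": 80}))
print(handle_packet_in({"dport": 22}))  # no match: drop rule inserted
```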
FlowVisor Virtualization
- Network slice = a collection of sliced switches, links, and traffic or header space
- Each slice is associated with a controller
- Transparent slicing, i.e., every slice believes it has full and sole control of the datapath
- FlowVisor enforces traffic and slice isolation
- Controllers and switches do not need to be modified
- Not a generalized virtualization

FlowVisor Summary
- FlowVisor introduces the concept of a slice
- Originally designed to test new services on production traffic
- But it's really only a network slicer! FlowVisor provides slicing, not complete virtualization.

Programmable Virtual Networks: From Network Slicing to Network Virtualization
Ali Al-Shabibi, Open Networking Laboratory
Reference: nvirters.org/wp-content/uploads/2013/05/Virt-July-2013-Meetup.pptx

Network Virtualization
- Decoupling the services provided by a (virtualized) network from the physical infrastructure
- A virtual network is a container of network services (L2-L7) provisioned by software
- Faithful reproduction of the services provided by a physical network
- Analogous to a VM: a complete reproduction of a physical machine (CPU, memory, I/O, etc.)
Reference: http://www.opennetsummit.org/pdf/2013/presentations/bruce_davie.pdf
What is Network Virtualization?
- Overlays, VPNs, VLANs, VRF, MPLS, TRILL: none of these give you a virtual network; they merely virtualize one aspect of a network.
- Topology virtualization: virtual links, virtual nodes, decoupled from the physical topology
- Address virtualization: virtual addressing; maintain current abstractions and add some new ones
- Policy virtualization: who controls what? What guarantees are enforced?

Network Virtualization vs. Slicing
- Slicing: sorry, you can't. You need to discriminate the traffic of two networks with something other than the existing header bits; thus no address virtualization and no complex topology virtualization.
- Network virtualization: virtual networks are completely independent, distinguished by a tenant id; complete address and topology virtualization.

Virtualization: State of the Art
- Functionality is implemented at the edge, using tunneling techniques such as STT, VXLAN, GRE
- The network core is not available for innovation
- A closed-source controller controls the behavior of the network
- Provides address and topology virtualization, but limited policy virtualization. Moreover, the topology looks like only one big switch.

Big Switch Abstraction
[Figure: endpoints E1 through E6 attached to one logical SWITCH, versus the same endpoints spread across SWITCH 1 and SWITCH 2]
- A single switch greatly limits the flexibility of the controller
- You cannot specify your own routing policy. What if you want a tree topology?

OpenVirteX: Ultimate Goal
- Current virtualization solutions:
  - Networks are not programmable; functionality is implemented at the edge
  - The network core is not available for innovation
  - Must provision tunnels to provide a virtual topology
  - Address virtualization is provided by encapsulation
- OpenVirteX:
  - Each virtual network is handed to a controller for programming
  - Edge and core are available for innovation
  - The entire physical topology may be exposed to the downstream controller
  - Address virtualization is provided by remapping/rewriting header fields (see the sketch below)
  - Both dataplanes and controllers can be used unmodified
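A minimal Python sketch of this rewriting idea (our illustration, not OpenVirteX code; tenant ids and all addresses are made up): two tenants reuse the same virtual MAC address, and the edge rewrites it to a globally unique physical address on ingress and restores it on egress, so no encapsulation is needed in the core.

```python
# Sketch of address virtualization by header rewriting (illustration,
# not OpenVirteX code; tenant ids and addresses are made up).

V2P = {  # (tenant id, virtual MAC) -> globally unique physical MAC
    (1, "02:00:00:00:00:01"): "a4:23:05:01:00:01",
    (2, "02:00:00:00:00:01"): "a4:23:05:02:00:01",  # same vMAC, tenant 2
}
P2V = {phys: virt for virt, phys in V2P.items()}

def ingress_rewrite(tenant, frame):
    """Edge switch, VM toward core: make the source MAC unique."""
    frame = dict(frame)
    frame["src"] = V2P[(tenant, frame["src"])]
    return frame

def egress_rewrite(frame):
    """Edge switch, core toward VM: restore the tenant's virtual MAC."""
    frame = dict(frame)
    _tenant, frame["dst"] = P2V[frame["dst"]]
    return frame

f = ingress_rewrite(2, {"src": "02:00:00:00:00:01",
                        "dst": "a4:23:05:01:00:01"})
print(f["src"])                  # a4:23:05:02:00:01: unique in the core
print(egress_rewrite(f)["dst"])  # 02:00:00:00:00:01 back at the far edge
```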
[Figure: each tenant's Network OS sits on its own virtual graph; OpenVirteX performs (1) topology, address space, and control function mapping between each virtual graph and the physical graph]

High-Level Features
- Support for more generalized virtualization, as opposed to slicing
- Address virtualization: use extra bits, or make clever use of the tenant id in the header
- Topology virtualization: on-demand topologies
- Integrates with the cloud using OpenStack
- OpenVirteX is still in the design phase

Network Virtualization and SDN
- Network virtualization != SDN: it predates SDN, and it may use SDN but does not require it
- It is easier to virtualize an SDN switch: run a separate controller per virtual network and leverage the open interface to the hardware
Reference: http://www.cs.princeton.edu/courses/archive/fall13/cos597e/docs/10virtualization.pptx

References
- Extending Networking into the Virtualization Layer. Ben Pfaff, Justin Pettit, Teemu Koponen, Keith Amidon, Martin Casado, Scott Shenker. In Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets-VIII), New York City, NY, October 2009.
- Can the Production Network Be the Testbed? Rob Sherwood, Glen Gibb, Kok-Kiong Yap, Guido Appenzeller, Martin Casado, Nick McKeown, and Guru Parulkar. In Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation (OSDI '10), USENIX Association, Berkeley, CA, USA, 2010, 1-6.
- Reproducible Network Experiments Using Container-Based Emulation. Nikhil Handigol, Brandon Heller, Vimalkumar Jeyakumar, Bob Lantz, and Nick McKeown. In Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), ACM, New York, NY, USA, 2012, 253-264.
- Network Virtualization in Multi-tenant Datacenters. VMware Technical Report, 2013.