D2.2 SDN support for wireless islands and OpenFlow integration into OMF in Europe




Funded by the 7th Framework Programme of the European Union

Project Acronym: SmartFIRE
Project Full Title: Enabling SDN Experimentation in Wireless Testbeds exploiting Future Internet Infrastructure in South Korea & Europe
Grant Agreement: 611165
Project Duration: 26 months (Nov. 2013 - Dec. 2015)

D2.2 SDN support for wireless islands and OpenFlow integration into OMF in Europe

Deliverable Status: Final
File Name: SMARTFIRE_WP2_iMinds_D2.2_V0.3_03092014.pdf
Due Date: 31 August 2014 (M8)
Submission Date: 10 September 2014 (M9)
Dissemination Level: Public
Task Leader: Bart Puype (iMinds)
Author: Bart Puype (iMinds)

Copyright

Copyright 2013-2015 The SMARTFIRE Consortium, consisting of:

- UTH: University of Thessaly (Greece)
- UPMC: Université Pierre et Marie Curie (France)
- IMINDS: iMinds (Belgium)
- UMU: Universidad de Murcia (Spain)
- SIGMA: Sigma Orionis (France)
- NICTA: National ICT Australia (Australia)
- GIST: Gwangju Institute of Science and Technology (South Korea)
- KISTI: Korea Institute of Science and Technology Information (South Korea)
- KAIST: Korea Advanced Institute of Science and Technology (South Korea)
- ETRI: Electronics and Telecommunications Research Institute (South Korea)
- SNU: Seoul National University (South Korea)

Disclaimer

All intellectual property rights are owned by the SMARTFIRE consortium members and are protected by the applicable laws. Except where otherwise specified, all document contents are: SMARTFIRE Project - All rights reserved. Reproduction is not authorised without prior written agreement. All SMARTFIRE consortium members have agreed to full publication of this document. The commercial use of any information contained in this document may require a license from the owner of that information. All SMARTFIRE consortium members are also committed to publishing accurate and up-to-date information, and take the greatest care to do so. However, the SMARTFIRE consortium members cannot accept liability for any inaccuracies or omissions, nor do they accept liability for any direct, indirect, special, consequential or other losses or damages of any kind arising out of the use of this information.

Revision Control

Version | Author | Date | Status
0.2 | iMinds | September 02, 2014 | Initial Draft
0.3 | UTH, UMU | September 09, 2014 | Final Draft
1.0 | | | Final Draft reviewed (FF)

Executive summary

The present document is a deliverable of the SMARTFIRE project, funded by the European Commission's Directorate-General for Communications Networks, Content & Technology (DG CONNECT), under its 7th EU Framework Programme for Research and Technological Development (FP7). This deliverable describes the work enabling new experimentation scenarios in the European testbeds. The extended testbeds, which support tree topologies of OpenFlow switches emulating the Future Internet backbone, are described in Section 2. Section 3 contains a detailed analysis of the features that will be integrated into the OMF framework, including the design parameters and implementation details that should be followed in order to integrate OpenFlow experimentation into the OMF framework.

Table of Contents

1. Introduction
2. SDN support for wireless islands
   2.1 UTH NITOS island
   2.2 iMinds w-iLab.t island
   2.3 University of Murcia Gaia Extended Research Architecture
3. OpenFlow integration into OMF
   3.1 Switch and port configuration
   3.2 Stitching
   3.3 Slicing
4. Conclusions
References

1. Introduction

In order to enable larger scale experiments with more complex networks, such as tree topologies, the existing wireless testbed islands had to be extended with OpenFlow functionality. This brings experiment-controlled OpenFlow operation of the network all the way into the wireless devices of the SmartFIRE testbeds. It also establishes an interconnected OpenFlow backbone that federates the testbeds on the data plane, using OpenFlow switches inside the islands and connectivity between the EU and Korean partners. For some use cases, this OpenFlow functionality is set up and configured during the experiment's lifetime.

As some equipment or infrastructure between testbed facilities or partner islands will not be OpenFlow enabled, stitching or encapsulation of traffic at testbed gateways (and corresponding de-encapsulation) is needed in order to ensure correct forwarding of traffic.

One problem with supporting large-scale experiments and multiple users is that the facilities are shared not only among SmartFIRE users, but also with projects external to SmartFIRE. The partitioning of computing resources is generally provided by the facility managers (e.g., hypervisors, or a platform which configures and installs servers when requested for an experiment). Sharing OpenFlow infrastructure among multiple users is complicated by the fact that OpenFlow control takes a centralized approach: in practice, OpenFlow equipment connects to a single OpenFlow controller only. Sharing such OpenFlow equipment therefore requires an OpenFlow hypervisor.
Figure 1: Flowvisor slicing in an experiment context (multiple experiment OpenFlow controllers each control a slice through per-island Flowvisor instances)

Flowvisor [1] is such a tool: it a) aggregates multiple OpenFlow switches into one virtual switch instance exposed by Flowvisor, and b) connects to multiple controllers, letting each control a slice of the OpenFlow resources. The aggregation allows installing, for example, one Flowvisor per island (or heterogeneous testbed). This keeps some rules for OpenFlow flows local to the island, increasing performance, especially in an EU-Korea context where experiments will see high latency on some control links. The slicing is performed by assigning each controller a flowspace, which partitions the physical OpenFlow resources. Flowspaces are defined similarly to flow matches, i.e., a VLAN (or VLAN range), MAC address range, IP address range or TCP port (range) can be assigned to a certain experiment controller. The controllers need not be aware of the slicing; if one tries to install a flow that exceeds the flowspace of its slice, the flow will be filtered/shrunk by the Flowvisor so as not to conflict with the other slices. To set up the flowspace slices, a separate management interface is available, which is controlled through the provisioning process or the OMF toolset.

The above functionalities (configuration, stitching, and slicing) are to be implemented in the SmartFIRE OMF toolset. Section 2 explains the extensions to the wireless testbed islands; Section 3 details the analysis and implementation considerations for OMF integration of OpenFlow.
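The flowspace assignment described above can be sketched with FlowVisor's fvctl CLI; the slice names, controller addresses, VLAN IDs and password file path below are invented examples, and the exact fvctl syntax differs between FlowVisor releases.

```shell
# Hypothetical sketch: give two experiment controllers VLAN-based slices
# of one island's aggregated switches through FlowVisor.

# Create a slice per experiment, each pointing at that experiment's controller.
fvctl -f /etc/flowvisor/fvpasswd add-slice exp1 tcp:10.0.0.10:6633 exp1@example.org
fvctl -f /etc/flowvisor/fvpasswd add-slice exp2 tcp:10.0.0.20:6633 exp2@example.org

# Partition the flowspace: VLAN 100 belongs to exp1, VLAN 200 to exp2.
# "all" matches every datapath behind this Flowvisor; priority 100;
# "=7" grants the slice full permissions on the matching flows.
fvctl -f /etc/flowvisor/fvpasswd add-flowspace fs-exp1 all 100 dl_vlan=100 exp1=7
fvctl -f /etc/flowvisor/fvpasswd add-flowspace fs-exp2 all 100 dl_vlan=200 exp2=7
```

A controller that installs a rule outside its VLAN would see that rule rewritten or rejected by Flowvisor, as described above.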

2. SDN support for wireless islands

2.1 UTH NITOS island

The NITOS facility comprises two wireless testbeds for experimentation with heterogeneous technologies: an outdoor testbed featuring WiFi, WiMAX and LTE support, and an indoor isolated testbed comprised of advanced, powerful nodes. The outdoor and indoor testbeds consist of 45 and 40 nodes respectively. Each of the two testbeds also features two OpenFlow Ethernet switches interconnecting the nodes: the indoor testbed features two Pronto 3290 switches [2], depicted in Figure 2, and the outdoor testbed two HP 3800 switches [3], depicted in Figure 3. The HP switches interconnect the nodes' experimental Ethernet interfaces. By slicing each switch based on the ports allocated to each experimenter, several instances of OpenFlow controllers can instruct the switches concurrently, mutually excluded from each other.

Figure 2: Pronto 3290
Figure 3: HP 3800

The control and management of the testbed is done using the Control and Management Framework (OMF) open-source software. Users can perform their experiments by reserving slices (nodes, frequency spectrum) of the testbed through the NITOS Scheduler which, together with the OMF framework, supports ease of use for experimentation and code development. OMF talks to Flowvisor, which enables the individual control of each slice of OpenFlow resources.

2.2 iMinds w-iLab.t island

The iMinds iLab.t experimental facilities include two wireless testbeds. The first is located at the iMinds/Ghent University offices and is mostly used for sensor network experiments. The second is located at the Zwijnaarde campus, in a pseudo-shielded environment. The Zwijnaarde w-iLab.t wireless nodes are based on an x86 platform, simplifying the extension of OpenFlow capabilities into this wireless testbed.

Figure 4: w-iLab.t Zwijnaarde nodes and access network

The nodes house the following equipment:

- ZOTAC NM10-A-E mini-ITX system (Atom D525, Gigabit LAN)
- 4 GB DDR2
- 2x Sparklan WPEA-110N/E/11n mini PCIe, AR9280 chipset (each supporting 2x2 MIMO)
- RM090 sensor node [4]

The nodes are powered by a PDU (controlled by the Emulab framework). The Ethernet switches providing wired access to the nodes are a combination of HP ProCurve 2510G, 2610 and 2626-PWR. The facility uses Emulab [5] to match the iLab.t virtual wall network emulation and experimentation facility, which has already been used extensively in OpenFlow experiments. Emulab allows the experimenter to request a number of servers from the virtual wall and connect them in a topology specified by the user, by configuring the central switch which acts as a hub connecting all equipment. For OpenFlow experiments on tree or mesh topologies, these topologies are configured on the facilities: nodes are requested and loaded with a software image (based on e.g. Ubuntu or Debian) which allows running a software OpenFlow switch such as the Open vSwitch suite [6]. Additional nodes may be used for traffic generation, for running the OpenFlow controller, or for visualizing some of the traffic or monitoring results. The w-iLab.t wireless testbed also uses Emulab and supports similar software images on the nodes; OpenFlow functionality can therefore be used on the wireless nodes themselves as well. Connecting the virtual wall and w-iLab.t testbeds allows experiments with more complex topologies, with the virtual wall nodes acting as backbone switches. The virtual wall nodes are Athlon and Xeon based and offer higher performance, supporting higher throughput and node degrees for OpenFlow switching.
Since part of the network infrastructure between the virtual wall and w-iLab.t (such as the HP ProCurve and iMinds backbone switches) is not OpenFlow enabled, a wireless node will be connected to one or more virtual wall nodes using tunnelling, establishing a point-to-point link. For this setup, two main configurations are possible.

A first option is to set up tunnelling such that each wireless interface is connected directly to a port on an OpenFlow switch (Open vSwitch) in the virtual wall. In this case there is no OpenFlow capability on the wireless nodes. Tunnelling and bridging are used to separate (and multiplex) the traffic to each wireless interface until it reaches an OpenFlow component running inside the virtual wall. Note that a single Open vSwitch instance in the virtual wall facility can operate wireless interfaces from multiple w-iLab.t nodes; increased performance may also be available, as the wireless nodes have a limited CPU (Atom D525). The downside of this setup is that the virtual wall node is required; the wireless node cannot be directly integrated in an OpenFlow experiment.

Figure 5: Tunnelling for wireless interfaces

A second setup is possible by installing an OpenFlow image on the wireless nodes. In this case tunnelling is only required to connect the Open vSwitch instance running on the w-iLab.t node with the one running on a virtual wall node. This makes it possible to connect the wireless node directly to e.g. the OFELIA OpenFlow infrastructure (by tunnelling to the iMinds OFELIA gateway) or to external islands through federation; in the previous setup, the traffic would first have to pass through a virtual wall node (the virtual wall is connected to the iMinds OFELIA OpenFlow facility with 10x 1 Gbit/s Ethernet). However, in this scenario, the function of the OpenFlow component in the wireless node would likely be limited to (de)multiplexing traffic from/to the wireless interfaces onto the w-iLab.t access infrastructure and backbone network. More complex routing and switching decisions, for experiments with complex topologies, would be performed on separate OpenFlow components outside the wireless nodes (i.e., Open vSwitch on virtual wall nodes, or physical OpenFlow switches).

Figure 6: Tunnelling for OpenFlow enabled wireless nodes
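Under example assumptions (interface names, IP addresses and the controller address are invented for illustration), the second setup might be configured on the wireless node roughly as follows:

```shell
# Hypothetical sketch of the second setup: Open vSwitch runs on the
# w-iLab.t wireless node and is linked to an Open vSwitch instance on a
# virtual wall node through a GRE tunnel across the non-OpenFlow backbone.

# Bridge the wireless interfaces into a local OVS instance...
ovs-vsctl add-br br-exp
ovs-vsctl add-port br-exp wlan0
ovs-vsctl add-port br-exp wlan1

# ...add a GRE port towards the virtual wall node's address...
ovs-vsctl add-port br-exp gre0 -- set interface gre0 type=gre \
    options:remote_ip=10.2.0.5

# ...and point the switch at the experiment's OpenFlow controller.
ovs-vsctl set-controller br-exp tcp:10.2.0.100:6633
```

The virtual wall node would mirror this with its own bridge and a GRE port whose remote_ip is the wireless node's address.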

Note that in the first setup, the OpenFlow controller does not control the wireless node: the bridging and tunnelling are set up by the experimental tools using OMF. In the second setup, after Open vSwitch is set up on the wireless node, it connects to the OpenFlow controller for the experiment. This controller may run, for example, on the virtual wall, but in a federated context it can run inside an experimental island separate from the iMinds facility.

2.3 University of Murcia Gaia Extended Research Architecture

The Gaia Extended Research Architecture is situated in the Computer Science faculty of the University of Murcia. Gaia has served over the last years as the testbed for different research projects. During these years many new components have been purchased and incorporated into its catalogue, several of them wireless components.

Figure 7: Espinardo Campus deployment. The blue dotted lines represent directional WiMAX connections. The blue triangles represent the orientation of the exterior WLAN antennas. The concentric circles represent the omnidirectional WiMAX antenna location.

Apart from the common indoor equipment available in any computer laboratory, Gaia also has external equipment distributed over the Espinardo Campus (Figure 7), covering an area of 20 ha with directional WiMAX links, one omnidirectional WiMAX access point and a couple of exterior WLAN antennas. In addition to the wireless links, a couple of VLANs over the university production fibre network have been arranged for use as the infrastructure requires. One VLAN is used for administration purposes, providing a secure and reliable way to remotely control any device and to ensure that network signalling (OpenFlow commands) arrives as expected. The remaining VLAN is the so-called production VLAN, which is used to establish tunnels between the desired points.

Figure 8: Espinardo campus SDN deployment. Green lines represent directional WiMAX connections.

All these wireless devices are managed through an SDN network. To deploy the SDN network, the Gaia Extended Research Architecture takes advantage of the University of Murcia's environmental project RECICLATICA, carried out by the university's administrative IT branch, ATICA. The idea of this project is to give a new opportunity to legacy hardware that is being replaced by newer equipment. Some 1990s PCs, designed with extensibility in mind, have a number of extension slots, in particular PCI slots. We request from Reciclatica the legacy hardware with as many slots as available (at least 5) to insert network cards. With this approach we can have as many switches as needed and distribute them over the network. Another advantage of this approach, compared to buying expensive OpenFlow capable switches, is the possibility of easily incorporating enhancements in the switching software (Open vSwitch, or any other alternative existing or yet to be developed).

From the wireless point of view, two different technologies are deployed: 802.11b/g(n) and 802.16e. The Gaia Extended Research Architecture includes some OpenWRT based access points running Open vSwitch software; in addition, the Reciclatica based switches can also be extended using PCI or PCMCIA wireless cards. The outdoor Gaia Extended Research Architecture equipment is located outside Gaia itself and dispersed over the Espinardo campus. Four outdoor wireless antennas are installed, two by two, on the roofs of the Mathematics faculty and the Beaux Arts faculty. The WiMAX deployment (based on Alvarion BreezeMax hardware) uses different technologies, varying from unidirectional to omnidirectional, in the 4.9 GHz and 5.4 GHz spectrum. There is an omnidirectional 5.4 GHz access point situated on the Mathematics building, which is reached by its corresponding client situated on the Luis Vives building. The 4.9 GHz base stations are situated on the roofs of Gaia, the D Building, Mathematics and Luis Vives, and are reached respectively from Beaux Arts, Gaia, Gaia and the ATICA building. An omnidirectional client is available for future tests.

OMF 6.0 is in the process of being deployed in Gaia. The objective is initially to provide control over the switching elements and, in the near future, thanks to the achievements promised in this project, to extend that control to the wireless interfaces.

3. OpenFlow integration into OMF

This section lists the required functionality that should be offered through OMF in the SmartFIRE federated facility. This functionality results from a requirements analysis related to the improved support of larger scale experiments, multiple users, and federation across EU and EU-Korean facilities.

3.1 Switch and port configuration

Part of the experimental facility will consist of physical OpenFlow enabled equipment. Such infrastructure will generally be shared with other experiments and projects external to SmartFIRE, and consequently will not be managed by a SmartFIRE controlled entity. However, the usage of software OpenFlow components such as Open vSwitch leads to the need for management operations that are very similar to setting up and configuring OpenFlow switches. It can be assumed that such software OpenFlow switches may be set up on resources requested specifically for the experiment, and therefore have to be installed and configured from scratch. The functionality for configuring switches and ports includes the following:

- Install OpenFlow software and dependencies (unless the component is a hardware switch dedicated to the experiment, or the software is already available on the disk image requested during the provisioning phase).
- Configure physical and virtual ports with correct parameters (L2/3). This may include setting up tunnelling (see next sections).
- Configure the switch with a deterministic datapath ID and the OpenFlow controller IP address (for the experiment), and bring it up.
- Enable the configured ports on the OpenFlow switch, with deterministic OpenFlow port numbers.

For use cases requiring reconfiguration of the network topology, or for debugging purposes, we should also support:

- Disabling/re-enabling ports on the OpenFlow switch.
- Bringing down a switch, and/or changing parameters such as the datapath ID or controller address.
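A minimal sketch of these steps with Open vSwitch on a Debian/Ubuntu based image might look as follows; the bridge name, datapath ID, controller address and interface names are example values, not the actual SmartFIRE provisioning commands:

```shell
# Install the software OpenFlow switch.
apt-get install -y openvswitch-switch

# Create the switch with a deterministic datapath ID (16 hex digits).
ovs-vsctl add-br br-exp
ovs-vsctl set bridge br-exp other-config:datapath-id=0000000000000042

# Enable ports with deterministic OpenFlow port numbers.
ovs-vsctl add-port br-exp eth1 -- set interface eth1 ofport_request=1
ovs-vsctl add-port br-exp eth2 -- set interface eth2 ofport_request=2

# Point the switch at the experiment controller and bring it up.
ovs-vsctl set-controller br-exp tcp:10.0.0.10:6633
ip link set br-exp up

# Reconfiguration/debugging: disable a port, or tear the switch down.
ovs-vsctl del-port br-exp eth2
ovs-vsctl del-br br-exp
```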
This functionality does not use the OpenFlow protocol, but will be based on a combination of CLI commands (software installation and port configuration) and existing tools and interfaces available for the specific management of the switch.

3.2 Stitching

Stitching involves the connection of OpenFlow components across infrastructure that is not OpenFlow enabled for SmartFIRE experiments. Examples include backbone networks interconnecting heterogeneous facilities in a single island, L2 connectivity between islands (e.g., a GÉANT VLAN), and intercontinental connectivity (possibly L3) connecting Korean and EU facilities. Such connectivity offers a varying amount of transparency, usually advertised as layer 3, layer 2, etc. Even when a point-to-point link is layer 2 transparent, intermediate switching equipment may still inspect Ethernet frames and decide to reroute traffic. For example, if an experiment wants to send traffic from island 1 to island 2 and then onward to island 3, the traffic may be sent directly from island 1 to 3 if the connectivity between the islands is provided by a single partner; in that case the experiment cannot be performed as designed. For such instances, tunnelling or encapsulation of traffic is required in order to shield it from intermediate switching equipment. The result is traffic that (to the external bandwidth provider) looks like point-to-point traffic between exactly two MAC or IP addresses: the end-points or gateways in the two islands being connected. Encapsulation or adaptation of the traffic may also be required when the interconnecting party imposes restrictions on the traffic, e.g., limiting it to one VLAN or a range of VLANs.

Stitching is the re-tagging of VLANs, encapsulation of traffic, NAT, MAC address rewriting, etc. required to connect the islands over such infrastructure by adapting traffic at island gateways and devices, such that the stitching is invisible to the actual experimental traffic. Although inter-island connectivity may be configured statically, the encapsulation and adaptation requirements may still be visible to some extent to the experiment user (but not to the actual OpenFlow controller). In that case some OMF functionality is needed to support these stitching procedures. For use cases that deal with dynamic bandwidth, or testbeds where the provisioned equipment itself needs some sort of tunnelling, stitching will need to be configured during experiment setup. Stitching functionality consists of:

- Configuring interfaces for tunnelling, adaptation or encapsulation.
- Setting up kernel bridges (for software gateways) and bridging these outbound interfaces correctly with the corresponding experiment interfaces.
- Setting up the encapsulating interfaces with appropriate MAC/IP addresses if needed, possibly renaming network interfaces etc. in order to hide the stitching from the experiment.

In terms of implementation, the stitching can use (in loose order of preference, with increasing complexity and decreasing performance):

- Q-in-Q, double VLAN tagging
- VLAN tag rewriting
- TUN/TAP tunnelling (layer 2 tunnel, i.e., TAP tunnelling)
- GRE tunnelling (layer 2 tunnel, i.e., GRETAP)
- OpenVPN

Setting up the encapsulation or tunnelling may be performed using existing CLI tools such as the vconfig toolset (for VLANs), the tun/tap tools, the ip toolset (for GRE), and the OpenVPN client/server tools. Apart from OpenVPN, these tools require matching kernel modules to be installed in the gateway and tunnelling devices. If the gateway is not based on a Linux platform but is rather a commercial hardware switch, the CLI of the switch should be used.
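As an illustration, the Q-in-Q and GRETAP variants could be set up on a Linux software gateway with the ip toolset roughly as follows; the interface names, VLAN IDs and addresses are invented for the example:

```shell
# Q-in-Q: push an outer 802.1ad service tag (VLAN 100) on the uplink,
# so the experiment's own inner VLAN tags cross the provider network
# unchanged.
ip link add link eth0 name eth0.100 type vlan protocol 802.1ad id 100
ip link set eth0.100 up

# GRETAP: a layer 2 GRE tunnel between the two island gateways, bridged
# with the experiment-facing interface so the stitching stays invisible
# to the experimental traffic.
ip link add gretap1 type gretap local 192.0.2.1 remote 198.51.100.1
ip link add name br-stitch type bridge
ip link set gretap1 master br-stitch
ip link set eth1 master br-stitch
ip link set gretap1 up
ip link set eth1 up
ip link set br-stitch up
```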
Such switches are often limited to VLAN/GRE based solutions, however; stitching functionality may therefore require the installation of a software gateway or adaptation box in some islands.

3.3 Slicing

Slicing allows a single piece of OpenFlow equipment to be used by multiple users. Some of the OpenFlow equipment is not controlled by SmartFIRE; for that equipment the slicing will be performed during the provisioning phase, e.g., by sending SFA RSpecs to the aggregate managers of the equipment. For the equipment directly controlled by SmartFIRE, OMF can be used to configure switches and slicing components (Flowvisor).

Slicing does not necessarily require flowspace slicing. In the case of a wireless testbed where the wireless nodes are completely dedicated to one experiment, these nodes will be connected to dedicated ports on the OpenFlow switches. The infrastructure can therefore be sliced by simply re-assigning those ports to a separate datapath ID, and to the correct OpenFlow controller. This is under the assumption that the OpenFlow component supports it: for Open vSwitch, a new switch instance with the appropriate ports can be configured; for hardware switches there may be proprietary solutions such as Virtual Switch Instances. This functionality requires only a simple adaptation of the implementation of the OMF tools described in Section 3.1.

If flowspaces or the controller IP address need to be modified, the OMF tools will need to talk to the Flowvisor instance. This can be done through the management interfaces, or by using the Flowvisor CLI toolset in a script. Care should be taken to properly authenticate and authorize such actions when the Flowvisor and its management run outside of the experiment context; misconfiguration would impact other experiments.

When setting up/configuring OpenFlow components (as per Section 3.1), the experiment may require connecting the experiment switches through the facility Flowvisor, instead of directly to the experiment controller. In this case the implementation of the configuration tools should make sure that there is no possibility of datapath ID conflicts, and of course the Flowvisor IP address should be used instead of the experiment OpenFlow controller's address.
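For Open vSwitch, the port re-assignment approach could look roughly like the following sketch; the bridge names, ports, datapath ID and addresses are example values:

```shell
# Port-based slicing without flowspaces: ports eth3/eth4, dedicated to
# one experiment's wireless nodes, are moved out of a shared Open vSwitch
# instance into a fresh one with its own datapath ID.
ovs-vsctl del-port br-shared eth3
ovs-vsctl del-port br-shared eth4
ovs-vsctl add-br br-exp2
ovs-vsctl set bridge br-exp2 other-config:datapath-id=0000000000000099
ovs-vsctl add-port br-exp2 eth3 -- set interface eth3 ofport_request=1
ovs-vsctl add-port br-exp2 eth4 -- set interface eth4 ofport_request=2

# With flowspace slicing, the switch would instead be pointed at the
# facility Flowvisor (here 10.0.0.2), which multiplexes it towards the
# per-experiment controllers; the datapath ID must then remain unique
# across all experiment switches behind that Flowvisor.
ovs-vsctl set-controller br-exp2 tcp:10.0.0.2:6633
```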

4. Conclusions

This deliverable outlines the extensions that have been added to the respective testbeds, in terms of hardware and management software, with respect to support for SDN experiments. This SDN support requires some additional tools and extensions to OMF. We have analyzed these requirements for switch and port configuration, stitching and slicing, and have included guidelines and design parameters for these features, as well as some implementation considerations.

References

[1] R. Sherwood et al., "FlowVisor: A Network Virtualization Layer", OpenFlow technical report, 2009. http://archive.openflow.org/downloads/technicalreports/openflow-tr-2009-1-flowvisor.pdf
[2] Pronto 3290, http://www.pica8.org/products/p3290.php
[3] HP 3800, http://h17007.www1.hp.com/us/en/networking/products/switches/hp_3800_switch_series/index.aspx#tab=tab_resources
[4] RM090, design by Rmoni & iMinds, http://ilabt.iminds.be/wilabt/hardwarelayout/rm090
[5] Emulab, total network testbed, http://www.emulab.net/
[6] Open vSwitch, an open virtual switch, http://openvswitch.org/