ORAN: OpenFlow Routers for Academic Networks




A. Rostami, T. Jungel, A. Koepsel, H. Woesner, A. Wolisz
Telecommunication Networks Group (TKN), Technical University of Berlin, Germany {rostami, wolisz}@tkn.tu-berlin.de
European Center for Information and Communication Technologies (EICT GmbH), Berlin, Germany {tobias.jungel, andreas.koepsel, hagen.woesner}@eict.de

This work was supported in part by the Deutsche Telekom Innovation Laboratories.

Abstract
We present the design and prototyping of an OpenFlow-enabled Gigabit Ethernet switch based on the ATCA industrial standard platform. Our work consists of an architectural design for introducing OpenFlow switch functionalities to the ATCA switching platform, the design and development of an enhanced library for the OpenFlow protocol stack, and the porting of the data-path functionalities and protocol end-point to the selected platform. We also carry out several performance evaluation tests on the developed switching platform and present selected results. The results demonstrate the successful introduction of the OpenFlow protocol to the ATCA platform and contribute, among other things, to the identification of performance bottlenecks and crucial design issues in OpenFlow switches.

I. INTRODUCTION
OpenFlow [1] proposes decoupling, at least partially, the intelligence from the data-path of a switch/router and delegating it to an external controller that can easily be programmed by network operators. In this approach, different control protocols (e.g., for signaling and routing) can be implemented or modified in the external controller without modifying the data-path forwarding algorithms of the switches. To facilitate this process, OpenFlow defines an open standard interface for programming the switches' data-path, which in turn enables network operators and researchers to design their own control logic and incorporate it into the network. The introduction of such a standard interface allows the development of control protocols and switching hardware to evolve independently of each other, which can lead to a more competitive market. Another consequence of having a standard open interface for controlling the switches' data-path is simpler management of a network composed of switches from different vendors. In addition, this approach facilitates the creation of new services, since the network operators in an OpenFlow-enabled network are not bound to limitations imposed by different switch vendors.

While the idea of OpenFlow has attracted a lot of attention in the networking community, the number of switches that follow high-performance industry standards on the one hand, and are open for flexible configuration and support OpenFlow on the other hand, is rather limited. Although several networking equipment vendors have already introduced OpenFlow-enabled Ethernet switches, the available OpenFlow switches have a closed architecture, which limits the integration of new components and functionalities, if it is possible at all. In fact, OpenFlow provides only the first step towards opening up switches and routers for developing and experimenting with new control mechanisms and integrating them into the network. To achieve a fully programmable network that supports quick integration of new services, e.g., in wireless or optical domains for mobility or service-aware transport networks, we need OpenFlow switches and routers that are flexible and extensible for the integration of new functionalities.
These features are unfortunately missing in most of the currently available OpenFlow switches. To address this problem, we design and prototype an OpenFlow-enabled Gigabit Ethernet switch based on the Advanced Telecommunications Computing Architecture (ATCA) [2] industrial standard. The standard defines highly modular and vendor-independent components and platforms that are mostly used by telecommunications operators to create networks consisting of processing elements (computers) and data transport modules (e.g., Ethernet switches, mobile base stations and optical transmission cards). The modularity and the fact that there is a large market for ATCA components lead to low prices for the individual components. The advantages of using ATCA for an experimental network are manifold: flexibility and composability of components that support quick system development and extension for special purposes (e.g., wireless or optical networking); a well-introduced and widely deployed industrial standard, which guarantees the availability of components over years; and finally the low cost, which is a crucial factor in developing experimental set-ups.

The design and implementation of an OpenFlow switch based on an ATCA platform introduces several challenges, as to our knowledge it is the first of its kind. The challenges are mainly related to the architectural design of the switch, i.e., mapping the functional elements of an OpenFlow switch to the ATCA platform, as well as to the development of an OpenFlow protocol library and porting it to the platform. In our work, we address these challenges and develop a prototypical composition of ATCA components that supports OpenFlow. This establishes an example for other universities and research institutions to follow the same strategy and build their own platforms that can jointly create a larger experimental facility; hence we name our switch ORAN (OpenFlow Routers for Academic Networks). There are many examples of a successful implementation of this strategy, such as XORP [3] or NetFPGA [4], where through the use of standard hardware components and a publicly available software extension a whole branch of research has been opened.

Fig. 1. Functional elements of an OpenFlow switch and a controller: the switch hosts the flow table and an OpenFlow protocol end-point, which talks to the protocol end-point of the controller (running the control protocols) over a secure channel.

The rest of this paper is organized as follows. In Section II the concept of OpenFlow is reviewed and its building blocks are discussed. In Section III we introduce our selected ATCA platform and detail its capabilities and functional elements. In Section IV we present the architectural design of the ORAN switch and the mapping of the OpenFlow data-path functionalities to the selected hardware platform. In Section V we discuss the design and implementation of an OpenFlow firmware for our switch. Section VI describes selected results of evaluation tests on the ORAN switch. Finally, we conclude in Section VII by summarizing the achieved results and discussing future work.

II. OPENFLOW CONCEPT
The concept of OpenFlow is based on separating the forwarding logic and the control functions of an (Ethernet) switch and implementing the latter in a remote control unit. To realize this, the proposed architecture for OpenFlow is composed of three main building blocks, as depicted in Fig. 1: the OpenFlow switch, the controller and the OpenFlow protocol that connects the first two blocks over a secure channel [1]. In the following, we explain each of the three blocks in more detail.

An OpenFlow switch includes a data-path element that realizes the packet forwarding engine. At the heart of the data-path there is a table that is used to decide how to forward incoming packets¹. The table has an entry per identified flow of packets, and hence it is called the flow table². Each flow entry in the flow table includes rules for identifying the associated frames, a set of actions to be executed on the associated frames, and counters for collecting statistics about the frames that have successfully matched that entry. The data-path element is exposed to an external control unit by means of a standard application programming interface (API) (the OpenFlow protocol end-point in Fig. 1).

The rules for matching incoming frames with flow entries are based on the values of the frames' L1-L4 header fields. Fig. 2 depicts the header fields that are used for the matching process in OpenFlow version 1.0. The value of a field for a flow entry can also be set to a wildcard, which means the content of that field in an incoming frame does not influence the result of the matching process. This facilitates a flexible definition of flows ranging from micro flows (e.g., an individual TCP/UDP flow) to highly aggregated flows (e.g., all packets originating from a given autonomous system (AS) domain). The flow table is usually implemented in hardware using ternary content addressable memory (TCAM) [7], which enables efficient matching of frames with flow entries at line rates. TCAMs are quite expensive compared to conventional random access memories (RAMs), which can put a limit on the size of the flow table in an OpenFlow switch. Accordingly, it is of great importance to utilize the space of a flow table in an efficient manner.

¹ Throughout this paper we use packet and frame interchangeably.
² Throughout this work we assume OpenFlow switch specification version 1.0.0 [5]. More recent versions of the OpenFlow switch specification allow more than one flow table per switch [6].
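To make these matching semantics concrete, the following C++ sketch models a flow entry with per-field wildcards. It is our own illustration: the type and field names are invented for this example and are not taken from the OpenFlow specification or from any particular implementation.

    #include <cstdint>

    // Illustrative subset of the OpenFlow 1.0 match fields of Fig. 2.
    struct MatchKey {
      uint16_t ingress_port = 0;
      uint32_t ip_src = 0;
      uint32_t ip_dst = 0;
      uint16_t tp_src = 0;   // TCP/UDP source port
      uint16_t tp_dst = 0;   // TCP/UDP destination port
    };

    // Bit flags marking which fields of an entry are wildcarded.
    enum Wildcards : uint32_t {
      WC_IN_PORT = 1u << 0,
      WC_IP_SRC  = 1u << 1,
      WC_IP_DST  = 1u << 2,
      WC_TP_SRC  = 1u << 3,
      WC_TP_DST  = 1u << 4,
    };

    struct FlowEntry {
      MatchKey key;               // field values to compare against
      uint32_t wildcards = 0;     // fields ignored during matching
      uint64_t packet_count = 0;  // per-entry statistics counter
      // actions omitted: e.g., output port(s), VLAN tag manipulation
    };

    // A wildcarded field matches any value; all other fields must be
    // equal. A TCAM evaluates this for every entry in parallel at line
    // rate; this software loop has the same semantics at O(n) cost.
    bool matches(const FlowEntry& e, const MatchKey& pkt) {
      if (!(e.wildcards & WC_IN_PORT) && e.key.ingress_port != pkt.ingress_port) return false;
      if (!(e.wildcards & WC_IP_SRC) && e.key.ip_src != pkt.ip_src) return false;
      if (!(e.wildcards & WC_IP_DST) && e.key.ip_dst != pkt.ip_dst) return false;
      if (!(e.wildcards & WC_TP_SRC) && e.key.tp_src != pkt.tp_src) return false;
      if (!(e.wildcards & WC_TP_DST) && e.key.tp_dst != pkt.tp_dst) return false;
      return true;
    }

A micro flow sets no wildcard bits, while an entry that wildcards everything except, say, the IP source field aggregates all traffic from one host into a single entry, which is one way to economize on scarce TCAM space.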
Once a frame enters the switch and hits a flow entry in the flow table, the set of actions associated with that entry is executed on the frame. The actions could be, for example, forwarding the frame to one or all switch ports. If no match can be found for an incoming Ethernet frame in the flow table, it is encapsulated in an OpenFlow frame and forwarded to the remote controller to which the switch is connected. The controller then decides what to do with the frame and, if necessary, can cache its decision in the flow table in the form of a new entry. In that case, the switch can directly handle upcoming frames of the same flow without consulting the controller.

To realize this, an OpenFlow controller utilizes the API provided by the OpenFlow protocol to implement a control layer for a network of OpenFlow switches. A single controller can manage several OpenFlow switches if it has enough resources (e.g., processing power). This is an important feature, which facilitates the realization of a centralized control plane for a network of OpenFlow switches. The standard open API provided by OpenFlow further offers the flexibility that any network provider can develop its own controller, which can also be used to control OpenFlow switches developed by different vendors. Accordingly, there is no standard specification for developing an OpenFlow controller, which has led to the development of several controllers with different features. Examples of OpenFlow controllers are NOX [8] and SNAC [9], the former of which is available as an open-source implementation. In addition, by developing appropriate OpenFlow controllers one can realize network virtualization, which can improve network utilization and offers the possibility of running experimental activities in production networks. An example of an OpenFlow controller that enables virtualization of Ethernet switches is FlowVisor [10]. FlowVisor acts between a network of physical OpenFlow switches and several OpenFlow controllers, where each controller controls a slice of the network. The responsibility of FlowVisor is to harmonize the usage of resources at the physical switches by the controllers in such a way that no conflict occurs. In this approach, the physical switches do not need to be aware of the virtualization.
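The miss-handling path described above can be summarized in a few lines. The following is a schematic C++ sketch with stubbed-out interfaces of our own invention; it is not code from an actual switch or controller.

    #include <cstdint>
    #include <optional>
    #include <vector>

    struct Frame { std::vector<uint8_t> bytes; uint16_t ingress_port = 0; };
    struct Action { uint16_t out_port = 0; };  // e.g., forward to a port

    // Stand-ins for the flow table and the OpenFlow protocol end-point.
    std::optional<std::vector<Action>> lookup_flow_table(const Frame&) {
      return std::nullopt;  // stub: pretend every frame is a table miss
    }
    void execute(const std::vector<Action>&, const Frame&) {}
    void send_packet_in(const Frame&) {}  // encapsulate and send Packet-In

    void handle_frame(const Frame& f) {
      if (auto actions = lookup_flow_table(f)) {
        execute(*actions, f);  // hit: handled entirely in the fast path
      } else {
        // Miss: hand (part of) the frame to the remote controller. The
        // controller answers with a Packet-Out for this frame and, if it
        // caches its decision, a Flow-Mod installing a new flow entry, so
        // that subsequent frames of the flow stay in the fast path.
        send_packet_in(f);
      }
    }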

Fig. 2. Header fields used for matching frames to flow entries: ingress port, Ethernet source, Ethernet destination, Ethernet type, VLAN id, VLAN priority, IP source, IP destination, IP protocol, IP ToS, TCP/UDP source port, TCP/UDP destination port.

III. ATCA PLATFORM OF THE ORAN SWITCH
ATCA is a set of standard specifications for developing next-generation, high-performance, carrier-grade communication equipment. The specifications have been developed in cooperation among a large number of telecommunication equipment vendors and providers under the coordination of the PCI Industrial Computers Manufacturers Group (PICMG) [11]. The specifications, denoted PICMG 3.X, are aimed at addressing the requirement of telecommunication equipment providers to have a standardized, carrier-grade common platform. This can help network equipment providers offer solutions within a short time by integrating components from different vendors. That is, ATCA contributes to improving simplicity, interoperability and time to market. These features have contributed to the popularity of ATCA platforms among equipment providers over the last several years. It has been estimated that the ATCA market would grow to 6.7 billion USD by 2012 [12].

The hardware used to develop the ORAN switch is an ATCA integrated platform with 10 GbE fabric and uplink modules from Kontron (Kontron OM5080 [13]). The platform, as shown in Fig. 3, is a 2U 19-inch platform with slots for two carrier boards. Each carrier board (Kontron AT8404) includes a Gigabit Ethernet switch based on a Broadcom chipset (BCM 56502) [14] with two 10GbE and twenty-four GbE ports, out of which three ports (two GbE and one 10GbE) appear on the front side of the platform as uplinks. A carrier board can host up to four mid-size advanced mezzanine cards (AMCs). AMC slots can be used to extend the carrier boards, for example with I/O functionalities (e.g., GbE ports) or processing modules (e.g., network processors). Each AMC slot is connected to the Gigabit Ethernet switch on the carrier board through five GbE ports. Additionally, the Gigabit Ethernet switches of the two carrier boards are interconnected through the backplane with a 10GbE link. The switching chipset has internal packet memories and a TCAM-based flow table that can accommodate up to 2048 flow entries.

Fig. 4 depicts a high-level architecture of the AT8404 carrier board. The Ethernet switch on each board is managed by an embedded PC (PowerPC 405GPr) running at 400 MHz with 128 MB flash memory, 256 MB SDRAM and 16 KB D-cache. The PowerPC manages the Ethernet switch over a 32-bit/33-MHz PCI bus. The switch controller (the PowerPC processor) is accessible via an RS232 port on the front side of the platform. The carrier boards further support the intelligent platform management interface (IPMI) and the simple network management protocol (SNMP) for system management [13]. The dual interconnected GbE switching systems and the high total capacity are important features of the selected ATCA platform, which facilitate the development of large testbeds for OpenFlow. In addition, the AMC slots allow for an extension of the processing capacity for different use-cases (e.g., using a network processor for further protocol de/encapsulation).
IV. ARCHITECTURAL DESIGN OF THE ORAN SWITCH
We have designed and prototyped an architecture for supporting the OpenFlow protocol stack on the AT8404 carrier board. The work is composed of two main parts: designing an architecture for introducing OpenFlow switch functionality into the ATCA carrier board, and designing and implementing a new library for the OpenFlow protocol and porting the developed protocol stack to the carrier board. In this section we focus on the architectural design of our switch.

The first step in developing an OpenFlow hardware switch is to map the OpenFlow functional architecture to the selected hardware architecture. This is an important step, since a proper architectural design has a direct impact on the performance of the switch. A switch is composed of a data-path (also called the fast path) for forwarding data packets and a control path (also called the slow path) for exchanging routing, signaling and management information. As discussed in Section II, the idea of OpenFlow is to push the intelligence out of the switch as much as possible, to keep only the data-path and a minimal control/management interface inside the switch, and to expose this interface to a remote controller. Accordingly, in this work we focus on the design and implementation of the data-path and the OpenFlow protocol end-point inside the switch. A natural choice for the protocol end-point is to implement it in the embedded system of the platform. The mapping of the other functional elements of the data-path to the platform is a crucial step in our switch design process.

According to the OpenFlow specification [5], once an Ethernet frame arrives at an input switch port and successfully passes the low-layer processing, it is delivered to the OpenFlow forwarding engine. The forwarding engine consists of a header parser, a flow matching processor (including the flow table), an action execution unit (e.g., forwarding, dropping) as well as an OpenFlow protocol encapsulation unit for delivering unknown frames to a remote OpenFlow controller. In general, there are two possibilities for implementing a functional element of the data-path on the platform: each functional element can be realized either in hardware (the switch chipset) or in software running on the embedded system. Obviously, the hardware realization of a functional element leads to a higher performance in terms of packet forwarding rate. Nonetheless, this decision depends to a high extent on the capabilities and features of the selected platform. The features to be considered include the processing power of the embedded system, the available bandwidth between the embedded system and the switch chipset (the PCI interface) and, above all, the switch chipset itself and the functions that it supports. If any functional element of the data-path that deals with every incoming frame cannot be directly supported in the chipset, then the forwarding rate of the switch is limited by the capabilities of the embedded system.

Fig. 3. Kontron OM5080 ATCA integrated platform.
Fig. 4. Schematic of the internal architecture of the Kontron OM5080 ATCA platform [13].

The limiting factors here include the communication path between the switch chipset and the processor (i.e., the PCI bus) as well as the processing and buffering resources available at the embedded system. On the other hand, today's Ethernet switching platforms are designed such that almost all packet processing functions are supported in the switch chipset using an ASIC (application-specific integrated circuit), and the embedded system is typically responsible only for table updates and other simple configuration functionalities. That is, the resources available in the embedded system of a switch are usually limited. For example, as detailed in the last section, the embedded system in our ATCA platform has limited processing power and buffering capacity. It also has a connection of limited bandwidth to the switching chipset. This leads us to the conclusion that the data-path elements should be pushed into the hardware whenever the required functionalities can be supported by the switch chipset. We should also note that the performance of the switch in terms of the flow-entry handling rate is in any case dependent on the embedded processor, since the protocol end-point is implemented there. Therefore, the embedded processor can become a bottleneck of an OpenFlow switch if it is not powerful enough, regardless of how capable the switching chipset is.

The chipset available on the selected ATCA platform directly supports header parsing, packet classification as well as frame forwarding based on a combination of header field values. As a result, we map all these functionalities to the Broadcom Ethernet chipset of the carrier board in order to maximize the forwarding capacity of the switch. Nevertheless, the encapsulation of frames for which no match is found in the flow table cannot be directly supported in the hardware and is therefore delegated to the embedded processor. In Section VI we will discuss the performance implications of pushing the OpenFlow encapsulation into the embedded system. Fig. 5 shows the logical architecture and functional elements of the data-path in an OpenFlow switch and their mapping to our hardware platform.
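This split can be expressed as a hardware abstraction boundary. The C++ sketch below is our illustration of that boundary under the mapping just described; the class and method names are invented and do not correspond to the Broadcom SDK or to the actual ORAN firmware.

    #include <cstdint>
    #include <vector>

    struct FlowSpec { /* match fields, wildcards and actions */ };
    struct Frame { std::vector<uint8_t> bytes; };

    // Backed by the BCM 56502 ASIC: parsing, matching, action execution
    // and forwarding all run in the chipset, so per-flow software work is
    // limited to programming table entries.
    class SwitchHardware {
     public:
      virtual ~SwitchHardware() = default;
      virtual bool install_flow(const FlowSpec& spec) = 0;  // program the TCAM
      virtual void remove_flow(const FlowSpec& spec) = 0;
    };

    // Runs on the PowerPC 405GPr embedded system. Only frames without a
    // matching entry cross the PCI bus and reach this code, so its rate
    // bounds miss handling, not line-rate forwarding of known flows.
    class EmbeddedDatapath {
     public:
      explicit EmbeddedDatapath(SwitchHardware& hw) : hw_(hw) {}
      void on_unmatched_frame(const Frame& f) {
        encapsulate_and_send_packet_in(f);  // software OF encapsulation
      }
     private:
      void encapsulate_and_send_packet_in(const Frame&) { /* to controller */ }
      SwitchHardware& hw_;
    };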
V. IMPLEMENTATION OF OPENFLOW FIRMWARE FOR ATCA CARRIER BOARD
Various implementations of OpenFlow have recently been developed by academic and commercial institutions. Porting one of these available libraries seems to be a natural starting point for introducing OpenFlow to the ORAN switch. However, when taking a closer look at the actual OpenFlow specification³, it became clear that OpenFlow is still on its way to becoming more mature, and a number of desirable features, although already proposed, are still missing from the OpenFlow specification. Below, we review some of these desirable features in detail.

First, let us consider the FlowVisor approach of defining a mechanism for virtualizing and slicing the overall name-space under the control of a data-path element [10]. While OpenFlow restricts itself to a single controller being responsible for a data-path element, FlowVisor allows different controllers to control disjoint parts of the overall flow name-space in parallel. However, configuring name-spaces and name-space/controller mappings is done in FlowVisor via an external management interface, rather than allowing the controller and the data-path to negotiate these name-space parameters during the establishment of the OpenFlow control connection.

Proposition 1: An OpenFlow implementation should integrate OpenFlow and FlowVisor functionalities, thereby extending the basic OpenFlow API with means to enable the controller and the data-path to negotiate the parts of a name-space under the control of this control connection.

The OpenFlow API mutually binds user and control plane, and OpenFlow's messages define the set of information exposed by the user plane towards the controlling entity. FlowVisor immediately reveals another interesting property of OpenFlow: its self-similarity. Acting as a proxy entity, FlowVisor manipulates OpenFlow messages that are traveling upstream from user to control plane and vice versa. This is a quite useful feature, allowing network operators to reveal different levels of detail of their network towards a customer, and to change these exposed network properties dynamically over time if needed.

Proposition 2: An OpenFlow implementation should enable native stacking of OpenFlow-compliant layers on top of each other, not only on a single device, but distributed among different controlling entities.

The OpenFlow framework makes specific assumptions about the information exposed by a user plane and its functional elements: consider OpenFlow's interface definition (i.e., an Ethernet-based port with static behavior in terms of bandwidth). In fixed network deployments these assumptions will most likely prove valid.

³ Although we assume OpenFlow 1.0 in this project, things have not changed considerably with OpenFlow 1.1.

Fig. 5. Mapping OpenFlow functional elements to the ATCA carrier board: the BCM 56502 ASIC implements the hardware data path (parser, flow table, flow matching, action execution) for the Ethernet ports, while unmatched frames cross the PCI subsystem into the embedded Linux system, where the OpenFlow encapsulation forwards them towards the controller.

However, other link types exist, e.g., dedicated transport channels with associated physical resources, as in GMPLS-controlled optical paths, or in the wireless domain, where the capacity of the channel underlying the logical link may vary considerably over time. Such properties make interfaces much more dynamic, and exposing these dynamic characteristics is crucial for advanced control entities.

Proposition 3: An OpenFlow implementation should expose more details about the functional elements in the user plane, e.g., interface details, where applicable. The level of detail sent to the control plane should be negotiable, to relieve implementors from dealing with unnecessary details.

Processing in OpenFlow is typically limited to a restricted number of protocols, typically bounded by the capabilities of the underlying switching chipset. The ATCA platform facilitates the integration of additional processing entities, e.g., network processor units or FPGA-based designs. Such entities differ from fixed switching chipsets as they support dynamic deployment of processing logic. OpenFlow's current specification assumes a rather static set of available processing capabilities (i.e., actions) and lacks means to dynamically manipulate the processing logic on a data-path element. Various options for integrating such functionality compete with each other, e.g., virtual ports hosting processing entities, or generic actions that refer to independent processing entities. The impact of advanced processing capabilities on other OpenFlow functional elements (like the packet classifier) must be considered carefully.

Proposition 4: An OpenFlow implementation should allow the integration of more advanced processing capabilities based on network processors or FPGAs.

Starting from these observations and desired features, we decided to create a new OpenFlow implementation from scratch. Based on C++ and the standard template library (STL), the implementation comprises a library based on the OpenFlow 1.0 specification and provides a hardware abstraction layer in order to facilitate porting the code to various hardware designs. Based on this core library, a data-path element was developed for a Linux PC environment and for the AT8404 carrier board that serves as our testing environment. As already outlined, the AT8404 carrier board hosts a PowerPC-based embedded system for controlling the Broadcom BCM 56502 switching chipset. The data-path implementation has been ported to the PowerPC environment, and the hardware abstraction layer has been extended with the functionality for controlling the Broadcom chipset API.

Some of the key features of the developed library are as follows:
- A base class provides full OpenFlow 1.0 compatibility and supports an arbitrary number (limited only by memory and processing constraints) of lower-layer data-path elements and higher-layer controlling entities.
- An RPC stub class transforms OpenFlow messages transparently from TCP to C++ and vice versa.
- Full stacking support for an arbitrary number of instances.
- The hardware abstraction layer is integrated with the Broadcom chipset API.
- Multiple controlling entities per data-path element (FlowVisor-like) are supported using vendor extensions.

We are planning to make our OpenFlow library available to the public in the near future.
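The stacking and multi-controller features suggest a design in which every instance can act as a data-path towards the layers above it and as a controller towards the layers below it. The sketch below is only our schematic illustration of this self-similarity, with invented names; it is not the class design of the library itself, which has not been published.

    #include <memory>
    #include <vector>

    struct OfMessage { /* a decoded OpenFlow 1.0 message */ };

    // One class plays both roles, so instances can be stacked to
    // arbitrary depth, on one device or across controlling entities.
    class OfLayer {
     public:
      virtual ~OfLayer() = default;
      // Messages traveling upstream (data path towards control plane).
      virtual void on_message_from_below(OfLayer& from, const OfMessage& m) = 0;
      // Messages traveling downstream (control plane towards data path).
      virtual void on_message_from_above(OfLayer& from, const OfMessage& m) = 0;

      void attach_below(std::shared_ptr<OfLayer> l) { below_.push_back(std::move(l)); }
      void attach_above(std::shared_ptr<OfLayer> l) { above_.push_back(std::move(l)); }

     protected:
      std::vector<std::shared_ptr<OfLayer>> below_;  // data-path elements
      std::vector<std::shared_ptr<OfLayer>> above_;  // controlling entities
    };

    // A FlowVisor-like proxy is then simply an OfLayer that rewrites or
    // filters messages in both directions, e.g., restricting each upper
    // controller to its negotiated slice of the flow name-space.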

VI. PERFORMANCE EVALUATION OF THE ORAN SWITCH
In this section we present selected results from the performance evaluation of the ORAN switch. One of the most important metrics in evaluating any switch is the maximum achievable throughput, i.e., the rate at which the switch can forward arriving frames without loss. In general, the throughput is evaluated for different packet sizes, since, for example, small packets might saturate the maximum access rate of buffers in the switch. In addition, the throughput of an OpenFlow switch depends on the flow-table matching results, because an incoming frame that does not match any of the existing flow entries must wait for a remote controller decision and cannot be forwarded at the line rate. To account for both factors, i.e., frame sizes and matching results, we have designed and carried out the following sets of experiments.

In the first set of experiments we measure the throughput of the switch with a known input flow, i.e., we populate the flow table with flow entries and inject frames that match an installed flow entry. The action part of the flow entry is set to forward the associated frames to a given port of the switch. The experiments are run for an exhaustive combination of four different frame sizes (64, 500, 1000 and 1518 bytes) and four different numbers of flow entries pre-installed in the flow table (1, 100, 1000, 2048). In the case of more than one flow entry, only one entry is relevant for the injected traffic, and the rest are randomly generated just to fill the table to the desired level. Our measurements demonstrate that the ORAN switch can successfully forward incoming frames at the line rate. We have measured successful forwarding of frames at the line rate for up to 25 Gb/s input data rate. We should note that this is the maximum data rate at which we can inject frames into the switch, due to limitations of our equipment. To further account for the impact on throughput of the packet modifications introduced by the OpenFlow specification, we have repeated the experiments with the action part of the known flow updated to add a VLAN tag and forward the associated frames to a given port of the switch. Also in this case, we have observed line-rate forwarding and modification of frames for up to 25 Gb/s input data rate. This comes as no surprise, since in our switch the forwarding of frames belonging to configured flows, as well as per-packet modifications such as VLAN tagging, are carried out completely in hardware.

In the second set of experiments our aim is to evaluate the switching performance of the ORAN for input traffic that does not hit existing entries in the flow table. For this kind of traffic, input frames are first copied over the PCI bus to the memory space of the embedded system; the embedded processor then encapsulates part of the frame into an OpenFlow frame and sends it to the controller. Upon receiving a positive answer from the controller, the packet has to be copied again over the PCI bus to the Ethernet switch so that it can be forwarded to the right output port. That is, as already discussed in Section IV, the PCI bus and the embedded system are the major limiting factors for the switch throughput in this case. To quantify the impact of these limitations on the switch performance, we measure the throughput of the PCI bus as well as the packet throughput of the switch for input traffic that is unknown to the flow table.
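Before turning to the measurements, it helps to recall the packet rate implied by line rate, which follows from the frame size plus the fixed per-frame Ethernet overhead of 20 bytes (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap). The short C++ program below, added here for illustration, tabulates it for the frame sizes used in the experiments.

    #include <cstdio>

    // Maximum frames per second on a link of the given bit rate, for a
    // frame of the given size plus 20 bytes of fixed Ethernet overhead.
    double line_rate_pps(double link_bps, int frame_bytes) {
      const int overhead_bytes = 20;  // preamble + SFD + inter-frame gap
      return link_bps / ((frame_bytes + overhead_bytes) * 8.0);
    }

    int main() {
      for (int size : {64, 500, 1000, 1518}) {
        // ~1.488 Mpps for 64-byte frames at 1 Gb/s, ~81 kpps at 1518
        // bytes: small frames stress per-packet processing the hardest.
        std::printf("%4d B: %.0f pps at 1 Gb/s\n", size, line_rate_pps(1e9, size));
      }
      return 0;
    }

At the 25 Gb/s aggregate input rate used above, 64-byte frames correspond to roughly 37 million frames per second, which is why matching, forwarding and per-packet modification must remain in the switching ASIC.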
For the PCI bus test we again generate and inject frames of the different sizes mentioned above, for which there is no matching flow entry. Then, on the embedded PC, we measure the rate of successfully delivered frames. That is, no extra processing is performed on the frames, and they are simply discarded by the embedded PC. Fig. 6 depicts the results of our experiments on the PCI throughput. We observe that the maximum throughput is around 1500 kpps for 64-byte frames and decreases with increasing frame size, down to around 20 kpps for the largest frames (1518 bytes). Note that these numbers should be regarded as the maximum forwarding rates of the switch for unknown frames (with no matching flow entries).

Next we measure the switch throughput for the same input traffic, i.e., a traffic flow with no matching flow entries. For this purpose, we use a combination of OpenFlow Packet-In and Packet-Out messages [5] as follows. Once a frame arrives at the switch, it is encapsulated by the embedded PC and forwarded to the controller (Packet-In), which is attached to the embedded PC via a Fast Ethernet link. The controller is configured to send a Packet-Out message instructing the switch to send the incoming packet out on a given output port. Fig. 7 presents the results of the test for different values of the input packet size. For the sake of comparison, we also conduct the same measurements on two commercial OpenFlow-capable Gigabit Ethernet switches. The first commercial switch, denoted by CS1 in Fig. 7, has a PowerPC for switch management running at 533 MHz and 512 MB RAM. The second switch, denoted by CS2, has an embedded system with a PowerPC running at 666 MHz and 256 MB RAM. From the figure we observe that our implementation outperforms both commercial switches in the Packet-In/Packet-Out throughput test, although both commercial switches have a relatively more powerful embedded system. Nevertheless, we see that in all considered cases the throughput of the switch for unknown flows is limited by the embedded switch management system to a few hundred packets per second. Also, note that for the ORAN and CS1 the throughput decreases with the packet size. In our case this is due to the requirement of copying every packet from the kernel space to the user space and back. The main point to take away from the above experiments is that although the switch can successfully forward at the line rate when incoming frames find matching flows in the flow table, for input frames with no pre-configured flow entry the embedded system of the switch becomes a major bottleneck.

Fig. 6. Throughput of the PCI bus connecting the embedded system to the switch chipset, for frame sizes of 64, 500, 1,000 and 1,518 bytes.
Fig. 7. Throughput of the ORAN switch for unknown flows in comparison with two commercial OpenFlow switches (CS1, CS2), for frame sizes of 64, 500 and 1,518 bytes.

VII. CONCLUSION
We presented the design and implementation of the OpenFlow switching protocol on an ATCA industrial platform. The programmability of the OpenFlow protocol, combined with the modularity and composability of the ATCA as present in the ORAN, makes it an ideal platform for developing and testing various networking ideas. As an example, different OpenFlow protocol extensions can easily be tested on the ORAN.

We also presented performance results for the ORAN switch. The results demonstrate that the switch can successfully forward packets at the line rate, on the condition that the incoming traffic hits entries already installed in the flow table. Nevertheless, if the input traffic is unknown to the flow table and the switch needs to consult the controller for forwarding packets, the forwarding rate of the switch appears to be inadequate for many applications. This is due to the resource limitations of the embedded system and its PCI connection to the switch chipset, which are unfortunately quite typical in today's switching platforms. This, together with the fact that consulting an external controller for packet forwarding is one of the most important add-ons of the OpenFlow concept, leads us to the following conclusion: in an OpenFlow network under a realistic mix of traffic flows, the required packet processing capacity seems to be beyond what can be offered by the embedded system of the switch. This is particularly an issue if we take into account that some control protocols require more packets to be sent to the controller.

There are several approaches that can be used to address this issue. One possibility is to minimize the need for communication between the switch and the controller, for example through proactive flow installation. This can be complemented by developing a new generation of switching platforms with more powerful embedded systems and more bandwidth between the embedded system and the switch chipset. Nevertheless, the embedded system in a typical switching platform is not envisaged to be used as a packet processing module. Therefore, we believe that the focus should be on offloading the required packet processing of the embedded system onto another interface, if it cannot be directly realized in the switch chipset. This is the topic of our future work, where we are going to make use of the flexibility of the ATCA platform and push the OpenFlow encapsulation process into one of the available processing modules (AMC cards).

REFERENCES
[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," SIGCOMM Comput. Commun. Rev., vol. 38, pp. 69-74, March 2008.
[2] PICMG 3.0 Revision 3.0, AdvancedTCA Base Specification, 2008.
[3] M. Handley, O. Hodson, and E. Kohler, "XORP: an open platform for network research," ACM SIGCOMM Computer Communication Review, vol. 33, no. 1, pp. 53-57, 2003.
[4] J. Lockwood, N. McKeown, G. Watson, G. Gibb, P. Hartke, J. Naous, R. Raghuraman, and J. Luo, "NetFPGA: an open platform for gigabit-rate network switching and routing," in Proc. IEEE International Conference on Microelectronic Systems Education, pp. 160-161, IEEE, 2007.
[5] OpenFlow Switch Consortium, "OpenFlow switch specification, version 1.0.0," [Online]. Available: http://www.openflowswitch.org/documents/openflow-spec-v1.0.0.pdf.
[6] OpenFlow Switch Consortium, "OpenFlow switch specification, version 1.1.0," [Online]. Available: http://www.openflow.org/documents/openflow-spec-v1.1.0.pdf.
[7] K. Pagiamtzis and A. Sheikholeslami, "Content-addressable memory (CAM) circuits and architectures: a tutorial and survey," IEEE Journal of Solid-State Circuits, vol. 41, no. 3, pp. 712-727, 2006.
[8] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: towards an operating system for networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105-110, 2008.
[9] "Simple network access control (SNAC)," [Online]. Available: http://www.openflow.org/wp/snac/.
[10] R. Sherwood, G. Gibb, K. Yap, G. Appenzeller, M. Casado, N. McKeown, and G. Parulkar, "FlowVisor: a network virtualization layer," Tech. Rep. OpenFlow-TR-2009-1, Stanford University, 2009.
[11] PCI Industrial Computers Manufacturers Group (PICMG), http://www.picmg.org.
[12] S. Stanley, "ATCA, AMC and MicroTCA market: update and forecast," Heavy Reading, vol. 7, no. 9, 2009.
[13] Kontron, "OM5080: technical reference manual," [Online]. Available: http://emea.kontron.com/.
[14] Broadcom, "BCM 56502: product brief," [Online]. Available: http://www.broadcom.com/collateral/pb/56502-pb03-r.pdf.