Zeppelin - A Third Generation Data Center Network Virtualization Technology based on SDN and MPLS

James Kempf, Ying Zhang, Ramesh Mishra, Neda Beheshti
Ericsson Research

Abstract - Just like computation and storage, networks in data centers require virtualization in order to provide isolation between multiple co-existing tenants. Existing data center network virtualization approaches can be roughly divided into two generations: a first generation approach using simple VLANs and MAC addresses in various ways to achieve isolation, and a second generation approach using IP overlay networks. These approaches suffer drawbacks. VLAN and MAC address based approaches are difficult to manage and tie networking directly into the physical infrastructure, reducing flexibility in VM placement and movement. IP overlay networks typically have a relatively low scalability limit on the number of tenant VMs that can participate in a virtual network, and problems are difficult to debug. In addition, none of these approaches meshes easily with existing provider wide area VPN technology, which uses MPLS. In this paper, we propose a third generation approach: multiple layers of tags to achieve isolation and designate routes through the data center network. The tagging protocol can be either carrier Ethernet or MPLS, both of which support multiple layers of tags. We illustrate this approach with a scheme called Zeppelin: packet tagging using MPLS with a centralized SDN control plane implementing OpenFlow control of the data center switches.

I. INTRODUCTION

Cloud computing has experienced explosive growth, thanks to the proliferation of virtualization techniques, commodity devices and rich Internet connectivity. Cloud customers outsource their computation and storage to public providers and pay for the service usage on demand with the pay-as-you-go charging model. This model offers unprecedented advantages in terms of cost and reliability, compared to the traditional computing model that uses dedicated, in-house infrastructure.
Today, a number of companies provide public cloud computing services, such as Amazon's Elastic Compute Cloud (EC2), Google's Google App Engine, Microsoft's Azure Service Platform, Rackspace's Mosso, and GoGrid. To maximize economic benefits and resource utilization, the cloud provider usually initiates multiple virtual machines (VMs) to execute simultaneously on the same physical server. Moreover, cloud providers use multi-tenancy, where virtual machines from multiple customers, or tenants, can share the same machine. Most cloud providers only use host based virtualization technologies to realize separation and performance isolation between VMs at the end host level. In the network, the cloud provider deploys and manages a set of physical routers and links that carry traffic for all customers indistinguishably. The Service Level Agreements (SLAs) defined in today's cloud computing are mainly centered around computation and storage resources. An application's performance is still unpredictable due to the lack of guarantees at the network layer. Therefore, many companies are still reluctant to move their services or enterprise applications to the cloud, due to reliability, performance, security and privacy concerns. Thus, it is critical for cloud operators to deploy efficient traffic isolation techniques for isolating performance among tenants, minimizing disruption, and preventing malicious DoS attacks. In this work, we aim at providing a scalable and effective solution for traffic isolation in the cloud platform. We categorize the existing work on cloud network virtualization into two generations: the first generation uses simple VLANs and MAC addresses to partition the traffic according to the specific tenant. The VLAN and MAC address based approach has poor manageability, i.e., it is difficult to configure and limited in the number of tenants it supports.
In this approach, the mapping of virtual networks to the physical network is static, leaving little choice for dynamic VM placement and movement. To overcome these limitations, several new techniques using IP overlay networks have been proposed, considered as the second generation. The approach using IP overlay networks suffers from a different scalability problem: typically it can only support hundreds of VMs per virtual network. In addition, the mapping between the on-the-wire packets and the tenant's VMs emitting the traffic is not transparent, complicating debugging. Our scheme, called Zeppelin, builds upon MPLS, which is widely used in provider provisioned Virtual Private Networks and commonly supported in various switches. We propose an OpenFlow based controller application that oversees a set of MPLS based forwarding elements. We rely on the OpenFlow capable virtual switch on each physical server to encapsulate packets with a tenant-specific label and a label specifying the last hop link of the destination. The OpenFlow controller maintains the mapping between the destination VM's address and the destination link label. To improve scalability, we use hierarchical labels, i.e., separating the tenant labels from the location labels. Zeppelin is an example of a third generation data center virtualization scheme, based on multiple layers of packet tagging instead of a single layer as with simple VLANs. We summarize the main contributions of this paper below. We propose a new cloud network virtualization architecture built on top of the MPLS and OpenFlow technologies. We
illustrate its usefulness on various use cases, such as VM creation and movement. We examine the correctness of our design by implementing it on the Mininet emulator, and perform an analysis to verify scalability. This paper is organized as follows. We discuss the existing work in Section II. The design of Zeppelin is presented in Section III and the implementation is described in Section IV. We perform an analysis of scalability in Section V and conclude in Section VI.

II. RELATED WORK

The first generation of data center network virtualization uses simple VLANs or MAC addresses in various ways. In a data center, VLANs are used to partition the traffic according to the specific tenant that the VM belongs to. Such groups are identified by their unique VLAN ID. When the VLAN ID is restricted to 802.1q, the maximum size of the VLAN ID is 12 bits, limiting the number of tenants the virtualized data center network can support. In addition, if a server supports VMs from more than one tenant, the link between the server NIC and the first hop switch must be trunked. Managing VLAN trunks across multiple server to switch links is complicated and error-prone. Other simple approaches, such as MAC address filtering or simple MAC-in-MAC encapsulation, suffer from similar scalability and manageability constraints, especially when the traditional Ethernet distributed control plane is used. To overcome these issues, IP encapsulation approaches were developed. The basis of these approaches is a mesh of IP tunnels connecting the virtual switches on servers supporting the same tenant. The tenant Layer 2 traffic is isolated inside the IP tunnels. VXLAN (Virtual eXtensible LAN) technology is an example of such an approach. The tunneling is realized by encapsulating Ethernet frames with four additional protocol headers: the outer Ethernet, outer IP, outer UDP and VXLAN headers. The VXLAN tunnel is set up between two tunnel end points (VTEPs) whose IP addresses are used as the outer IP addresses.
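As a concrete illustration of the innermost of those four headers, the 8-byte VXLAN header carries a flags byte and a 24-bit network identifier. A minimal sketch of packing and unpacking it (field layout per the VXLAN specification; function names are our own):

```python
import struct

def vxlan_header(vni):
    # 8-byte VXLAN header: flags byte (0x08 = VNI present),
    # 3 reserved bytes, 24-bit VNI, 1 reserved byte
    return struct.pack("!B3xI", 0x08, vni << 8)

def vxlan_vni(header):
    flags, word = struct.unpack("!B3xI", header)
    assert flags & 0x08, "VNI-present flag not set"
    return word >> 8  # drop the trailing reserved byte
```

A VTEP would prepend this header, plus the outer UDP, IP, and Ethernet headers, to each tenant Ethernet frame.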
The VXLAN header is used to encode a 24 bit VXLAN Network Identifier (VNI), or segment ID, which can support up to 16 million separate virtual networks. The VTEPs, which are implemented in the virtual switches of hypervisors or in the access switches, encapsulate packets with the VM's associated VNI. Traffic isolation is implemented via an IP multicast scheme. Each VNI is mapped into a multicast group. Internet Group Management Protocol (IGMP) messages are sent from the VTEP to the upstream switches to join and leave the group. Messages are only sent to members within the same multicast group, eliminating unnecessary unicast flooding. Another example of IP overlay virtualization is NVP, which uses Generic Route Encapsulation (GRE) tunnels (among others). IP overlay approaches typically use centralized control planes in which the IP addresses of the tenant overlays are tracked. This approach is often called software defined networking (SDN). The IP overlay approach achieves a high degree of scalability in the number of tenant virtual networks that can be supported, at the expense of the number of VMs per virtual network. While the topology of the tunnels can be optimized for a particular application, without loss of generality, it is suggested that a full mesh be used, i.e., setting up a tunnel between any pair of hypervisors in the tenant's virtual network. As the number of VMs increases, the number of virtual switches that must be updated when a tenant VM joins or leaves the network becomes unwieldy. Equally, the lack of tools for debugging problems in overlay networks makes network management more difficult than with solutions that use the underlying hardware support for traffic isolation. A third generation of data center virtualization technology utilizes modern tagging based approaches, such as SPBM (802.1aq) or MPLS.
This architecture uses multiple layers of packet tags to designate tenant virtual networks and routing through the data center physical network fabric. While VLAN-based virtualization approaches also use tags, only one tag is available. With multiple layers of tags, scalability in the number of networks can be achieved. Because only the networking elements involved in routing packets between two VMs need to be touched to set up routing, scalability in the number of VMs can be achieved. A mixture of centralized and distributed control planes is used, with smaller data centers favoring a centralized control plane and larger data centers favoring a distributed control plane. Recently, there have been efforts on building virtual overlay networks using SDN. In Zeppelin, we use MPLS for packet tagging and a centralized control plane with OpenFlow as the control protocol for setting up flow state on the switches. For an example of a distributed data center fabric using SPBM, see Allan, et al. Flowvisor represents another approach to cloud network virtualization based on OpenFlow. With Flowvisor, the data center OpenFlow controller uses all fields in the header to partition the network, not just VLAN tags or MPLS labels. The header fields of the tenants are rewritten to map their flows into subsets of the header space to isolate the tenants' traffic. In principle, this can allow two tenants to share the same VLAN tag if their address spaces overlap but they do not use all the addresses, for example, reducing the possibility that the VLAN tag space will run out. While Flowvisor represents an innovative approach to using OpenFlow for data center virtualization, there are some issues. Providing tenants with visibility into which fields are being remapped is required, and debugging problems becomes more complicated because the header fields on the wire don't represent a simple map of the tenant's network address space.
In contrast, debugging multi-tagging virtualized networks is much simpler, because there is a straightforward mapping between the tenant and the tags.

III. DESIGN

The physical infrastructure of data center networks is typically designed around a rack of servers connected into a top of rack switch (TORS). The TORS aggregates traffic from the servers in the rack and feeds it into the data
center backbone network, where additional layers of switches (up to 2) further aggregate and distribute traffic.

Fig. 1. Schematic of data center backbone showing full mesh logical overlay network
Fig. 2. Basic unicast routing scheme

Various designs have been proposed for the backbone network: fat-tree, ring, etc. Zeppelin makes no assumptions about the backbone network design. The only requirements Zeppelin makes on the network are: that the first layer of aggregation (TORS-equivalent) is connected by some kind of overlay network in a full mesh, which may or may not be MPLS (see Figure 1); that the source-side TORS can route an MPLS-labelled packet into the overlay tunnel for the destination-side TORS; and that the destination-side TORS can route an MPLS-labelled packet to the virtual switch (VS) and server designated by the label. Additionally, the source TORS may have multiple overlay tunnels available to a particular destination TORS, which would allow the TORS to do multi-path routing. In the following discussion, we assume that the TORS are OpenFlow-capable switches and that the overlay mesh between the TORS is MPLS. The Zeppelin data center network virtualization scheme is based on an OpenFlow control plane and an MPLS data plane. Although the system was implemented using OpenFlow 1.1, later versions of OpenFlow that support MPLS could also be used. The basic approach involves having an OpenFlow-capable virtual switch (VS) on the server tag packets from a source VM with a label encoding a tenant id and a label specifying the destination link between the destination TORS and the destination VS on the server where the destination VM resides.
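To make the double tagging concrete, the following sketch models a packet's MPLS-style label stack as a list, with the source VS pushing the tenant label first and the destination link label on top. The helper names and label values are illustrative, not from an actual implementation:

```python
def push(packet, label):
    # Push a label onto the top (front) of the packet's label stack
    stack, payload = packet
    return ([label] + stack, payload)

def pop(packet):
    # Pop and return the top label plus the remaining packet
    stack, payload = packet
    return stack[0], (stack[1:], payload)

# Source VS tags a Green tenant packet bound for destination link Ln
pkt = ([], "ip-payload")
pkt = push(pkt, "GT")   # tenant label
pkt = push(pkt, "Ln")   # destination VS-TORS link label
```

The destination TORS later pops any inter-TORS label and inspects Ln to pick the output port; the destination VS pops both remaining labels before delivering the packet.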
While MPLS labels as used in wide area networks typically have no meaning outside the router and link on which they appear, here we are repurposing them as a way to route to a specific destination link across the data center, rather than using the IP address or MAC address to route to a specific destination node. The OpenFlow controller maintains the mapping between the destination IP/MAC address and the destination link label. In the following sections, we describe Zeppelin's design for virtualizing the network for unicast routing and how Zeppelin handles VM movement.

A. Unicast Routing Scheme

Figure 2 illustrates the basic unicast scheme with a simple example. In the figure, there are two tenants, the Green and the Blue. Both tenants have VMs running on the same servers; one server is in Rack 1 and the other in Rack m. The IP address of the two VMs running on the same server in Rack m is exactly the same, though the MAC addresses differ. The link between the source TORS and the source server in Rack 1 is assigned its own label, while the link between the destination TORS and the destination server in Rack m is assigned the label Ln. Two LSPs are available between the source TORS and destination TORS: R1Rm-LSP-1 and R1Rm-LSP-2. This example was not chosen because it is representative of an actual deployment, but rather to illustrate that Zeppelin handles overlapping IP address spaces correctly, ensuring tenant isolation. Routing proceeds through the following steps (list numbers correspond to numbers in the figure): 1) The Green and Blue VMs emit a packet bound for their destination VMs in Rack m. The source VS pushes tenant labels onto the packets, indicated by GT for the Green and BT for the Blue, and a label for the destination link, indicated by Ln. 2) The source TORS uses ECMP to choose a different path for each tenant and pushes inter-TORS routing labels onto the packets.
Packets destined to the Green destination VM are routed over the LSP R1Rm-LSP-1 while those destined to the Blue VM are routed over the LSP R1Rm-LSP-2. 3) The packets are routed through the overlay to the destination TORS. The destination TORS pops the inter-TORS labels and inspects the link label. The link label indicates that the destination is on link Ln, so the destination TORS routes the packet out the corresponding physical port.
Fig. 3. Tables in the cloud network manager

4) The tenant label and destination IP address are used to determine which VM gets the packet. The destination VS pops the link label and tenant label and delivers the packet to the appropriate VM. To set up the routing, the OpenFlow tables in the source and destination VS and the source and destination TORS must be programmed. The following sections describe how Zeppelin interposes on common IP network operations to achieve this, and the changes required to the cloud operating system to support Zeppelin.

B. Changes in Cloud Operating System

The cloud operating system typically has a Cloud Execution Manager (CEM) that manages the operations necessary to start, terminate, and move a VM, and a Cloud Network Manager (CNM) that handles connecting a VM to a virtual network and to the Internet. For example, the CEM corresponds to Nova and the CNM corresponds to Neutron in the OpenStack operating system. We assume that the cloud operating system provides service for multiple tenants, and that each tenant is identified by a tenant id. The CNM maintains mappings between tenant ids, tenant labels, TORS labels, and the MAC and IP addresses of VMs and servers. Figure 3 illustrates the collection of tables implemented in the CNM that maintain these mappings.
These tables and their use are:
- Server MAC to VS Link table (SMVL table) - look up the link label for the link between the TORS and a VS running on a server in the rack aggregated by the TORS, using the server MAC address as a key;
- Tenant Id to Tenant Label table (TITL table) - look up the tenant label using the tenant ID as a key;
- TORS Label to VS Link Label table (TLVLL table) - look up the link label for the link between the TORS and a VS running on a server in the rack aggregated by the TORS, using the TORS backbone mesh label as the key;
- CNM mapping table - maintains the mapping between the tenant id, tenant MAC address, tenant IP address, and the server MAC address. There is one CNM mapping table per tenant.
These tables are used to set up and manage the configuration of OpenFlow routing in the data center.

C. TORS Flow Table Rules for Packets Incoming to the Rack

OpenFlow 1.1 supports multiple tables and we make use of them in Zeppelin. Zeppelin uses two tables for incoming packets: one to handle the inter-TORS routing labels and one to handle the intra-rack link labels. For packets incoming to the rack, Table 1 is programmed with one rule per overlay connection in the inter-TORS backbone network. This rule matches the inter-TORS backbone overlay routing label. The corresponding action pops the label and sends the packet to Table 2. Because the number of racks in a data center may number in the thousands, to increase scalability these labels can be installed at the time an inter-TORS overlay tunnel is needed. In this way, the limited flow table space is not used up by rule/action pairs unless there is a VM actively using the tunnel. In the second flow table, a rule/action pair forwards the packet to the appropriate link on the rack. Since the number of servers in the rack is limited (usually 20-40), these rule/action pairs are installed when the rack is initialized.
The rule/action for the flow table entry is: if the top label matches the label for a particular link between the TORS and a virtual switch in the rack, forward the packet out the port connected to the link. This ensures that incoming packets are forwarded onto the correct link.

D. TORS Flow Table Rules for Packets Outgoing from the Rack

For routing outgoing packets, Table 1 in the source TORS is programmed with a rule/action pair that processes the packet through an ECMP group. Figure 4 shows the part of the TORS Table 1 flow table associated with outgoing packet routing. The flow table itself contains entries with a rule matching the link labels for links between the destination TORS and destination server/VS. If a packet's header matches one of these, then the packet is sent to an ECMP group containing actions that prepare the packet for routing into one of the LSPs leading to the destination TORS. The group hashes the packet header, then selects an action bucket based on the hash. The actions in the action bucket push a label for one of the inter-TORS backbone overlay LSPs, then forward the packet out one of the physical ports toward the destination TORS. Because there could be up to a hundred thousand destination VS/server links in the data center, these flow rules are only installed when a VM requests a connection with another VM or with a destination outside the data center on the Internet.
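The ECMP group behavior can be sketched as follows; the hash function and the two-bucket layout are assumptions for illustration, mirroring the two LSPs of Figure 2:

```python
import zlib

def ecmp_select(header_fields, buckets):
    # Hash the packet header deterministically and select an action
    # bucket, as an OpenFlow select-type group does
    key = "|".join(str(f) for f in header_fields).encode()
    return buckets[zlib.crc32(key) % len(buckets)]

# Each bucket pushes one inter-TORS LSP label and forwards out a port
buckets = [
    {"push_label": "R1Rm-LSP-1", "out_port": 1},
    {"push_label": "R1Rm-LSP-2", "out_port": 2},
]

choice = ecmp_select(("GT", "Ln", "10.0.0.5"), buckets)
```

Because the hash is computed over the header, all packets of a flow take the same LSP, while different flows spread across the available paths.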
Fig. 4. TORS flow table rules for outgoing packet routing
Fig. 5. Message sequence for activating a VM
Fig. 6. Message sequence for address configuration

E. Interventions on IP Network Operations

The virtual switches on the servers need to be programmed with OpenFlow rule/action pairs to intervene on IP address lookup and IP address configuration. There are two rule/action pairs per virtual switch: If the protocol type matches ARP, forward the packet to the controller. This ensures that a VM starting a session obtains the IP address information for the destination from the controller and allows the controller to install OpenFlow rules for routing between the two VMs. If the protocol type matches DHCP, forward the packet to the controller. This ensures that the controller knows the IP address of all VMs in the data center so it can perform the ARP intervention described above.
This rule is only installed if the CEM allows tenants to use DHCP; otherwise, the CEM injects the IP address at the time the VM is started, and records the address without requiring an OpenFlow rule.

F. VM Activation

Figure 5 illustrates the message sequence involved in activating a VM on a server. At (1), the CEM sends a message to the CNM including the Green VM ID, the Green tenant label (GT), and the MAC address of the server (Srvr MAC) on which the VM will be instantiated. The CNM uses the Srvr MAC as a key to look up the link label of the VS to TORS link in the SMVL table, and the tenant ID as a key to look up the tenant label in the TITL table, at (2). At (3), the CNM inserts an entry into the CNM mapping table for the VM, recording the tenant id, MAC address, and server MAC address, and the IP address if the CEM uses address injection to configure the IP address. If the CEM uses address injection, these are then used together with the Green VM's IP address to program the OpenFlow table in the VS with the following rule/action pair for handling inbound traffic (5): If the first label matches the VS-TORS link label for this VS, the second label matches the Green tenant label, and the destination IP address matches the Green VM's IP address, pop the labels and forward the packet to the Green VM's MAC address. This rule ensures that traffic incoming to the VS on the VS-TORS link is forwarded to the Green VM if the packet has the VM's IP address. If the CEM does not use address injection, the inbound traffic rule is installed when a DHCP request is made for an address, as described in the next section.

G. Address Configuration

If the tenant uses DHCP, Figure 6 shows the message sequence between the VM, VS, and CNM. The Green VM sends a DHCP Request message which is intercepted by the server VS (1). The VS forwards the message to the CNM (2). The CNM then acts as either the DHCP server or as a DHCP relay, finding an address for the VM (3). When the address is found, the CNM records the address in the CNM mapping table (4).
The CNM then looks up the Green tenant label and the link label for the TORS-VS link at (5). At (6), the CNM programs the OpenFlow flow table in the VS with the inbound routing rule/action as described in the previous subsection. The CNM then replies to the VM with a DHCP Reply containing the address (7).
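The DHCP interception sequence can be sketched as a single CNM handler. The class below is a minimal stand-in for the CNM state of Figure 3; the table layouts, method names, and address pool are illustrative assumptions:

```python
class CNM:
    """Minimal stand-in for the Cloud Network Manager state."""
    def __init__(self):
        self.smvl = {}       # server MAC -> VS-TORS link label (SMVL table)
        self.titl = {}       # tenant id  -> tenant label (TITL table)
        self.mapping = {}    # tenant id  -> {VM MAC: {"ip", "server"}}
        self.flow_mods = []  # FlowMods "sent" to the switches
        self.next_host = 10

    def allocate_address(self, tenant_id):
        # Stand-in for acting as a DHCP server or relay
        ip = "10.0.0.%d" % self.next_host
        self.next_host += 1
        return ip

    def handle_dhcp_request(self, tenant_id, vm_mac, server_mac):
        ip = self.allocate_address(tenant_id)
        # Record the address in the CNM mapping table
        self.mapping.setdefault(tenant_id, {})[vm_mac] = {
            "ip": ip, "server": server_mac}
        # Look up labels and program the inbound rule in the VS:
        # link label + tenant label + dest IP -> pop labels, forward to VM
        self.flow_mods.append({
            "switch": server_mac,
            "match": (self.smvl[server_mac], self.titl[tenant_id], ip),
            "action": "pop_labels_forward_to_vm"})
        return ip  # carried back to the VM in the DHCP Reply

cnm = CNM()
cnm.smvl["srvr-mac"] = "Ln"
cnm.titl["green"] = "GT"
ip = cnm.handle_dhcp_request("green", "vm-mac", "srvr-mac")
```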
Fig. 7. Destination address discovery message sequence

H. Destination IP and MAC Discovery

When a VM needs to contact another VM at an IP address, it will send an ARP message to discover the MAC address associated with the IP address. Figure 7 shows the message sequence triggered by the Green VM's attempt to discover a destination MAC address. The Green VM broadcasts an ARP Request message with the destination IP address (1). The VS intercepts the ARP Request and forwards it to the CNM (2). The CNM looks up the destination IP address in the Green CNM mapping table, and extracts the Srvr MAC of the server on which the destination VM is running (3). The CNM uses the destination server MAC address to look up the destination VS link label in the SMVL table (4). The CNM then programs the source TORS with flow routes for incoming and outgoing traffic, and the destination VS with flow routes for outbound traffic from the destination VM to the source (5), for those rules that were not installed at the time the TORS and VS were started. The TORS flow routes are described in Sections III-C and III-D. The flow routes installed in the source VS are shown at (6).
The rule/action pair is: if the source MAC address matches the Green source VM and the IP address matches the Green destination IP address, push a Green tenant label and Ln - the label for the link between the destination TORS and destination VS/server - onto the packet, and forward the packet to the source TORS. The forward action to the source TORS is shown at (7). Finally, the CNM replies to the requesting Green VM with an ARP Reply resolving the IP address to the MAC address of the destination VM (8).

I. VM Movement

One of the most difficult aspects of data center networking is dealing with VM movement. When such a movement occurs, the routes to the old server location are no longer valid. If the VM is moved into a new IP subnet, the IP address of the VM can become invalid and existing clients can lose connectivity. Figure 8 illustrates the process of VM movement. The solid arrows indicate control plane messages while the dashed arrows indicate data plane traffic. The numbers are keyed to the list below.

Fig. 8. VM movement

1) The CEM decides on a destination server and begins moving the Green VM. As long as the source VM is still capable of taking traffic, no routing changes are made. 2) When the source VM can no longer take traffic, the CEM notifies the CNM. 3) The CNM modifies the source VS flow table to cause any further traffic to the source Green VM to be forwarded to the CNM for buffering. 4) When the Green VM on the destination server is ready, the CEM notifies the CNM; or, if the hypervisor supports gratuitous ARP, the VM sends an ARP which is forwarded to the CNM through the ARP rule in the destination server VS. 5) The CNM installs a rule into the destination server VS flow table to forward packets with the Green VM's IP address and Green tenant label to the Green VM. If there are no other VMs on the source TORS rack utilizing the link entries for outgoing and incoming traffic that the Green VM utilized, these are deleted to conserve TORS flow table space (not shown). 6) The CNM sends any buffered packets to the Green VM on the destination server.
7) The CNM updates the CNM mapping table entry for the VM with the new server's MAC address. After this change, any new sessions to the Green VM will be answered by the VM at its new location. There are still flow table rules on VSs throughout the data center routing traffic on existing sessions to the old server. The CNM uses lazy update to change these rules to point to the new server. When a packet for the old VM arrives at the old server's VS, the VS encapsulates the packet and forwards it to the CNM, since the rule for the old VM has been deleted. The CNM uses the tenant label on the packet to determine the tenant id and the IP address of the correspondent VM, locates the VS with a rule pointing to the old server, and changes the flow rule on the correspondent's VS and TORS (if necessary) to forward traffic to the new server. The packet is then sent
to the Green VM on the new server. By using lazy update, the CNM is saved from having to update all the rules on every VS at the time of VM movement. All flow rules ultimately either time out or are modified to send traffic to the new server. The CNM can allow the flow rule on the old VS to time out after a grace period, say half a day. VM movement events are not frequent enough that this should prove a burden on the VS flow table storage.

IV. IMPLEMENTATION ON MININET EMULATOR

Fig. 9. CEM and CNM implementation
TABLE I. Summary of rules installation

We used NOX as the OpenFlow controller and implemented the CNM and CEM modules as applications running over the NOX middleware. As illustrated in Figure 9, the CEM and CNM modules talk to each other through well-defined messages (VS_Installed, VM_Installed, VM_Moving, VM_Moved, TORS_Installed, DHCP_Installed). The message names are descriptive of their function; for example, VM_Installed is sent to the CNM when a VM is instantiated. The CNM module was implemented to have an east-west interface to the CEM module and a north-south OpenFlow interface to program the OpenFlow switches. The OpenFlow interface consisted of functions to handle the OpenFlow PACKET-IN message for ARP, DHCP, and broadcast packets, together with a flow removal and join handler. Functions to program the TORS and VS with the rules described above were also implemented.
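The CEM-to-CNM notification handling can be sketched as a simple dispatcher; the message names come from Figure 9, but the payloads and handler bodies are our own placeholders:

```python
class CNMStub:
    # Records the OpenFlow actions the real CNM would emit
    def __init__(self):
        self.log = []

    def vm_installed(self, tenant_id, vm_mac, server_mac):
        # Program the inbound rule on the VM's virtual switch
        self.log.append(("flowmod", "inbound-rule", server_mac))

    def vm_moving(self, tenant_id, vm_mac, old_server):
        # Redirect traffic for the VM to the CNM for buffering
        self.log.append(("flowmod", "buffer-at-cnm", old_server))

    def vm_moved(self, tenant_id, vm_mac, new_server):
        # Install the forwarding rule on the destination VS
        self.log.append(("flowmod", "inbound-rule", new_server))

    def handle(self, message, **payload):
        {"VM_Installed": self.vm_installed,
         "VM_Moving": self.vm_moving,
         "VM_Moved": self.vm_moved}[message](**payload)
```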
The CNM module, as explained in Section III, is responsible for sending OpenFlow messages to the switches when it receives a notification from the CEM module. We used Mininet to emulate the data center network. The topology of the network was described through a JSON file. This file was parsed by Mininet to set up an emulated data center network. We modified Mininet to store node metadata for every switch/host that was brought up on parsing the topology JSON. We added APIs to Mininet to help set up a DHCP server. We also emulated VM movement through a JSON entry in our topology file. We added timer entries in the JSON file that helped determine the start and finish of VM movement.

V. ANALYTICAL AND EMULATED RESULTS

In this section, we present analysis results to show the scalability of Zeppelin. We are particularly interested in flow table state scalability, since that is a key issue for OpenFlow. We first summarize the rule updates in different scenarios and then discuss the dynamics using different experiments. Table I shows the number of rules that need to be installed in four different scenarios. When starting a TORS, one needs to install one rule for each server connecting to this TORS. Therefore, assuming that each TORS supports k servers, k rules are installed. Each rule matches a VS-TORS label with the action of forwarding the packets to the corresponding server. No rules are needed in other switches. When a server is started, one and possibly two rules are installed on the VS to forward the ARP and DHCP packets to the controller (only the ARP rule is needed if the CEM configures IP addresses with address injection). These rules are shared across all the VMs on the same VS; therefore, they only need to be programmed once for all the VMs on a server. In addition, a rule is needed matching the VS-TORS label, the tenant label, and the VM's IP address, with the action of popping the labels and forwarding the packet to the VM.
Although this rule contains three logically different actions, it can be implemented physically as one rule. Thus, we have one rule on the source VS. This rule is established either when the VM is started, if address injection is used, or when the IP address is obtained through DHCP. We propose to set up rules dynamically upon communication, which significantly reduces the memory consumption on both the TORS and the VS. When a new session is established between two VMs, a set of rules is programmed on the fly. First, one needs to install a rule on the VS to match the destination VM, with the action of pushing the tenant label and the destination VS-TORS link label, as well as forwarding to the source TORS. Second, the source TORS needs a rule to match the destination VS-TORS label and forward along the correct path. But if there is already a session between
any VM connected to this source TORS and the destination VS, this rule can be eliminated. In the forward direction, a separate rule is used to map the path from the source TORS to the corresponding forwarding next hop. Assuming multi-path routing is supported, the TORS needs r rules for mapping r ECMP groups. Similarly, in the reverse direction, r rules are installed to match the inter-TORS label and pop the label. Note that these rules may not be needed if there are already sessions established between the two TORSes. Finally, when a VM is migrated to a different server, the controller will modify the inbound forwarding rule to forward packets to the controller, instead of to the VM. The corresponding new rule will be installed on the new VS. The rules on the TORS are updated on demand: triggered by any packet sent to the old location, the controller will update the TORS with the new TORS location. Next, we use experiments to show the rules needed dynamically. We assume a tree topology with 12 racks, as used in prior work. Each TORS is connected to 20 servers. Next, we simulate the process of VM creation and inter-VM communication. In the experiment, we continuously start VM instances. Each VM selects n other VMs to set up sessions with, where n is a random number drawn uniformly around a pre-defined average value. In Figure 10 and Figure 11, we show two cases: an average of 5 sessions per VM and an average of 10 sessions per VM. For each case, we compute the average number of rules per VS and per TORS as a function of the number of VMs per VS/TORS. From Figure 10 we observe that the number of rules increases linearly with the number of VMs on the VS. However, for rules on the TORS, the increase slows down when the number of VMs is large, since many rules only need to be installed once between a pair of VSes and TORSes. Flow scalability is an important criterion when considering the applicability of OpenFlow. Curtis et al. discuss the flow scalability of OpenFlow.
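The qualitative trends in Figures 10 and 11 can be reproduced with a simple counting model. The topology parameters follow the experiment description above; the sharing assumptions and the code itself are our own illustration:

```python
import random

def count_rules(num_racks=12, servers_per_rack=20, vms_per_server=10,
                avg_sessions=5, ecmp_r=2, seed=0):
    """Average forwarding-rule counts per VS and per TORS.

    Modeling assumptions (following Section V): each VM costs one rule
    on its VS plus one rule per outgoing session; a TORS pays 1 + 2r
    rules only for the first session toward each destination TORS,
    after which those rules are shared.
    """
    rng = random.Random(seed)
    vms = [(rack, server, vm)
           for rack in range(num_racks)
           for server in range(servers_per_rack)
           for vm in range(vms_per_server)]

    vs_rules = {}     # (rack, server) -> rule count on that virtual switch
    tors_rules = {}   # rack -> rule count on that top-of-rack switch
    seen_pairs = set()

    for src in vms:
        vs = src[:2]
        vs_rules[vs] = vs_rules.get(vs, 0) + 1       # per-VM creation rule
        n = rng.randint(1, 2 * avg_sessions - 1)     # mean ~ avg_sessions
        for _ in range(n):
            dst = rng.choice(vms)
            vs_rules[vs] += 1                        # per-session rule on VS
            pair = (src[0], dst[0])
            if pair not in seen_pairs:               # shared per TORS pair
                seen_pairs.add(pair)
                tors_rules[src[0]] = (tors_rules.get(src[0], 0)
                                      + 1 + 2 * ecmp_r)

    avg_vs = sum(vs_rules.values()) / len(vs_rules)
    avg_tors = sum(tors_rules.values()) / num_racks
    return avg_vs, avg_tors
```

With these parameters, per-VS rules grow linearly with the session count, while per-TORS rules saturate once rules exist for every TORS pair, mirroring the sublinear TORS curve in Figure 11.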
They show that a typical TORS of the current generation can support 1500 OpenFlow rules in TCAM, whereas an average TORS carries roughly 78,000 flows. In our scheme, flows heading to the same server share a forwarding rule on the TORS, so strategic placement of VMs can substantially reduce the amount of flow table space. In addition, as shown in Figure 11, the number of rules at 5 flows per VM is around 1500, while the number at 10 flows per VM is around 2500 for 1000 VMs per rack. The former number is well within the capacity of a current generation TORS, while the latter should be within the next generation. Finally, the analysis done by Curtis et al. assumes the flow rules are handled in the TCAM. Many switch chips today contain built-in support for MPLS, so a carefully optimized implementation of OpenFlow on such a chip could potentially avoid having to use the TCAM and might support even more rules.

Fig. 10. Average number of rules per VS as a function of the number of VMs per server, for an average of 5 and 10 sessions per VM.

Fig. 11. Average number of rules per TORS as a function of the number of VMs per TORS, for an average of 5 and 10 sessions per VM.

VI. CONCLUSION

In this paper, we have described the design and implementation of Zeppelin, a third generation data center virtualization scheme based on MPLS labels. Zeppelin uses two layers of labels: one identifying the virtual network upon which the tenant is located and one identifying the routing path through the network. The routing path layer itself consists of two labels: one identifying the link between the destination last hop virtual switch and last hop top of rack switch, and one identifying the route through the backbone network. In contrast to the use of MPLS in wide area networks, where labels are meaningless outside of the link and router on which they are allocated, in Zeppelin the labels have semantic relevance across the data center.
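The forwarding behavior implied by this label stack can be sketched as follows. The function names, label values, and the exact stack ordering are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of Zeppelin's two-layer label stack: a tenant label
# identifying the virtual network, a VS-TORS link label identifying the
# destination last-hop link, and a backbone route label identifying the
# path between TORSes. Stack layout is an assumption for illustration.

def source_vs_push(packet, tenant_label, vs_tors_label, route_label):
    """Source VS pushes the full label stack onto the packet."""
    packet["labels"] = [route_label, vs_tors_label, tenant_label]
    return packet

def backbone_forward(packet):
    """Backbone switches forward on the outer route label only."""
    return packet["labels"][0]

def dest_tors_pop(packet):
    """Destination TORS pops the route label, forwards on the VS-TORS label."""
    packet["labels"].pop(0)
    return packet["labels"][0]

def dest_vs_pop(packet):
    """Destination VS reads the tenant label and strips the stack."""
    tenant = packet["labels"][-1]
    packet["labels"] = []
    return tenant
```

Because the controller assigns all three labels centrally, each switch along the path needs to understand only the outermost label it forwards on.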
A centralized controller handles assignment of labels and programming of virtual switches and top of rack switches using the OpenFlow protocol. An implementation of the scheme was built using NOX and tested on top of the Mininet emulator. An analysis of the emulation shows that the flow rule scalability is within the limits of existing switch chips if TCAM is used to hold the rules, and could be even more scalable if the OpenFlow implementation utilizes the built-in MPLS support of the switch chips. For future work, we have planned an extension of Zeppelin to multicast and an implementation on actual OpenFlow hardware. We also plan to study actual data center traffic to see if Zeppelin can support it efficiently.
VII. ACKNOWLEDGEMENTS

The authors would like to thank Ponnanna Uthappa from USC for coding the NOX implementation.

REFERENCES

Amazon Elastic Compute Cloud.
Google App Engine.
Windows Azure Platform.
The Rackspace Cloud.
Cloud Computing.
D. Farinacci, T. Li, S. Hanks, D. Meyer, and P. Traina, "Generic Routing Encapsulation (GRE)," RFC 2784.
M. Mahalingam, D. Dutt, K. Duda, P. Agarwal, L. Kreeger, T. Sridhar, M. Bursell, and C. Wright, "VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks," Internet Draft.
"Networking in the Era of Virtualization."
E. Rosen, D. Tappan, G. Fedorkow, Y. Rekhter, D. Farinacci, T. Li, and A. Conta, "MPLS Label Stack Encoding," RFC 3032, January 2001.
IEEE Std 802.1Q-2011, "Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks," IEEE.
Software-defined networking.
D. Allan and N. Bragg, "802.1aq Shortest Path Bridging: Design and Evolution," IEEE Press.
"Contrail: The Juniper SDN Controller for Virtual Overlay Networks," http://www.juniper.net/us/en/dm/junos-v-contrail/.
OpenFlow.
D. Allan, J. Farkas, P. Saltsidis, and J. Tantsura, "Ethernet Routing for Large Scale Distributed Data Center Fabrics," in Proceedings of IEEE CloudNet.
R. Sherwood, M. Chan, A. Covington, G. Gibb, M. Flajslik, N. Handigol, T.-Y. Huang, P. Kazemian, M. Kobayashi, J. Naous, S. Seetharaman, D. Underhill, T. Yabe, K.-K. Yap, Y. Yiakoumis, H. Zeng, G. Appenzeller, R. Johari, N. McKeown, and G. Parulkar, "Carving Research Slices Out of Your Production Networks with OpenFlow," SIGCOMM Comput. Commun. Rev., vol. 40, no. 1.
M. Al-Fares, A. Loukissas, and A. Vahdat, "A Scalable, Commodity Data Center Network Architecture," in SIGCOMM Proceedings.
C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu, "DCell: A Scalable and Fault-Tolerant Network Structure for Data Centers," in SIGCOMM Proceedings.
"OpenFlow Switch Specification: Version 1.1.0 Implemented (Wire Protocol 0x02)," February 2011.
C. Hopps, "Analysis of an Equal-Cost Multi-Path Algorithm," RFC 2992, November 2000.
OpenStack.
N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: Towards an Operating System for Networks," SIGCOMM Comput. Commun. Rev., vol. 38, July 2008.
Mininet.
JSON.
R. N. Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, "PortLand: A Scalable, Fault-Tolerant Layer 2 Data Center Network Fabric," in SIGCOMM 2009.
A. R. Curtis, J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee, "DevoFlow: Scaling Flow Management for High-Performance Networks," SIGCOMM Comput. Commun. Rev., vol. 41, August 2011.