Scalable Network Virtualization in Software-Defined Networks

Dmitry Drutskoy, Princeton University
Eric Keller, University of Colorado
Jennifer Rexford, Princeton University

Network virtualization gives each tenant in a data center its own network topology and control over its traffic flow. Software-defined networking offers a standard interface between controller applications and switch forwarding tables, and is thus a natural platform for network virtualization. Yet supporting numerous tenants with different topologies and controller applications raises scalability challenges. The FlowN architecture gives each tenant the illusion of its own address space, topology, and controller, and leverages database technology to efficiently store and manipulate mappings between virtual networks and physical switches.

Hosted cloud computing has significantly lowered the barrier for creating new networked services. Likewise, experimental facilities such as the Global Environment for Network Innovations (GENI; www.geni.net) let researchers perform large-scale experiments on a slice of shared infrastructure. By letting tenants share physical resources, virtualization is a key technology in these infrastructures. Although virtual machines are now the standard abstraction for sharing computing resources, the right abstraction for networks is a subject of ongoing debate. Existing solutions differ in the level of detail they expose to individual tenants. Amazon Elastic Compute Cloud (EC2) offers a simple abstraction in which all of a tenant's virtual machines can reach each other. Nicira extends this "one big switch" model by offering programmatic control at the network edge to enable, for example, improved access control (see en/network-virtualization-platform). Oktopus exposes a network topology so tenants can perform customized routing and access control based on knowledge about their own applications and traffic patterns.[1] Each abstraction is most appropriate for a different class of tenants. As more companies move to the cloud, providers must go beyond network bandwidth sharing to support a wider range of abstractions.
With a flexible network virtualization layer, a cloud provider can support multiple abstractions, ranging from a simple "one big switch" abstraction (in which tenants don't need to configure anything) to arbitrary topologies (in which tenants run their own control logic). The key to supporting various abstractions is a flexible virtualization layer that supports arbitrary topologies, address and resource isolation, and custom control logic.

Supporting numerous tenants with different abstractions raises scalability challenges. For example, supporting virtual topologies requires that tenants be able to run their own control logic and learn about relevant topology changes. Software-defined networking (SDN) is an appealing platform for network virtualization because each tenant's control logic can run on a controller rather than on physical switches. In particular, OpenFlow offers a standard API for installing packet-forwarding rules, querying traffic statistics, and learning about topology changes.[2] Supporting multiple virtual networks with different topologies requires a way to map a rule or query issued on a virtual network to the corresponding physical switches, and to map a physical event (such as a link or switch failure) to the affected virtual components. Any virtualization solution must perform these mapping operations quickly, to give each tenant real-time control over its virtual network.

Here, we present FlowN, an efficient and scalable virtualization solution. FlowN is built on top of SDN technology to provide programmable control over a network of switches. Tenants can specify their own address space, topology, and control logic. The FlowN architecture leverages advances in database technology to scalably map between the virtual and physical networks. Similarly, FlowN uses a shared controller platform, analogous to container-based virtualization, to efficiently run tenants' controller applications. Experiments with our prototype FlowN system, built as an extension to the NOX OpenFlow controller,[3] show that these two design decisions lead to a fast, flexible, and scalable solution for network virtualization.

Network Virtualization

For hosted and shared infrastructures, as with cloud computing, providers should fully virtualize an SDN to represent the network to tenants. We next look at the requirements for virtualization in terms of specifying and isolating virtual infrastructures.

SDN Controller Application

To support the widest variety of tenants, a cloud provider should let each one specify custom control logic on its own network topology. SDN is quickly gaining traction as a way to program the network. In SDN, a logically centralized controller manages the collection of switches through a standard interface, letting software control switches from different vendors. With the OpenFlow standard,[2] for example, the controller's interface to a hardware switch is effectively a flow table with a prioritized list of rules. Each rule encompasses a pattern that matches bits of the incoming packets, and actions that specify how to handle those packets. These actions include forwarding out of a specific port, dropping the packet, or sending the packet to the controller for further processing. The software controller interacts with the switches (for instance, handling packets sent to the controller) and installs the flow-table entries (for example, installing rules in a series of switches to establish a path between two hosts).
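To make the flow-table model concrete, here is a minimal Python sketch; it is an illustration only, not the OpenFlow wire format or any real controller API, and the match-field names and action strings are hypothetical stand-ins.

    # Hypothetical sketch of an OpenFlow-style flow table: prioritized rules,
    # each with a match pattern and an action list. A table miss sends the
    # packet to the controller, as described in the text.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        priority: int
        match: dict    # e.g. {"in_port": 1, "dl_vlan": 100}; absent fields are wildcards
        actions: list  # e.g. ["output:2"], ["drop"], ["controller"]

    def lookup(flow_table, packet):
        """Return the actions of the highest-priority rule matching the packet."""
        for rule in sorted(flow_table, key=lambda r: -r.priority):
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["controller"]  # table miss: hand the packet to the controller

    table = [
        Rule(priority=10, match={"dl_vlan": 100, "in_port": 1}, actions=["output:2"]),
        Rule(priority=1, match={}, actions=["drop"]),
    ]
    print(lookup(table, {"dl_vlan": 100, "in_port": 1}))  # ['output:2']
    print(lookup(table, {"dl_vlan": 200, "in_port": 3}))  # ['drop']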
With FlowN, each tenant can run its own controller application. Of course, not all tenants need this much control. Tenants wanting a simpler network representation can simply choose from default controller applications, such as all-to-all connectivity (similar to what Amazon EC2 offers) or an interface similar to a router (as with RouteFlow[4]). This default controller application would run on top of the virtualization layer that FlowN provides. As such, tenants can decide whether they want full network control or a preexisting abstraction that matches their needs.

Virtual Network Topology

In addition to running a controller application, tenants also specify a network topology. This lets each tenant design a network for its own needs, such as favoring low latency for high-performance computing workloads or high bisection bandwidth for data-processing workloads.[5] With FlowN, each virtual topology consists of nodes, interfaces, and links. Virtual nodes can be either a server (virtual machine) or an SDN-based switch. Each node has a set of virtual interfaces that connect to other virtual interfaces via virtual links.
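As an illustration of what such a topology request might contain, the Python sketch below defines a tiny virtual topology with per-component resource constraints (described next); the class and field names are hypothetical and do not reflect FlowN's actual request format.

    # Hypothetical sketch of a tenant's virtual topology request: nodes
    # (switches or VMs) with optional resource constraints, and links
    # between named interfaces with bandwidth and latency requirements.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualNode:
        name: str
        kind: str  # "switch" or "vm"
        constraints: dict = field(default_factory=dict)  # e.g. flow-table entries, cores

    @dataclass
    class VirtualLink:
        endpoint_a: str              # "node:interface"
        endpoint_b: str
        bandwidth_mbps: int = 0      # requested guarantee; 0 means best effort
        max_latency_ms: float = 0.0  # 0 means no latency constraint

    request = {
        "nodes": [
            VirtualNode("s1", "switch", {"max_flow_entries": 1000}),
            VirtualNode("vm1", "vm", {"cores": 2}),
            VirtualNode("vm2", "vm", {"cores": 4}),
        ],
        "links": [
            VirtualLink("vm1:eth0", "s1:1", bandwidth_mbps=100, max_latency_ms=2.0),
            VirtualLink("vm2:eth0", "s1:2", bandwidth_mbps=100),
        ],
    }
    print(len(request["nodes"]), "nodes,", len(request["links"]), "links")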
[Figure 1. System design. The FlowN architecture lets tenants write arbitrary controller software to fully control their own address space, defined by the fields in the packet header, and uses modern database technology to perform the mapping between the virtual and physical address space. Labeled components: tenant 1 and tenant 2 applications running on container-based application virtualization, an address-mapping database with a cache, an arbitrary embedder, and the SDN-enabled network.]

Each virtual component can include resource constraints, for example the maximum number of flow-table entries on a switch, the number of cores on a server, or the bandwidth and maximum latency for virtual links. The cloud provider runs an embedding algorithm[6] to map the requested virtual resources to the available physical ones.

Importantly, with full virtualization, virtual topologies are decoupled from the physical infrastructure. This is in contrast to slicing the physical resources (as occurs with FlowVisor[7]), which lets tenants run their own controller over a portion of the traffic and a subset of the physical network. However, with slicing, the mapping between virtual and physical topologies is visible to the tenants. With FlowN, mappings aren't exposed; instead, tenants simply see their virtual networks. With this decoupling, cloud providers can offer virtual topologies with richer connectivity than the physical network, or remap virtual networks to hide the effects of failures or planned maintenance. Virtual nodes, whether switches or virtual machines, can move to different physical nodes without changing the tenant's view of the network.

Address Space and Bandwidth Isolation

Each tenant has an address space, defined by fields in the packet headers (source and destination IP address, TCP port numbers, and so on). Rather than divide the available address space among tenants, FlowN presents a virtual address space to each application. This gives each tenant control over all fields within the header (for instance, two tenants can use the same private IP addresses). To do this, the FlowN virtualization layer maps between the virtual and physical addresses. To distinguish between the traffic and rules for different tenants, the edge switches encapsulate incoming packets with a protocol-agnostic extra header, transparent to the tenant's virtual machines and controller application. This extra header (for example, a virtual LAN, or VLAN, tag) simply identifies the tenant; we don't run the associated protocol logic (for example, the VLAN spanning-tree protocol).
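The Python sketch below illustrates this edge encapsulation idea with hypothetical helper functions and a dict-based rule format (stand-ins only, not FlowN's or OpenFlow's actual API): the ingress switch tags a tenant's traffic, and the egress switch strips the tag before delivery.

    # Hedged sketch: the tag only identifies the tenant, so two tenants can
    # reuse the same addresses; no VLAN protocol logic (such as spanning tree)
    # is implied by the tag.
    def ingress_encapsulation_rule(tenant_vlan, tenant_port, next_hop_port):
        """Tag all traffic arriving on a tenant-facing edge port."""
        return {
            "match": {"in_port": tenant_port},
            "actions": [f"push_vlan:{tenant_vlan}", f"output:{next_hop_port}"],
        }

    def egress_decapsulation_rule(tenant_vlan, delivery_port):
        """Strip the tag before handing the packet to the tenant's VM."""
        return {
            "match": {"dl_vlan": tenant_vlan},
            "actions": ["pop_vlan", f"output:{delivery_port}"],
        }

    print(ingress_encapsulation_rule(tenant_vlan=100, tenant_port=1, next_hop_port=2))
    print(egress_decapsulation_rule(tenant_vlan=100, delivery_port=4))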
In addition to address-space isolation, the virtualization solution must support bandwidth isolation. Although current SDN hardware doesn't let network controllers limit bandwidth usage, the recent OpenFlow specification includes this capability.[8] Using embedding algorithms, we guarantee bandwidth to each virtual link. As support for enforcing these allocations becomes available, we can incorporate it into FlowN.

FlowN Architecture Overview

Hosted cloud infrastructures are typically large datacenters with many tenants. As such, our virtualization solution must scale in both the physical network's size and the number of virtual networks. Being scalable and efficient is especially critical in SDNs, where packets are not only handled in the hardware switches but can also be sent to the centralized controller for processing. Virtualization has two main performance issues in an SDN context.

First, an SDN controller must interact with switches through a reliable communication channel (such as SSL over TCP) and maintain a current view of the physical infrastructure (for example, which switches are alive). This incurs both memory and processing overhead, and introduces latency. Second, with virtualization, any interaction between a tenant's controller application and the physical switches must go through a mapping between the virtual and physical networks. As the number of virtual and physical switches increases, performing this mapping becomes a limiting factor in scalability.

To overcome these issues, the FlowN architecture (see Figure 1) is based around two key design decisions.
First, FlowN lets tenants write arbitrary controller software that has full control over the address space and can target an arbitrary virtual topology. However, we use a shared controller platform (such as NOX[3]) rather than running a separate controller for each tenant. This approach is analogous to container-based virtualization such as LXC for Linux or FreeBSD Jails ( handbook/jails.html). Second, we use modern database technology to perform the mapping between the virtual and physical address space. This provides a scalable solution that's easily extensible as new functionality is needed.

Container-Based Virtualization

Each tenant has a controller application that runs on top of its virtual topology. This application consists of handlers that respond to network events (such as topology changes, packet arrivals, and new traffic statistics) by sending new commands to the underlying switches. Each application should have the illusion that it runs on its own controller. However, running a full-fledged controller for each tenant is unnecessarily expensive. Instead, FlowN supports container-based virtualization by mapping API calls in the NOX controller back and forth between the physical and virtual networks.

Full Controller Virtualization Overhead

Running a separate controller for each tenant seems like a natural way to support network virtualization. In this solution, the virtualization system exchanges OpenFlow messages directly with the underlying switches and with each tenant's controller. The system maintains the relationships between physical and virtual components, and whatever encapsulation is applied to each tenant's traffic. When physical events happen in the network (for example, a link or switch failure, or a packet-in event for a particular virtual network), the system translates them into one or more virtual events and sends the corresponding OpenFlow messages to the appropriate tenants. Similarly, when a tenant's controller sends an OpenFlow message, the virtualization system converts the message (for example, by mapping virtual switch identifiers to physical switch identifiers and including the tenant-specific encapsulation header in the packet-handling rules) before sending it to the physical switch. The FlowVisor system follows this approach,[7] virtualizing the switch data plane by mapping OpenFlow messages sent between the switches and the per-tenant controllers.

Using the OpenFlow standard as the interface to the virtualization system has some advantages (tenants can select any controller platform, for instance), but it introduces unnecessary overhead. Repeatedly marshalling and unmarshalling parameters in OpenFlow messages incurs extra latency. Running a complete controller instance for each tenant involves running a large code base, which consumes extra memory. Periodically checking the separate controllers' liveness incurs additional overhead. The overhead for supporting a single tenant might not be significant. However, when we consider that the virtualization layer must provide the full switch interface for each virtual switch, and that virtual switches will outnumber physical switches by at least an order of magnitude, the cumulative overhead is significant, requiring more computing resources and incurring extra, unnecessary latency.
Container-Based Controller Virtualization

Instead, we adopt a solution inspired by container-based virtualization, in which a shared kernel runs multiple user-space containers with independent name spaces and resource scheduling. FlowN is a modified NOX controller that can run multiple applications, each with its own address space, virtual topology, and event handlers. Rather than map OpenFlow protocol messages, FlowN maps between NOX API calls. In essence, FlowN is a special NOX application whose own event handlers call tenant-specific event handlers. For example, when a packet arrives at the controller, FlowN runs its packet-in event handler, identifying the appropriate tenant (based on the VLAN tag on the packet, for example) and invoking that tenant's own packet-in handler. Similarly, if a physical port fails, FlowN's port-status event handler identifies the virtual links traversing the failed physical port and invokes the port-status event handler for each affected tenant with the ID of its failed virtual port.
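The standalone Python sketch below illustrates this dispatch pattern; the class names, event encoding, and threading details are simplified assumptions, not FlowN's actual NOX integration.

    # Hedged sketch: one shared controller process, per-tenant handlers in
    # their own threads, and physical events translated into virtual events.
    import threading
    import queue
    import time

    class TenantApp:
        """Per-tenant controller logic, driven by events on its virtual view."""
        def __init__(self, name):
            self.name = name
            self.events = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            while True:
                kind, arg = self.events.get()
                if kind == "packet_in":
                    print(f"[{self.name}] packet_in on virtual switch {arg}")
                elif kind == "port_status":
                    print(f"[{self.name}] virtual port {arg} went down")

    class Dispatcher:
        """Stand-in for FlowN's own event handlers registered with the controller."""
        def __init__(self):
            self.tenant_by_vlan = {}  # VLAN tag -> TenantApp
            self.virtual_ports = {}   # physical port -> [(TenantApp, virtual port)]

        def packet_in(self, vlan_tag, phys_switch):
            # Physical-to-virtual: the encapsulation tag identifies the tenant;
            # a real system would also map phys_switch to a virtual switch ID.
            self.tenant_by_vlan[vlan_tag].events.put(("packet_in", phys_switch))

        def port_down(self, phys_port):
            # One physical failure can affect many tenants (one-to-many mapping).
            for app, vport in self.virtual_ports.get(phys_port, []):
                app.events.put(("port_status", vport))

    d = Dispatcher()
    t1 = TenantApp("tenant1")
    d.tenant_by_vlan[100] = t1
    d.virtual_ports[7] = [(t1, "vport-3")]
    d.packet_in(vlan_tag=100, phys_switch="s9")
    d.port_down(phys_port=7)
    time.sleep(0.1)  # give the tenant thread a moment to print before exit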
Likewise, when a tenant's event handler invokes an API call, FlowN intercepts the call and translates between the virtual and physical components. Suppose a tenant calls a function that installs a packet-handling rule in a switch. FlowN maps the virtual switch ID to the corresponding physical switch's identifier, checks that the tenant hasn't exceeded its allotted space for rules on that switch, and modifies the pattern and actions in the rule. When modifying a rule, FlowN changes the pattern to include the tenant-specific VLAN tag, and the actions to forward onto the physical ports associated with the tenant's virtual ports. Next, FlowN invokes the underlying NOX function to install the modified rule in the associated physical switch. FlowN follows a similar approach to intercept other API calls for removing rules, querying traffic statistics, sending packets, and so on.

Each tenant's event handlers run within their own thread. Although we haven't incorporated any strict resource limits, CPU scheduling provides fairness among the threads. Furthermore, running a separate thread per tenant protects against a tenant's controller application that never returns (for example, because of an infinite loop) and would otherwise prevent other controller applications from running.
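Returning to the rule-installation path just described, here is a hedged Python sketch of the virtual-to-physical translation; the mapping dictionaries, quota check, and rule format are illustrative stand-ins (the real system derives these mappings from the database described in the next section).

    # Hedged sketch of the virtual-to-physical translation for rule installation.
    VNODE_TO_PNODE = {("tenant1", "vs1"): "ps7"}  # virtual switch -> physical switch
    VPORT_TO_PPORT = {("tenant1", "vs1", 1): 3}   # virtual port -> physical port
    TENANT_VLAN = {"tenant1": 100}                # per-tenant encapsulation tag
    RULE_QUOTA = {"tenant1": 1000}                # allotted flow-table space
    rules_used = {"tenant1": 0}

    def install_virtual_rule(tenant, vswitch, match, out_vport):
        """Translate a tenant's rule and return (physical switch, physical rule)."""
        if rules_used[tenant] >= RULE_QUOTA[tenant]:
            raise RuntimeError("tenant exceeded its flow-table allotment")
        pswitch = VNODE_TO_PNODE[(tenant, vswitch)]
        # Rewrite the pattern to include the tenant's VLAN tag, and the action
        # to use the physical port behind the tenant's virtual port.
        phys_match = dict(match, dl_vlan=TENANT_VLAN[tenant])
        phys_actions = [f"output:{VPORT_TO_PPORT[(tenant, vswitch, out_vport)]}"]
        rules_used[tenant] += 1
        # A real implementation would now call the underlying NOX install function.
        return pswitch, {"match": phys_match, "actions": phys_actions}

    print(install_virtual_rule("tenant1", "vs1", {"nw_dst": "10.0.0.2"}, out_vport=1))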
Database-Driven Mappings

Container-based controller virtualization reduces the overhead of running multiple controller applications. However, any interaction between a virtual topology and the physical network still requires a mapping between virtual and physical spaces. This can easily become a bottleneck, which FlowN overcomes by leveraging advances in database technology.

Virtual Network Mapping Overhead

To provide each tenant with its own address space and topology, FlowN maps between virtual and physical resources. It encapsulates tenants' packets with a unique header field (for example, a VLAN tag) as they enter the network. To support numerous tenants, the switches swap these labels at each hop in the network. This lets a switch classify packets based on the physical ingress port, the label in the encapsulation header, and the fields the tenant application has specified. Each switch can thus uniquely determine which actions to perform. The virtualization software running on the controller determines these labels.

A virtual-to-physical mapping occurs when an application modifies the flow table (adds a new flow rule, for example); the virtualization layer must alter the rules to uniquely identify the virtual link or switch. A physical-to-virtual mapping occurs when the physical switch sends a message to the controller (as when a packet doesn't match any flow-table rule); the virtualization layer must demultiplex the packet to the right tenant and identify the right virtual port and virtual switch. These mappings can be either one-to-one, as when installing a new rule or handling a packet sent to the controller, or one-to-many, as with link failures that affect multiple tenants. In general, these mappings are based on various combinations of input and output parameters.

Using a custom data structure with custom code to perform these mappings can easily become unwieldy, leading to software that's difficult to maintain and extend. More importantly, this custom software would need to scale across multiple physical controllers. Depending on the mappings' complexity, a single controller machine eventually hits a limit on how many mappings it can perform per second. To scale further, the controller can run on multiple physical servers. With custom code and in-memory data structures, distributing the state and logic in a consistent fashion becomes extremely difficult.

FlowVisor takes this custom data-structure approach.[7] Although FlowVisor doesn't provide full virtualization (it instead slices the network resources), it must still map an incoming packet to the appropriate slice. In some cases, hashing can help perform a fast lookup, but this isn't always possible. For example, for the one-to-many physical-to-virtual mappings, FlowVisor iterates over all tenants and, for each tenant, performs a lookup with the physical identifier.

Topology Mapping with a Database

Rather than using an in-memory data structure with custom mapping code, FlowN uses modern database technology. Both the topology descriptions and the assignment to physical resources lend themselves directly to a relational model. Each virtual topology is uniquely identified by some key, and consists of several nodes, interfaces, and links. Nodes contain the corresponding interfaces, and links connect one interface to another. FlowN describes the physical topology similarly. It maps each virtual node to one physical node; each virtual link becomes a path, that is, an ordered collection of physical links with a hop counter giving their order. FlowN stores mapping information in two tables. The first stores the node assignments, mapping each virtual node to one physical node. The second stores the path assignments, mapping each virtual link to a set of physical links, each with a hop-count number that increases in the path direction.
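The sketch below gives a hypothetical, simplified version of such a schema, built in SQLite purely so it runs on its own (the prototype described later uses MySQL with InnoDB); table and column names are illustrative only.

    # Hedged sketch of the relational schema: virtual topology tables plus the
    # node and path assignment tables that tie virtual components to physical ones.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE tenant_node (              -- virtual switches and VMs, per tenant
        tenant_id INTEGER, tenant_node_id INTEGER,
        PRIMARY KEY (tenant_id, tenant_node_id)
    );
    CREATE TABLE tenant_link (              -- virtual links between virtual interfaces
        tenant_id INTEGER, vlan_tag INTEGER,
        node_id1 INTEGER, node_port1 INTEGER,
        node_id2 INTEGER, node_port2 INTEGER
    );
    CREATE TABLE node_t2p_mapping (         -- each virtual node sits on one physical node
        tenant_id INTEGER, tenant_node_id INTEGER, physical_node_id INTEGER
    );
    CREATE TABLE link_t2p_mapping (         -- each virtual link becomes an ordered path
        tenant_id INTEGER, tenant_link_id INTEGER,
        physical_link_id INTEGER, hop_count INTEGER
    );
    """)
    print([r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")])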
Mapping between virtual and physical space then becomes a simple matter of issuing an SQL query. For example, for packets received at the controller for software processing, we must remove the encapsulation tag and modify the identifier for the switch and port on which the packet was received. We can realize this with the following query:

    SELECT L.Tenant_ID, L.node_ID1, L.node_port1
    FROM Tenant_Link L, Node_T2P_Mapping M
    WHERE VLAN_tag = x
      AND M.physical_node_ID = y
      AND M.tenant_ID = L.tenant_ID
      AND L.node_ID1 = M.tenant_node_ID

Other events are handled similarly, including lookups that yield multiple results, for instance when a physical switch fails and we must fail all virtual switches currently mapped to it. Although using a relational database reduces code complexity, the real advantage is that we can capitalize on years of research to achieve a durable and highly scalable system. Because we expect many more reads than writes in this database, we can run a master database server that handles all writes (for topology changes and adding new virtual networks, for example). FlowN then uses multiple slave servers to replicate the state. Because the mappings don't change often, the system can also use caching to optimize for mappings that occur frequently.

With a replicated database, we can partition the FlowN virtualization layer across multiple physical servers, colocated with the database replicas. Each physical server interfaces with a subset of the physical switches and performs the necessary physical-to-virtual mappings. These servers also run the controller applications for a subset of tenants and perform the associated virtual-to-physical mappings. In some cases, a tenant's virtual network might span physical switches handled by different controller servers. In this case, FlowN simply sends a message from one controller server (say, the one responsible for the physical switch) to another (the one running the tenant's controller application) over a TCP connection. More efficient algorithms for assigning tenants and switches to servers are an interesting area for future research.
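To illustrate this read-mostly lookup path, the self-contained Python sketch below runs the physical-to-virtual query against SQLite, with a plain dict standing in for the cache (the prototype described in the Evaluation section uses MySQL and memcached); the schema and seeded values are hypothetical.

    # Hedged sketch: check a cache first, fall back to the database on a miss,
    # then populate the cache for future packet-in events.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE tenant_link (tenant_id INT, vlan_tag INT, node_id1 INT, node_port1 INT);
    CREATE TABLE node_t2p_mapping (tenant_id INT, tenant_node_id INT, physical_node_id INT);
    INSERT INTO tenant_link VALUES (1, 100, 5, 2);   -- tenant 1, virtual switch 5, port 2
    INSERT INTO node_t2p_mapping VALUES (1, 5, 42);  -- virtual switch 5 lives on physical node 42
    """)

    PHYS_TO_VIRT = """
    SELECT L.tenant_id, L.node_id1, L.node_port1
    FROM tenant_link L, node_t2p_mapping M
    WHERE L.vlan_tag = ? AND M.physical_node_id = ?
      AND M.tenant_id = L.tenant_id AND L.node_id1 = M.tenant_node_id
    """

    cache = {}  # stand-in for a memcached client

    def map_packet_in(vlan_tag, physical_node_id):
        """Map a packet-in event to (tenant, virtual switch, virtual port)."""
        key = (vlan_tag, physical_node_id)
        if key not in cache:  # mappings change rarely, so cache hits dominate
            cache[key] = conn.execute(PHYS_TO_VIRT, key).fetchone()
        return cache[key]

    print(map_packet_in(vlan_tag=100, physical_node_id=42))  # (1, 5, 2)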
Evaluation

FlowN is a scalable and efficient SDN virtualization system. Here, we compare our FlowN prototype against unvirtualized NOX to determine the virtualization layer's overhead, and against FlowVisor to evaluate scalability and efficiency.

We built a FlowN prototype by extending the Python NOX version 1.0 OpenFlow controller.[3] This controller initially runs without any applications, instead providing an interface to add applications (that is, one per tenant) at runtime. Our algorithm for embedding new virtual networks is based on related work.[6] The embedder populates a MySQL database with the mappings between virtual and physical spaces. We implement all schemes using the InnoDB engine. For encapsulation, we use the VLAN header, pushing a new header at the ingress of the physical network, swapping labels as necessary in the core, and popping the header at the egress.

Alongside each database replica, we run a memcached instance that gets populated with the database lookup results and provides faster access should a lookup be repeated. We run our prototype on a virtual machine running Ubuntu LTS, given full resources from three processor cores at 3.30 GHz, 2 Gbytes of memory, and an SSD drive (Crucial m4, 64 Gbytes). We perform tests by simulating OpenFlow network operation on another virtual machine (running on an isolated processor with its own memory space), using a modified cbench[9] to generate packets with the correct encapsulation tags. We measure latency as the time between when cbench generates a packet-in event and when it receives a response. The virtual network controller for each network is a simple learning switch that operates on individual switches. In our setup, each new packet-in event triggers a rule installation, which the cbench application receives.
Related Work in Network Virtualization

Researchers have proposed network virtualization in various contexts. In early work, ATM switches were partitioned into switchlets to enable the dynamic creation of virtual networks.[1] In industry, router virtualization is already available in commercial routers ( swconfig-system-basics/html/virtual-router-config.html). This lets multiple service providers share the same physical infrastructure, as with the Virtual Network Infrastructure (VINI),[2] or gives a single provider a means to simplify management of one physical infrastructure shared among many services, as with ShadowNet.[3] More recently, researchers have introduced network virtualization solutions in the context of software-defined networks (SDNs) to complement the virtualized computing infrastructure in multitenant datacenters (see doc.cfm?t=pflowcontroller, or [4]). Although considerable work has been done on network virtualization in the SDN environment, current solutions differ from the approach we describe in the main text in how they split the address space and represent the virtual topology. With FlowN, we fully virtualize both the address space and the topology, with a scalable and efficient system.

Sidebar References

1. J. van der Merwe and I. Leslie, "Switchlets and Dynamic Virtual ATM Networks," Proc. IFIP/IEEE Int'l Symp. Integrated Network Management, IEEE, 1997.
2. A. Bavier et al., "In VINI Veritas: Realistic and Controlled Network Experimentation," Proc. ACM SIGCOMM 2006 Conf., ACM, 2006.
3. X. Chen, Z.M. Mao, and J. van der Merwe, "ShadowNet: A Platform for Rapid and Safe Network Evolution," Proc. Usenix Ann. Technical Conf., Usenix Assoc., 2009.
4. R. Sherwood et al., "Can the Production Network Be the Testbed?" Proc. 9th Conf. Operating Systems Design and Implementation (OSDI), Usenix Assoc., 2010.

[Figure 2. Latency versus virtual network count. The plot shows latency (ms) against the number of virtual networks for FlowN with memcached, FlowN without memcached, FlowVisor, and the unvirtualized controller. As the number of virtual networks the virtualization layer must support increases, the overhead, measured as the latency to process a control message, also increases. FlowN has higher overhead due to the database but scales better than FlowVisor.]

Although directly comparing FlowN and FlowVisor is difficult because they perform different functionality (see Figure 2), FlowN has a much slower increase in latency than FlowVisor as more virtual networks are added. FlowVisor has lower latency for small numbers of virtual networks, because the overhead of using a database, as compared with a custom data structure, dominates at these small scales. However, at larger sizes (roughly 100 virtual networks and greater), the scalability of FlowN's database approach wins out. The overall increase in latency over the unvirtualized case is less than 0.2 ms for the prototype that uses memcached and 0.3 ms for the one that doesn't.

As we move toward virtualized and, in many cases, multitenant computing infrastructures, the network must also be virtualized to complement the already virtualized servers. SDNs give us the tools to achieve this virtualization, but to date, many efforts have taken a stance on what interface, or abstraction, should be provided to network operators. Rather than take a stand on what abstraction users of these virtualized infrastructures want, we instead provide a flexible system for partitioning resources with FlowN, on which support for different abstractions can be built.
Importantly, the system's scalability is paramount. With FlowN, we capitalize on modern database technology and use ideas from container-based virtualization to let the network virtualization layer scale beyond the limits of today's virtualization technology. In doing so, we move a step closer to cloud computing infrastructures in which tenants have greater control over their own networks.
References

1. C. Wilson et al., "Better Never than Late: Meeting Deadlines in Datacenter Networks," Proc. ACM SIGCOMM 2011 Conf., ACM, 2011.
2. N. McKeown et al., "OpenFlow: Enabling Innovation in Campus Networks," ACM SIGCOMM Computer Comm. Rev., vol. 38, no. 2, 2008.
3. N. Gude et al., "NOX: Towards an Operating System for Networks," ACM SIGCOMM Computer Comm. Rev., vol. 38, no. 3, 2008.
4. M.R. Nascimento et al., "Virtual Routers as a Service: The RouteFlow Approach Leveraging Software-Defined Networks," Proc. Conf. Future Internet Technologies (CFI), ACM, 2011.
5. K. Webb, A. Snoeren, and K. Yocum, "Topology Switching for Data Center Networks," Proc. 11th Usenix Conf. Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE), Usenix Assoc., 2011.
6. N. Chowdhury, M. Rahman, and R. Boutaba, "Virtual Network Embedding with Coordinated Node and Link Mapping," Proc. 28th IEEE Conf. Computer Communications (INFOCOM 09), IEEE, 2009.
7. R. Sherwood et al., "Can the Production Network Be the Testbed?" Proc. 9th Conf. Operating Systems Design and Implementation (OSDI), Usenix Assoc., 2010.
8. "OpenFlow Switch Specification 1.3.0," Open Networking Foundation, Apr. 2012; images/stories/downloads/specification/openflow-spec-v1.3.0.pdf.
9. "OpenFlow Operations Per Second Controller Benchmark," Mar. 2011; wk/index.php/oflops.

Dmitry Drutskoy is a computer scientist at Elysium Digital, an IP litigation consulting and digital forensics company in Boston. His research interests are in computer networks, databases, and 3D graphics. Drutskoy has an MSE in computer science from Princeton University.

Eric Keller is an assistant professor at the University of Colorado. His overall research aim is to create a secure and reliable end-to-end infrastructure for dependable networked services, using a cross-layer approach spanning networking, computer architecture, operating systems, and distributed systems. Keller has a PhD in electrical engineering from Princeton University.

Jennifer Rexford is a professor in the computer science department at Princeton University. Her research interests include Internet routing, network measurement, and network management, with the larger goal of making data networks easier to design, understand, and manage. Rexford has a PhD in computer science and electrical engineering from the University of Michigan. She is the coauthor of Web Protocols and Practice (Addison-Wesley, 2001) and a member of IEEE.