On Controller Performance in Software-Defined Networks
Amin Tootoonchian (University of Toronto/ICSI), Martin Casado (Nicira Networks), Sergey Gorbunov (University of Toronto), Rob Sherwood (Big Switch Networks), Yashar Ganjali (University of Toronto)

1 Introduction

Network architectures in which the control plane is decoupled from the data plane have been growing in popularity. Among the main arguments for this approach is that it provides a more structured software environment for developing network-wide abstractions while potentially simplifying the data plane. As has been adopted elsewhere [11], we refer to this split architecture as Software-Defined Networking (SDN). While it has been argued that SDN is suitable for some deployment environments (such as homes [17, 13], data centers [1], and the enterprise [5]), delegating control to a remote system has raised a number of questions about the control-plane scaling implications of such an approach. Two of the most often voiced concerns are: (a) how fast can the controller respond to data path requests? and (b) how many data path requests can it handle per second?

There are some references to the performance of SDN systems in the literature [16, 5, 3]. For example, an oft-cited study shows that a popular network controller (NOX) handles around 30k flow initiation events per second (throughout this paper we use flow initiation and request interchangeably) while maintaining a sub-10ms flow install time [14]. Unfortunately, recent measurements of some deployment environments suggest that these numbers are far from sufficient. For example, Kandula et al. [9] found that a 1500-server cluster has a median flow arrival rate of 100k flows per second. Also, Benson et al. [2] show that a network with 100 switches can have spikes of 10M flow arrivals per second in the worst case. In addition, the 10ms flow setup delay of an SDN controller would add a 10% delay to the majority of flows (which are short-lived) in such a network.

This disconnect between relatively poor controller performance and high network demands has motivated a spate of recent work (e.g., [6, 18]) to address perceived architectural inefficiencies. However, there has been no in-depth study of the performance of a traditional SDN controller; most published results were gathered from systems that were never optimized for performance. To underscore this point, as we describe in more detail below, we were able to improve the performance of NOX, an open-source controller for OpenFlow networks, by more than 30 times.

The goal of this paper is therefore to offer a better understanding of controller performance in the SDN architecture. The specific contributions are:

- We present NOX-MT, a publicly available multi-threaded successor of NOX [8]. The purpose of NOX-MT is to establish a new lower bound on the maximum throughput. Unlike previous studies and implementations that were not tuned for performance, NOX-MT uses well-known optimization techniques (e.g., I/O batching) to improve the baseline performance. These optimizations let NOX-MT outperform NOX by a factor of 33 on a server with two quad-core 2GHz processors.

- We design a series of flow-based benchmarks embodied in our tool, cbench, which we make freely available for others to use. Cbench emulates any number of OpenFlow switches to measure different performance aspects of the controller, including the minimum and maximum controller response time, the maximum throughput, and the throughput and latency of the controller with a bounded number of packets on the fly.

- We present a study of SDN controller performance using four publicly available OpenFlow controllers: NOX, NOX-MT, Beacon, and Maestro [4]. We consider NOX as the baseline for our performance study since it has been used in several previous papers [14, 18, 6].

2 NOX-MT

NOX, whose measured performance motivated several recent proposals on improving control-plane efficiency, has a very low flow setup throughput and a large flow setup latency. Fortunately, this is not an intrinsic limitation of the SDN control plane: NOX is simply not optimized for performance and is single-threaded. We present NOX-MT, a slightly modified multi-threaded successor of NOX, to show that with simple tweaks we were able to significantly improve NOX's throughput and response time. The techniques we used to optimize NOX are well known: I/O batching to minimize the overhead of I/O, porting the I/O handling harness to the Boost Asynchronous I/O (ASIO) library (which simplifies multi-threaded operation), and using a fast multiprocessor-aware malloc implementation that scales well on multi-core machines.

Despite these modifications, NOX-MT is far from perfect. It does not address many of NOX's performance deficiencies, including but not limited to: heavy use of dynamic memory allocation and redundant memory copies on a per-request basis, and the use of locking where robust wait-free alternatives exist. Addressing these issues would significantly improve NOX's performance, but they require fundamental changes to the NOX code base, so we leave them to future work. To the best of our knowledge, NOX-MT was the first effort to enhance controller performance and it motivated other controllers to improve. The experiments presented in this paper were performed in May; since then, all controllers studied in this paper have changed significantly and have different performance characteristics. We emphasize that our goal in this paper is to show that SDN controllers can be optimized to be very fast.
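To make the I/O-batching point concrete, the following sketch shows the general pattern an optimized controller can use on a control connection: drain the socket once per event-loop iteration, parse every complete OpenFlow message sitting in the buffer, and send all replies back in one write. This is illustrative Python rather than NOX-MT's C++ code; the handler callback and buffer sizes are assumptions, and only the 8-byte OpenFlow header layout (version, type, length, xid) is taken from the protocol.

```python
# Illustrative sketch of I/O batching on an OpenFlow control connection.
# Not NOX-MT code: the handler and buffer sizes are assumptions; only the
# standard 8-byte OpenFlow header layout (version, type, length, xid) is real.
import socket
import struct

OFP_HEADER = struct.Struct("!BBHI")  # version, type, length, xid

def drain_batch(sock: socket.socket, pending: bytearray, handle_message) -> bytes:
    """Read whatever is available, parse every complete message, return the batched replies."""
    try:
        chunk = sock.recv(65536)        # one recv() per event-loop iteration, not one per message
    except BlockingIOError:
        chunk = b""
    pending.extend(chunk)

    replies = bytearray()
    offset = 0
    while len(pending) - offset >= OFP_HEADER.size:
        _version, _type, length, _xid = OFP_HEADER.unpack_from(pending, offset)
        if length < OFP_HEADER.size:
            break                        # malformed header: stop rather than loop forever
        if len(pending) - offset < length:
            break                        # trailing partial message: wait for more bytes
        replies += handle_message(bytes(pending[offset:offset + length]))
        offset += length
    del pending[:offset]                 # keep only the unparsed tail
    return bytes(replies)                # caller sends the whole batch with a single sendall()
```

A controller worker would call drain_batch whenever its socket becomes readable and then issue one sendall() for the returned replies, amortizing the per-message system-call overhead over the whole batch.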
3 Experiment Setup

In an effort to quantify controller performance, we create a custom tool, cbench [12], to measure the number of flow setups per second that a controller can handle. In SDN, the OpenFlow controller must set up and tear down flow-level forwarding state in OpenFlow switches. (While the specifics of this description use OpenFlow terminology, the flow setup operation is common to all flow-based network architectures, e.g., setting up virtual circuits in ATM, configuring labels in MPLS, or physical circuits in optical networks.) This flow setup process can happen statically before packets arrive (proactively) or dynamically as part of the next-hop lookup process (reactively). Reactive flow setups are particularly sensitive because they add latency to the first packet in a flow. Once set up, the flow forwarding state remains cached on the OpenFlow switch so that this process is not repeated for subsequent packets in the same flow. The OpenFlow controller also communicates how long to cache the state: indefinitely, for a fixed timeout, or until a period of inactivity elapses. We choose to focus on the flow setup process both because it is integral to SDN and because it is perceived to be the likeliest source of performance bottlenecks.

Our tool, cbench [12], measures various aspects of performance related to flow setup. Cbench emulates a configurable number of OpenFlow switches that all communicate with a single OpenFlow controller. Each emulated switch sends a configurable number of new-flow (OpenFlow packet-in) messages to the OpenFlow controller, waits for the corresponding flow setup (OpenFlow flow-mod or packet-out) responses, and records the difference in time between request and response. Cbench supports two modes of operation: latency and throughput mode. In latency mode, each emulated switch maintains exactly one outstanding new-flow request, waiting for a response before soliciting the next request; latency mode therefore measures the OpenFlow controller's request processing time under low-load conditions. By contrast, in throughput mode, each switch maintains as many outstanding requests as buffering will allow, that is, until the local TCP send buffer blocks; thus, throughput mode measures the maximum flow setup rate that a controller can sustain. Cbench also supports a hybrid mode with n new-flow requests outstanding, to explore the space between these two extremes.

We analyzed cbench to ensure that it is not a bottleneck in our experiments. For that, we instrumented cbench to report the average number of requests on the fly for different experiments; cbench also reports average throughput and response time. According to Little's law (L = λW) [10], the average number of outstanding requests must match the product of the system throughput and the average response time. Throughout our experiments these numbers were in agreement; the slight differences are an artifact of taking the average over the samples collected in each run.

Using cbench, we evaluated the flow setup throughput and latency of four publicly available OpenFlow controllers: (a) NOX [8], a single-threaded C++ OpenFlow controller adopted by both industry and academia; (b) NOX-MT, the optimized multi-threaded successor of NOX developed and presented in this paper; (c) Maestro [4], a multi-threaded Java-based controller from Rice University; and (d) Beacon, a multi-threaded Java-based controller from Stanford University and Big Switch Networks. (Through private correspondence with Beacon's author, we were informed that Beacon performs twice as well on a 64-bit OS; the results reported in this paper are for 32-bit runs, and we are not aware of the reason for this gap in performance.) We used the latest version of each controller available as of May.

In all the experiments, each controller runs the L2 switching application provided by the controller. (We chose L2 switching for our evaluation for two primary reasons. First, it provides a lower bound for lookup cost, since the full forwarding decision can be accomplished with a hash followed by a single lookup; the test is therefore not heavily affected by lookup overhead, as it could be with something like a longest-prefix match implemented in software. Second, basic switching has been implemented in all the controllers we tested.) For each switch on the path, the switch application performs MAC address learning: each packet is forwarded out of the last port on which traffic from the destination MAC address was seen, and packets with unknown destinations are flooded. In NOX, for each switch, the mapping between the (MAC address, switch) tuple and the port number is stored in a hash table. The switch application has a mostly read-only workload: only requests with newly observed source MAC addresses trigger an insert or update in the hash table, and the number of such events is proportional to the product of the number of hosts in the network and the number of switches.
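The learning-switch logic described above fits in a few lines. The sketch below is a simplified illustration in Python; the PacketIn fields and the flood pseudo-port are assumptions rather than the NOX API, but it shows the per-switch (MAC, port) table and the flood-on-miss behavior.

```python
# Minimal sketch of the L2 learning-switch application described above
# (not the NOX API; the PacketIn fields and FLOOD pseudo-port are assumptions).
from dataclasses import dataclass
from typing import Dict, Tuple

FLOOD = -1  # pseudo-port meaning "send out all ports except the ingress port"

@dataclass
class PacketIn:
    switch_id: int
    in_port: int
    src_mac: str
    dst_mac: str

class LearningSwitchApp:
    def __init__(self) -> None:
        # (switch, MAC) -> port; this is the mostly read-only table referred to in the text
        self.mac_table: Dict[Tuple[int, str], int] = {}

    def handle_packet_in(self, ev: PacketIn) -> int:
        # Learn: remember the last port on which traffic from src_mac was seen.
        self.mac_table[(ev.switch_id, ev.src_mac)] = ev.in_port
        # Forward: known destinations go out the learned port, unknown ones are flooded.
        return self.mac_table.get((ev.switch_id, ev.dst_mac), FLOOD)

app = LearningSwitchApp()
out_port = app.handle_packet_in(PacketIn(switch_id=1, in_port=3,
                                         src_mac="00:00:00:00:00:0a",
                                         dst_mac="00:00:00:00:00:0b"))  # FLOOD: dst not learned yet
```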
To minimize interference, we run the controller and cbench on two separate servers (8 2GHz and GHz CPU cores respectively, both with 4GB of DDR2 RAM). All servers run Debian Squeeze with a Linux bigmem kernel, gcc 4.4.5, GNU libc 2.11, GNU libc++ 3, Boost 1.42, Sun Java 1.6, and TCMalloc 1.5. NOX and NOX-MT are compiled without Python support and with debugging disabled; all controllers are run with logging disabled and no verbose output. For NOX, NOX-MT, and Maestro, threads are bound to distinct CPUs. We use the TCMalloc [7] malloc implementation since it provides a faster alternative to GNU libc's malloc with better scalability in multi-threaded programs. Unless otherwise noted, we run the experiments with default sizes for the Linux networking stack buffers as well as the NIC drivers' ring buffers; CUBIC is the default congestion control algorithm on Linux. Each server has two Gigabit ports directly connected to the other server, and the Gigabit links are teamed together to provide the 2Gbps of control bandwidth required for some experiments. Throughout our experiments, cbench's CPU utilization is consistently less than 50%. We occasionally run multiple parallel instances of cbench on different processors to verify cbench's fairness in serving different emulated switches (sockets) as well as its accuracy.

Each test consists of 4 loops, each lasting 5 seconds. The first five seconds (the first loop) are treated as controller warm-up and their results are discarded. Each test uses 100k unique MAC addresses (representing 100k emulated end hosts). For experiments with a fixed number of switches, we chose to present the results for 32 emulated switches because we do not expect a large number of switches to be stressing the network simultaneously. To maximize the stress on the controller and to ensure, to the extent possible, that bandwidth is not the bottleneck, we use 82-byte OpenFlow packet-in messages (referred to as requests in the graphs). All our estimates of bandwidth usage are based on this size: throughout the paper we map 1.45M requests per second to 1Gbps of control bandwidth, because 1.45 Mreq/sec × 82 bytes/req × 8 bits/byte ≈ 951.2 Mbps, which, assuming that the majority of Ethernet frames are MTU-sized (to account for Ethernet+IP+TCP overhead), roughly corresponds to 1Gbps on the wire.
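The two sanity checks used in this section, the control-bandwidth mapping for 82-byte requests and the Little's-law consistency check, come down to a few lines of arithmetic. The short script below uses only figures quoted in this paper.

```python
# Arithmetic behind the sanity checks in this section, using only numbers quoted in the paper.

# Control-bandwidth estimate: 82-byte packet-in messages at 1.45M requests per second.
requests_per_sec = 1.45e6
request_bytes = 82
payload_mbps = requests_per_sec * request_bytes * 8 / 1e6
print(f"OpenFlow payload: {payload_mbps:.1f} Mbps")   # ~951.2 Mbps, i.e. roughly 1Gbps on the wire

# Little's law (L = lambda * W): outstanding requests = throughput * average response time.
def outstanding_requests(throughput_rps: float, avg_response_s: float) -> float:
    return throughput_rps * avg_response_s

# Section 6 quotes 1.6M requests/sec at a 2ms average response time on eight cores,
# which by Little's law implies roughly 3,200 requests in flight between cbench and the controller.
print(f"implied requests on the fly: {outstanding_requests(1.6e6, 0.002):.0f}")
```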
4 Controller Throughput

An individual controller's throughput is an important factor in deciding the overall number of controllers required to handle the network control load. Our focus in this section is to study the maximum throughput of the system in various settings.

Maximum throughput: Figure 1 shows the average maximum throughput of the different controllers with different numbers of threads. The results suggest that: (a) NOX-MT performs significantly better than the other controllers; it saturates 1Gbps of control bandwidth with four 2GHz CPU cores. (b) As expected, all the multi-threaded controllers achieve near-linear scalability with the number of threads (cores), because the controllers' workload is mostly read-only, minimizing the amount of required serialization.

Figure 1: Average maximum throughput achieved with different numbers of threads. We use 32 emulated switches and 100k unique MAC addresses per switch. NOX-MT saturates 1Gbps (1.45Mreq/sec) of control bandwidth using four 2GHz CPU cores. Since NOX is single-threaded, it has only a single point in this and similar graphs.

Relation with the number of active switches: Ideally, a controller's aggregate throughput should not be affected by the number of switches connected to it. However, increased contention across threads, TCP dynamics, and task scheduling overhead within the controller can all degrade performance when a large number of switches are highly active. To study the impact of the number of switches on controller performance, we measure the average maximum throughput with different numbers of switches and threads in Figure 2. We observe that adding more threads beyond the number of active switches does not improve throughput. Also, increasing the number of switches beyond a threshold reduces the overall throughput, because: (a) I/O handling overhead increases, (b) contention on the task queue and other shared resources increases, and (c) I/O and job batching become less effective.
We note that the number of switches presented in our experiments only reflects highly active switches (i.e., switches producing an extremely large number of requests per second). In a typical network it is very unlikely that all switches are highly active at the same time, so each controller should be able to manage a far larger number of switches.

Relation with the load level: Cbench's throughput mode effectively keeps the pipe between itself and the controller full at all times. However, since there are large buffers across all layers of the networking stack (e.g., the network adapter's ring buffer, the network interface's send and receive queues, TCP buffers, etc.), this pipe is quite large. Our experiments verify that the average maximum throughput with 2^12 outstanding requests is very close to that with an unlimited number of outstanding requests (see Figure 3). At the same throughput, it follows from Little's law that doubling the number of requests in the system doubles the response time. In our experiments, the average delay with unlimited outstanding requests is almost an order of magnitude larger than with limited outstanding requests, even though both achieve the same level of throughput. We study the controller response time corresponding to different load levels in Section 5.

Effect of write-intensive workload: Write-intensive workloads increase contention in the network control applications. For the switch control application, a large number of unique source MAC addresses results in a write-intensive workload. As shown in Figure 4, both Maestro and Beacon are significantly affected by this workload. NOX-MT, however, does not exhibit a similar behavior, because NOX-MT's switch application minimizes contention in such scenarios by partitioning the network's MAC address table among a pool of hash tables selected by the hash of the MAC address.

Figure 4: Average maximum throughput for various levels of write-intensive workloads with 32 switches and 4 threads.
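The partitioning trick described above, a MAC table split across a pool of hash tables selected by a hash of the MAC address so that concurrent writes rarely touch the same shard, can be sketched as follows. This is illustrative Python rather than NOX-MT's C++ implementation; the shard count and the per-shard locking discipline are assumptions.

```python
# Illustrative sketch of a sharded MAC table (not the NOX-MT C++ code): writes for
# different MAC addresses mostly land on different shards, so threads rarely contend.
import threading
from typing import Dict, List, Optional, Tuple

class ShardedMacTable:
    def __init__(self, num_shards: int = 16) -> None:   # shard count is an assumption
        self.shards: List[Dict[Tuple[int, str], int]] = [{} for _ in range(num_shards)]
        self.locks = [threading.Lock() for _ in range(num_shards)]

    def _shard(self, mac: str) -> int:
        return hash(mac) % len(self.shards)              # shard selected by the hash of the MAC

    def learn(self, switch_id: int, mac: str, port: int) -> None:
        i = self._shard(mac)
        with self.locks[i]:                              # only this shard is locked for the write
            self.shards[i][(switch_id, mac)] = port

    def lookup(self, switch_id: int, mac: str) -> Optional[int]:
        i = self._shard(mac)
        with self.locks[i]:
            return self.shards[i].get((switch_id, mac))
```

With a single shared table and one lock, every newly observed source MAC address serializes all worker threads; selecting a shard by the MAC hash spreads those writes out, which is consistent with NOX-MT's throughput in Figure 4 being largely unaffected by the write-intensive workload.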
5 Controller Response Time

In a software-defined network where flow setup is performed reactively, controller response time directly affects flow completion times. In this section, we present benchmarks to measure the minimum (least-load) and maximum (maximum-load) response time of SDN controllers. We then study the relation between the load level and response time, as well as between the number of switches and response time.

Minimum response time: To measure the minimum control-plane response time, we constrain the number of packets on the fly to exactly one. This minimum response time includes the controller's processing time as well as two traversals of the networking stacks on both the controller and cbench sides. The average response times of all controllers are between 100 and 150 microseconds.

Maximum response time: The maximum response time for each controller is observed when the number of packets on the fly is not bounded (i.e., when the benchmarker exhausts all the buffers between the emulated switches and the controller). Since NOX-MT has the highest throughput, it has the lowest response time compared to the others. As we see in Table 1, the average number of packets on the fly across these experiments is inversely proportional to the controller throughput (in accordance with Little's law).

Relation with the load level: To better understand the relation between controller load and response time, we plot the response time CDFs with the load level fixed at 2^12 requests on the fly (see Figure 5) and the response time varying with the load level (see Figure 6). We find that, for the same workload, adding more threads decreases the response time. Also, doubling the number of outstanding requests doubles the response time, but does not significantly affect the throughput (see Section 4).

Relation with the number of active switches: Varying the number of active switches (see Table 2), we observe the same pattern for delay as we observed for throughput in Figure 2. Adding more CPUs beyond the number of switches does not improve latency, and serving a far larger number of switches than available CPUs results in a noticeable increase in the response time.
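Since cbench records the request-to-response time of every flow setup, the CDFs in Figures 5 and 6 reduce to sorting those samples. A minimal sketch, with made-up latencies as the example input:

```python
# Minimal sketch: turn per-request latencies (as recorded by cbench) into CDF points.
from typing import List, Tuple

def latency_cdf(latencies_ms: List[float]) -> List[Tuple[float, float]]:
    """Return (latency, fraction of requests completed within that latency) pairs."""
    ordered = sorted(latencies_ms)
    n = len(ordered)
    return [(x, (i + 1) / n) for i, x in enumerate(ordered)]

# Example with made-up samples; the entry whose fraction is 0.5 is the median response time.
print(latency_cdf([2.1, 0.9, 1.4, 3.0]))
```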
Figure 2: Average maximum throughput with different numbers of switches; panels (a) NOX-MT, (b) Beacon, (c) Maestro. NOX-MT shows nearly identical average maximum throughput for 16, 32, and 64 emulated switches. With 256 emulated switches, the performance of all three controllers degrades.

Figure 3: Average maximum throughput with different numbers of threads and limits on the maximum overall number of requests on the fly; panels (a) NOX-MT, (b) Beacon, (c) Maestro. All three controllers achieve close to their maximum throughput even with a limited number of requests on the fly.

Table 1: Worst-case average response time and standard deviation (milliseconds) for NOX, NOX-MT, Beacon, and Maestro with various numbers of threads, 32 switches, and an unlimited number of outstanding flow setup requests.

Figure 5: Response time CDF for the different controllers with 32 switches and 2^12 maximum requests on the fly; panels (a) single-threaded, (b) four threads, (c) eight threads. NOX-MT has the lowest response time for 1, 4, and 8 threads.

Table 2: Response time (milliseconds) for NOX-MT, Beacon, and Maestro, varying the number of switches, for runs with 4 threads and 2^12 requests on the fly.
Figure 6: Response time CDF for various maximum numbers of requests on the fly with 32 switches and four threads; panels (a) NOX-MT, (b) Beacon, (c) Maestro.

6 Concluding Remarks

We present NOX-MT, which establishes a new lower bound on the maximum throughput of SDN controllers. Our microbenchmarks demonstrate that existing controllers all perform significantly better than the current literature assumes. On an eight-core machine with 2GHz CPUs, NOX-MT handles 1.6 million requests per second with an average response time of 2ms.

We emphasize that controller responsiveness is the primary factor in deciding whether additional controllers should be deployed. We are not, however, suggesting that a single physical controller is enough to manage a sizeable network: high availability and maintaining low response times are among the reasons a network needs multiple controllers. However, given the low frequency of failures and changes to the network topology, it seems feasible to maintain a consistent, logically centralized view of the network across controllers (e.g., [15]).

Finally, we note that understanding overall SDN performance remains an open research problem. System-wide performance is likely a complex function of topology, workload, equipment, and forwarding complexity. The single-controller microbenchmarks presented in this paper are just a first step towards understanding the performance implications of SDN.

References

[1] AL-FARES, M., LOUKISSAS, A., AND VAHDAT, A. A scalable, commodity data center network architecture. In Proceedings of the ACM SIGCOMM 2008 Conference on Data Communication (2008), ACM.

[2] BENSON, T., AKELLA, A., AND MALTZ, D. A. Network traffic characteristics of data centers in the wild. In Proceedings of IMC 2010 (2010), ACM.

[3] CAESAR, M., CALDWELL, D., FEAMSTER, N., REXFORD, J., SHAIKH, A., AND VAN DER MERWE, J. Design and implementation of a routing control platform. In Proceedings of USENIX NSDI 2005 (Berkeley, CA, USA, 2005), USENIX Association.

[4] CAI, Z., COX, A. L., AND NG, T. S. E. Maestro: A system for scalable OpenFlow control. Tech. Rep. TR10-11, Rice University, Department of Computer Science, December.

[5] CASADO, M., FREEDMAN, M. J., PETTIT, J., LUO, J., MCKEOWN, N., AND SHENKER, S. Ethane: Taking control of the enterprise. SIGCOMM CCR 37, 4 (2007).

[6] CURTIS, A. R., MOGUL, J. C., TOURRILHES, J., YALAGANDULA, P., SHARMA, P., AND BANERJEE, S. DevoFlow: Scaling flow management for high-performance networks. In Proceedings of ACM SIGCOMM 2011 (2011), ACM.

[7] GOOGLE. Thread-Caching Malloc. goog-perftools.sourceforge.net/doc/tcmalloc.html.

[8] GUDE, N., KOPONEN, T., PETTIT, J., PFAFF, B., CASADO, M., MCKEOWN, N., AND SHENKER, S. NOX: Towards an operating system for networks. SIGCOMM CCR 38, 3 (2008).

[9] KANDULA, S., SENGUPTA, S., GREENBERG, A., PATEL, P., AND CHAIKEN, R. The nature of data center traffic: Measurements & analysis. In Proceedings of IMC 2009 (2009), ACM.

[10] LEON-GARCIA, A. Probability, Statistics, and Random Processes for Electrical Engineering, 3rd ed. Pearson/Prentice Hall, Upper Saddle River, NJ.

[11]

[12] SHERWOOD, R., AND YAP, K.-K. Cbench: An OpenFlow Controller Benchmarker. wk/index.php/oflops.

[13] SUNDARESAN, S., DE DONATO, W., FEAMSTER, N., TEIXEIRA, R., CRAWFORD, S., AND PESCAPÈ, A. Broadband internet performance: A view from the gateway. In Proceedings of ACM SIGCOMM 2011 (Aug. 2011), ACM.
[14] TAVAKOLI, A., CASADO, M., KOPONEN, T., AND SHENKER, S. Applying NOX to the datacenter. In Proceedings of HotNets-VIII (2009).

[15] TOOTOONCHIAN, A., AND GANJALI, Y. HyperFlow: A distributed control plane for OpenFlow. In Proceedings of the 2010 Internet Network Management Conference on Research on Enterprise Networking (Berkeley, CA, USA, 2010), USENIX Association.

[16] YAN, H., MALTZ, D. A., NG, T. S. E., GOGINENI, H., ZHANG, H., AND CAI, Z. Tesseract: A 4D network control plane. In Proceedings of USENIX NSDI 2007 (2007).

[17] YIAKOUMIS, Y., YAP, K.-K., KATTI, S., PARULKAR, G., AND MCKEOWN, N. Slicing home networks. In Proceedings of the 2nd ACM SIGCOMM Workshop on Home Networks (2011), ACM.

[18] YU, M., REXFORD, J., FREEDMAN, M. J., AND WANG, J. Scalable flow-based networking with DIFANE. In Proceedings of ACM SIGCOMM 2010 (2010), ACM.