High-Fidelity Switch Models for Software-Defined Network Emulation
Danny Yuxing Huang, Kenneth Yocum, and Alex C. Snoeren
Department of Computer Science and Engineering, University of California, San Diego
{dhuang, kyocum, ...}

ABSTRACT

Software-defined networks (SDNs) depart from traditional network architectures by explicitly allowing third-party software access to the network's control plane. Thus, SDN protocols such as OpenFlow give network operators the ability to innovate by authoring or buying network controller software independent of the hardware. However, this split design can make planning and designing large SDNs even more challenging than traditional networks. While existing network emulators allow operators to ascertain the behavior of traditional networks when subjected to a given workload, we find that current approaches fail to account for significant vendor-specific artifacts in the SDN switch control path. We benchmark OpenFlow-enabled switches from three vendors and illustrate how differences in their implementation dramatically impact latency and throughput. We present a measurement methodology and emulator extension to reproduce these control-path performance artifacts, restoring the fidelity of emulation.

Categories and Subject Descriptors

C.4 [Performance of Systems]: Modeling Techniques; Measurement Techniques; C.2 [Computer Communication Networks]

Keywords

SDN; Software-Defined Networking; OpenFlow; Modeling

1. INTRODUCTION

The performance of applications and services running in a network is sensitive to changing configurations, protocols, software, or hardware. Thus, network operators and architects often employ a variety of tools to establish the performance and correctness of their proposed software and hardware designs before deployment. Network emulation has been an important tool for this task, allowing network operators to deploy the same end-system applications and protocols on the emulated network as on the real hardware [8, 13, 15]. In these environments, the interior of the network comprises a real-time link emulator that faithfully routes and switches packets according to a given specification, accurately accounting for details such as buffer lengths and AQM settings. In general, these emulators do not attempt to reproduce vendor-specific details of particular pieces of hardware, as end-to-end performance is typically dominated by implementation-agnostic features like routing (choice of links), link contention, and end-system protocols (application or transport layer). However, software-defined networks (SDNs) challenge this assumption. In traditional switching hardware the control plane rarely affects data plane performance; it is reasonable to assume the switch forwards between ports at line rate for all flows.
In contrast, an OpenFlow-enabled switch outsources its control plane functions to an external controller. New flows require a set-up process that typically involves the controller and modifications to the internal hardware flow tables (e.g., ternary content-addressable memory, or TCAM). Moreover, this process depends upon the particular hardware and software architecture of the switch, including its CPU, TCAM type and size, and firmware. Thus, while the maturity of traditional network switch designs has allowed emulators to make simplifying assumptions, the relatively youthful state of SDN hardware implies that network emulators may not be able to make similar generalizations.

We argue that high-fidelity, vendor-specific emulation of SDN (specifically, OpenFlow) switches is required to provide accurate and repeatable experiments. In particular, we show that architectural choices made by switch vendors impact the end-to-end performance of latency-sensitive applications, such as webpage load times, database transactions, and some scientific computations. State-of-the-art SDN emulation depends upon the use of multiple software switches, such as Open vSwitch (OVS), to emulate a network of OpenFlow devices [7, 11]. While these systems allow network operators to test innovative OpenFlow controller software, our results show that, in general, it is impossible for an operator to use these emulators to determine how applications will perform over a network consisting of OpenFlow switches produced by a particular (set of) vendor(s). For instance, different vendors employ distinct TCAM technologies that affect the size of the hardware flow table (the maximum number of entries may even change based on rule complexity [2]) and the rate at which rules may be installed. Switch firmware may buffer and/or delay flow installations, or even employ a software flow table with various policies regarding how to promote flows to the hardware table. For example, by default OVS allows 65,000 rules with nearly instantaneous rule installation; in contrast, a Fulcrum Monaco switch in our testbed has a 511-entry flow table that can insert at most 42 new rules a second.

To characterize how these differences affect application performance, we perform an empirical investigation of three vendors' OpenFlow switches. We find two general ways in which switch design affects network performance: control path delays and flow table designs.
Switch                       Firmware                  CPU       Link      Flow table entries   Flow setup rate   Software table
HP ProCurve J9451A           K...                      ... MHz   1 Gbps    ...                  ... flows/sec     Yes
Fulcrum Monaco Reference     Open vSwitch ...          ... MHz   10 Gbps   511                  42 flows/sec      No
Quanta LB4G                  indigo-1.0-web-lb4g-rc3   825 MHz   1 Gbps    ...                  ... flows/sec     Yes
Open vSwitch on Xeon X3210   version ...               ... GHz   1 Gbps    65,000               ... flows/sec     Only

Table 1: The three vendor switches studied in this work, as well as our host-based OVS SDN emulator.

Section 2 describes how these two classes of artifacts impact network performance and quantifies the error introduced by unmodified OVS emulations. Section 3 describes a methodology to thumbprint important hardware switch behavior in a black-box fashion and replicate it in an emulator. Even with our simple model, we find that an appropriately calibrated emulation infrastructure can approximate the behavior of the switches we study.

2. MOTIVATION

We focus on two particular switch artifacts: control path delays and flow table characteristics. Our treatment is not exhaustive; there are likely to be other controller/switch interactions that highlight vendor differences beyond those discussed here. For instance, OpenFlow's pull-based flow monitoring is a well-known source of control plane overhead, directly affecting rule installation rates [4]. We study performance differences across the switch designs from three vendors listed in Table 1. This particular set of switches represents distinct vendors, chipsets, link rates, OpenFlow implementations, and hardware/software flow table sizes. We also perform our experiments on a host-based switch running an in-kernel Open vSwitch module, a setup similar to that used in Mininet to emulate SDN switches [7]. As a host-based switch, OVS has significant advantages in terms of management CPU and flow table memory (although our experiments switch packets between only two ports).

2.1 Variations in control path delays

We begin by studying the impact of control path delays, i.e., the sequence of events that occurs when a new flow arrives and leads to the installation of a new forwarding rule in the switch. In general, OpenFlow switches have two forwarding modes. Packets of flows that match existing entries in the hardware flow table are forwarded at line rate. Packets from unmatched flows, on the other hand, are sent along the control path, which is often orders of magnitude slower. This control path consists of three steps: First, the switch moves (some of) the packet from the ingress line card to the management CPU and sends an OpenFlow packet_in event to the controller. The controller subsequently builds an appropriate packet-matching rule and sends a flow_mod event back to the switch. The switch processes the flow_mod event by installing the new rule into the forwarding table and then either sending the packet out the designated egress port or dropping it, in accordance with the newly installed rule. The first and last steps are potentially influenced by the hardware and firmware in the switch, independent of the delays introduced by the controller.

To test whether or not this is the case with current switches, we compare the performance of Redis, a popular open-source key-value store, when run across single-switch networks consisting of each of our three switches as well as an OVS emulator. Redis allows us to easily vary the number and size of the TCP connections it employs by changing the size of the client pool and requested value sizes, respectively.
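Before turning to the testbed, the three-step control path above can be made concrete with a minimal reactive handler written in the style of POX's l2_learning component. This is an illustrative sketch, not the stock module the experiments actually use; the class name and launch function are ours.

```python
# Minimal reactive handler in the style of POX's l2_learning component.
# Illustrative sketch only; the experiments use POX's stock L2-learning
# module, which adds more complete MAC learning and flood handling.
from pox.core import core
import pox.openflow.libopenflow_01 as of

class ExactMatchForwarder(object):
    def __init__(self):
        core.openflow.addListeners(self)
        self.mac_to_port = {}              # learned MAC address -> switch port

    def _handle_PacketIn(self, event):
        # Step 1 of the control path has already happened: the switch sent
        # the unmatched packet to us as a packet_in.
        packet = event.parsed
        self.mac_to_port[packet.src] = event.port
        out_port = self.mac_to_port.get(packet.dst, of.OFPP_FLOOD)

        # Step 2: build an exact-match rule and send a flow_mod back.
        msg = of.ofp_flow_mod()
        msg.match = of.ofp_match.from_packet(packet, event.port)
        msg.idle_timeout = 10              # matches the idle timeout in Section 2.2
        msg.actions.append(of.ofp_action_output(port=out_port))
        msg.data = event.ofp               # step 3: release the buffered packet
        event.connection.send(msg)

def launch():
    core.registerNew(ExactMatchForwarder)
```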
Figure 1: Experimental testbed topology.

Figure 1 illustrates the testbed topology used in each of our experiments. The server has three 1-Gbps ports that are all connected to the OpenFlow switch under test. The first, eth0, is connected to the switch's control port. A POX [12] controller running the default L2-learning module listens on this Ethernet port. By default, it installs an exact-match rule for every new incoming flow. The next two server ports are connected to data ports on the switch. To prevent the data traffic from bypassing the switch, we place eth1 and eth2 in separate Linux network namespaces. Experiments with the OVS-based emulator run on the same machine; the emulated links are gated to 1 Gbps using Linux Traffic Control (TC). In no experiment (even those using OVS) was the host CPU utilization greater than 80%.

Our first experiment studies control-path delays by starting Redis clients on distinct source ports every 50 ms. Each client requests a single 64-byte value and we record the query completion time. Each request requires the switch to invoke the control path described above. The experiment lasts two minutes; we report results from the last minute to capture steady-state behavior. The switches under test belong to researchers from a number of research institutions. Each switch is connected to a local test server according to the topology described above, but the physical switch and server are located at the respective institution; we conduct experiments remotely via SSH. All of the servers are similarly provisioned; we verify that all Redis queries complete well within 1 ms when the required flow rules have been installed in advance.

Figure 2 shows the CDF of query completion times for each switch. The first observation is that all of the switches exhibit distinct delay distributions that are almost entirely due to variations in control path delays. Here OVS has an obvious advantage in terms of management CPU and flow table size, and its Redis queries complete considerably faster than on any of the hardware switches (including the 10-Gbps Monaco). Thus, latency-sensitive applications might achieve good performance using an OVS-based emulator but suffer delays when deployed on actual hardware switches. Moreover, significant differences exist even between the two 1-Gbps switches. Hence, we argue that for workloads (and/or controller software) that necessitate frequent flow setup, OVS-based emulations must account for individual switch artifacts to accurately predict application performance. Section 3 discusses our methodology for quantifying this effect and reproducing it in an OVS-based emulator.
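The 50-ms probe described above can be reconstructed in a few lines with the redis-py client. The sketch below is our illustrative reconstruction, not the authors' harness; the server address and key name are placeholders.

```python
# Illustrative reconstruction (not the authors' harness) of the latency probe:
# launch a fresh Redis client every 50 ms so each request arrives on a new TCP
# source port and exercises the control path, then record query completion time.
import threading
import time
import redis   # pip install redis

SERVER = "10.0.0.2"          # placeholder address of the Redis server behind eth2
results = []
lock = threading.Lock()

def one_query():
    t0 = time.time()
    client = redis.Redis(host=SERVER, port=6379)   # new connection -> new flow
    client.get("value64")                          # a single 64-byte value
    elapsed = time.time() - t0
    client.close()
    with lock:
        results.append((t0, elapsed))

threads = []
start = time.time()
while time.time() - start < 120:                   # two-minute experiment
    t = threading.Thread(target=one_query)
    t.start()
    threads.append(t)
    time.sleep(0.050)                              # one new client every 50 ms

for t in threads:
    t.join()

steady = sorted(e for (t0, e) in results if t0 - start > 60)   # last minute only
print("median query completion time: %.1f ms" % (1000 * steady[len(steady) // 2]))
```

Each query runs in its own thread so the 50-ms pacing is maintained even when individual queries take hundreds of milliseconds, mirroring the asynchronous client launches used in the experiments.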
Figure 2: Query completion times for Redis clients.

Figure 3: Throughput of concurrent Redis requests. Active flow counts fit within each switch's flow table.

Figure 4: The emulator manipulates OpenFlow events between the controller and OVS in order to approximate the performance of a given hardware switch.

2.2 Impact of flow table design

This section explores how differences in flow table hardware and management software affect the forwarding performance of OpenFlow switches. The flow table not only defines the maximum working set of flows that can be concurrently forwarded at line rate, but it also determines the ability of an OpenFlow controller to modify and update those rules. Most often, flow tables are implemented in TCAMs, which combine the speed of exact-match tables with the flexibility of wild-card rules that match many header fields. However, TCAMs are relatively complicated, expensive, and power hungry; economics often limits the physical number of entries available. Moreover, TCAM rule insertion algorithms often consider the current set of rules, and may rewrite them entirely, which can gate the maximum allowed rate of hardware rule insertions.

In general, emulating the correct flow table size is important since an OpenFlow network design should ensure that the flow working set (the concurrent set of flows) at any switch fits within its flow table. Flows that remain in the hardware flow table see line-rate forwarding; otherwise, they experience zero effective capacity. Given this zero/one property, it is easy to emulate the performance impact of table size. One could develop a controller module that keeps track of the current size of the flow table. It behaves normally when the flow table size is within some threshold; otherwise, the controller stops issuing new flow_mod events to mimic a full flow table.

Even if the flow working set is sufficiently small, however, the presence of shorter flows can still create flow table churn. This operating regime (by one account, 90% of flows in a data center send less than 10 KB [1]) is affected by other characteristics of flow table management. Four that we observe are buffering of flow installation (flow_mod) events, the use of a software flow table, automatic rule propagation between the software and hardware tables, and hardware rule timeouts. For example, the HP and Quanta switches both use software tables, but manage them differently. While the Quanta switch only uses the software table once the hardware table is full, the HP switch's management is more complicated. If flow_mod events arrive more rapidly than 8 per second, rules might be placed into the software table, which, like the Quanta switch's, forwards flows at only a few megabits per second. In fact, the firmware buffers new rules that may have arrived too fast for the hardware table to handle. Whether a rule enters the hardware or software table is a function of the event inter-arrival rate and the current capacity of the hardware table. A further complicating factor is a rule-promotion engine that migrates rules from the software to the hardware table. While a full exploration of all the interactions (and attempts to emulate them) is beyond the scope of this work, we use a flow-churn experiment to illustrate the impact of these differences.
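The table-size emulation suggested above (stop issuing flow_mods once a configurable threshold is reached, and reclaim the slot when the switch reports a flow removal) can be sketched as a thin layer in the controller. The sketch below is ours, in the same POX style as before; the threshold value and class name are placeholders.

```python
# Sketch of the table-size emulation from Section 2.2: behave normally while
# the emulated table has room, otherwise stop issuing flow_mods so unmatched
# traffic stays on the (slow) control path. Illustrative only.
from pox.core import core
import pox.openflow.libopenflow_01 as of

class TableSizeLimiter(object):
    def __init__(self, max_entries=511):        # e.g., a Monaco-sized table
        self.max_entries = max_entries
        self.installed = 0
        core.openflow.addListeners(self)

    def install(self, connection, packet, in_port, out_port):
        if self.installed >= self.max_entries:
            return False                         # mimic a full hardware table
        msg = of.ofp_flow_mod()
        msg.match = of.ofp_match.from_packet(packet, in_port)
        msg.idle_timeout = 10
        msg.flags = of.OFPFF_SEND_FLOW_REM       # ask for removal notifications
        msg.actions.append(of.ofp_action_output(port=out_port))
        connection.send(msg)
        self.installed += 1
        return True

    def _handle_FlowRemoved(self, event):
        # A rule timed out or was evicted: free one emulated table slot.
        self.installed = max(0, self.installed - 1)
```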
Using the same set-up as shown in Figure 1, we start Redis clients, asynchronously, every 10 ms, each requesting a 5-MB value from the server. Throughout the experiment, we ensure that the maximum number of concurrent flows does not exceed the size of the hardware flow table for each switch (2000, 1910, 1500, and 510 for OVS, Quanta, HP, and Monaco, respectively). We set the idle flow timeout to ten seconds for each switch.

Figure 3 plots the distribution of per-client throughputs. Even though the number of active flows does not exceed the flow table size, each switch behaves in a markedly different way. The Monaco and Quanta switches drop flows while waiting for existing entries to time out. In contrast, the HP switch never rejects flows, as rules are spread across hardware and software tables; the HP's hardware table is never full during the experiment. Although the Quanta switch also has a software table, it is not used to buffer up excess rules like the HP switch's. Instead, the controller receives a failure whenever the hardware table is full.
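The qualitative difference in Figure 3 can be seen with a toy model of the two table-management policies: reject-when-full (Monaco, Quanta) versus overflow into a slow software table (HP). The model and all of its numbers below are ours, chosen only to make the churn effect visible, not measured values.

```python
# Toy illustration (ours, with placeholder numbers) of why churn exhausts a
# flow table even when active flows fit: each rule lingers for the idle
# timeout after its flow finishes, so a reject-when-full switch fails new
# flows while an overflow switch serves them from a slow software table.
import heapq

ARRIVAL_GAP = 0.010     # one new request every 10 ms
HOLD_TIME = 10.0        # rules linger roughly one idle timeout (10 s)
HW_CAPACITY = 510       # roughly a Monaco-sized hardware table
SW_RATE_MBPS = 5        # placeholder software-table forwarding rate
LINE_RATE_MBPS = 1000

def run(policy, flows=3000):
    expiries, rates = [], []                     # min-heap of rule expiry times
    for i in range(flows):
        now = i * ARRIVAL_GAP
        while expiries and expiries[0] <= now:   # reclaim timed-out rules
            heapq.heappop(expiries)
        if len(expiries) < HW_CAPACITY:
            heapq.heappush(expiries, now + HOLD_TIME)
            rates.append(LINE_RATE_MBPS)         # rule fits: line rate
        elif policy == "overflow":
            rates.append(SW_RATE_MBPS)           # HP-style software table
        else:
            rates.append(0)                      # Monaco/Quanta-style failure
    return rates

for policy in ("reject", "overflow"):
    rates = run(policy)
    print("%-8s  line-rate flows: %3.0f%%   failed flows: %3.0f%%" % (
        policy,
        100.0 * rates.count(LINE_RATE_MBPS) / len(rates),
        100.0 * rates.count(0) / len(rates)))
```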
Figure 5: Performance of the HP switch (in red) as compared to an unmodified OVS (dashed) and our HP switch emulator (blue). (a) Distribution of query completion times for Redis clients. (b) Distribution of switch processing time for ingress packets along the control path. (c) Distribution of switch processing time for egress packets.

Figure 6: Performance of the Monaco switch (red) and our Monaco switch emulator (blue); OVS repeated from Figure 5 for reference.

Figure 7: Performance of the Quanta switch (red) and our Quanta switch emulator (blue); OVS repeated from Figure 5 for reference.

Figure 8: QQ-plots compare the emulator and hardware switch distributions of Redis query completion times.
Like control path delays, OVS emulation is a poor approximation for any of the emergent behaviors that arise from vendors' flow table management schemes. We leave measuring and reproducing these effects as future work.

3. EMULATING CONTROL PATH DELAY

This section examines how to measure and reproduce specific switch control path delays in an OVS-based emulator. Section 3.1 introduces a methodology for measuring the components of control path delay affected by switch design. We then design an emulator (Section 3.2) that incorporates these statistics. Section 3.3 shows that even this basic thumbprint-and-replicate strategy allows us to closely approximate control path delays for our target switches.

3.1 Measuring control path delays

As discussed in Section 2.1, OpenFlow switches introduce delays along the control path in two places:

- Ingress packet processing: the delay between the arrival of the first packet of a new flow and the packet_in event sent to the controller.
- Egress packet processing: the delay between the arrival of a flow_mod event and the transmission of the buffered packet through the egress port.

We find these delays to be markedly different across vendors, as ingress and egress events must leverage the hardware flow table and switch CPU. For each switch we capture its specific ingress and egress delay distributions using the set-up shown in Figure 1. This process uses the same experimental procedure as described in Section 2.1, but with two additions. First, we modify the L2-learning POX module such that it records the times at which packet_in events are received and packet_out events are sent. Second, tcpdump captures time stamps of all Redis packets on both eth1 and eth2. Since there is one frame of reference for all time stamps, it is straightforward to calculate the ingress and egress delay distributions.

Figure 5(a) shows the distribution of 64-byte Redis query completion times for the HP ProCurve switch (in red) and OVS (dashed black), both reproduced from Figure 2. The other two subfigures, 5(b) and 5(c), show the distributions of the component ingress and egress delays observed during the experiment. Compared with OVS, the HP switch introduces significantly more latency and loss (not shown) along the control path. As a result, Redis clients complete faster on OVS than on HP. Figures 6 and 7 plot the results for the remaining two hardware switches. The differences between the ingress and egress delay distributions account for the cross-vendor variations in Redis performance shown in Figure 2.

3.2 Control path emulation with OVS

Like other OpenFlow emulation architectures [11], we use OVS to emulate individual OpenFlow switches. However, to reproduce the control path delays observed in the prior section we design a proxy-assisted OVS emulator. As shown in Figure 4, an emulation proxy intercepts OpenFlow control traffic between the controller and OVS. Since OVS delays control packets differently from hardware switches, the emulator must shape the intercepted packets in a way that resembles the ingress and egress delay distributions of the target hardware switch. However, like the hardware switches, OVS exhibits its own control path delays. Thus, we must adjust the emulation in real time to account for varying OVS delays. As shown in Figure 5(b), half of the packet_in events experience 5-40 ms of delay in OVS.
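Structurally, the proxy in Figure 4 can be realized as a plain TCP relay on the OpenFlow session that reframes messages using the 8-byte OpenFlow header and holds back packet_in and flow_mod messages for an extra delay. The sketch below is our own reconstruction under stated assumptions (asyncio, OpenFlow 1.0 message types, OVS pointed at the proxy on port 6633, the real controller on 127.0.0.1:6634), not the authors' implementation; how the extra delay is chosen is described next.

```python
# Structural sketch (ours) of the emulation proxy in Figure 4: relay the
# OpenFlow 1.0 session between OVS and the controller, delaying packet_in
# (ingress) and flow_mod (egress) messages by an amount drawn from the
# target switch's measured distributions.
import asyncio
import struct

OFPT_PACKET_IN, OFPT_FLOW_MOD = 10, 14          # OpenFlow 1.0 message types

def sample_extra_delay(msg_type, observed_ovs_delay):
    """Placeholder hook; see the inverse-transform sampler below."""
    return 0.0

async def relay(reader, writer, direction):
    while True:
        header = await reader.readexactly(8)     # version, type, length, xid
        version, msg_type, length, xid = struct.unpack("!BBHL", header)
        body = await reader.readexactly(length - 8)
        if (direction == "to_controller" and msg_type == OFPT_PACKET_IN) or \
           (direction == "to_switch" and msg_type == OFPT_FLOW_MOD):
            await asyncio.sleep(sample_extra_delay(msg_type, 0.0))
        writer.write(header + body)
        await writer.drain()

async def handle_switch(switch_reader, switch_writer):
    # OVS connects to the proxy; the proxy opens the real controller connection.
    ctrl_reader, ctrl_writer = await asyncio.open_connection("127.0.0.1", 6634)
    await asyncio.gather(
        relay(switch_reader, ctrl_writer, "to_controller"),
        relay(ctrl_reader, switch_writer, "to_switch"))

async def main():
    server = await asyncio.start_server(handle_switch, "0.0.0.0", 6633)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

The sketch omits how the proxy measures OVS's own delay in real time (which requires correlating data-plane timestamps), and simply exposes the sampling step as a hook.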
If t_OVS is the observed OVS ingress delay and t_TS is the desired delay taken from the target switch's ingress distribution, then the emulator needs to introduce an additional delay of max(0, t_TS - t_OVS). We use inverse transform sampling [5] to compute the desired delay. The OVS emulator measures t_OVS as the time elapsed between the ingress packet and the packet_in event. We compute the quantile, q, of t_OVS by looking up the value in the previously measured ingress delay distribution of OVS (i.e., Figure 5(b)). This process has the effect of uniformly sampling the quantile, q. The emulator then uses q to index the ingress distribution of the target switch to obtain a uniformly sampled ingress delay, t_TS. This process minimizes the chance that t_TS < t_OVS. The egress delay sample is calculated in a similar fashion from the respective egress distributions.

3.3 Evaluation

Having established that different OpenFlow switches exhibit different behaviors, here our goal is to determine whether our proxy emulator can reproduce those performance artifacts successfully. To do so, we attempt to reproduce the Redis query performance distributions found in Figure 2 using our proxy-assisted OVS emulator. Figure 8 shows three QQ plots that compare the distributions of Redis performance on our proxy-assisted emulation versus the actual hardware. If the distributions are identical, all points will line up on the y = x diagonal. While far from perfect, in each case the emulator is able to reproduce the general shape of the distribution: the general slope of each QQ plot is close to 1. The means differ by the following amounts with 95% confidence intervals: 202 ± 72 ms for HP, 17 ± 5 ms for Monaco, and 63 ± 18 ms for Quanta.

Our current emulator is just a proof of concept; there are still systematic gaps between emulation and reality. For example, the QQ plot is above the y = x diagonal in Figure 8(c). Although the emulated distribution resembles the original one, the emulator may have failed to account for system- or OVS-specific overhead that caused the emulation to run slower. As shown in Figure 7, the emulated distribution (blue line) is almost always below the actual distribution (red line). Similar differences can be observed in Figure 6, where our emulated distribution is mostly slower than that on the Monaco switch. Because the gap between emulation and reality is relatively consistent, it is trivial to subtract a constant delay from the emulated distribution. This adjustment shifts the QQ plots in Figures 8(b) and (c) downwards to better align with the diagonal (not shown).

In contrast to the Monaco (8(b)) and Quanta (8(c)) experiments, the HP emulation (8(a)) accurately reproduces only the lower 65% of the distribution; it is less accurate for the upper quantiles. The blue curve in Figure 5(a) plots the CDF of the emulated distribution. Although proxy emulation does better than plain OVS, it remains slightly faster than the true HP hardware. Unfortunately, examining the control path's delay profile does not provide many clues. As illustrated in Figures 5(b) and (c), the emulated ingress and egress delay distributions along the control path (blue) match the shape of the original distributions (red). Evidently, there are additional artifacts that the emulator fails to capture. Since flow_mod events arrive at the switch faster than the hardware table can handle them, we suspect that some flows are matched in the software table, which causes the poor performance on the hardware switch.
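The quantile-matching step described in Section 3.2 operates directly on the two empirical delay samples; the sketch below (ours, standard library only) fills in the sample_extra_delay hook from the proxy sketch above. The uniform samples at the bottom are made-up numbers used only to make the example runnable.

```python
# Inverse-transform sampling step from Section 3.2 (our sketch): map the
# observed OVS delay to its quantile in OVS's measured distribution, then
# read off the same quantile in the target switch's distribution.
import bisect
import random

class DelaySampler(object):
    def __init__(self, ovs_delays, target_delays):
        # Empirical ingress (or egress) delay samples, in seconds.
        self.ovs = sorted(ovs_delays)
        self.target = sorted(target_delays)

    def extra_delay(self, t_ovs):
        # Quantile of the observed OVS delay within OVS's own distribution.
        q = bisect.bisect_left(self.ovs, t_ovs) / float(len(self.ovs))
        # The same quantile in the target switch's distribution gives t_TS.
        t_ts = self.target[min(int(q * len(self.target)), len(self.target) - 1)]
        return max(0.0, t_ts - t_ovs)

# Example with made-up numbers: OVS is fast, the target switch is slower.
ovs_sample = [random.uniform(0.005, 0.040) for _ in range(1000)]
hw_sample = [random.uniform(0.050, 0.400) for _ in range(1000)]
sampler = DelaySampler(ovs_sample, hw_sample)
print("extra delay for a 10 ms OVS ingress: %.3f s" % sampler.extra_delay(0.010))
```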
We are working to extend the emulator to capture more of these flow table artifacts. Eventually, by combining them with the current emulator, we hope to be able to emulate the HP switch more accurately. In summary, we are able to emulate hardware OpenFlow switches significantly better than OVS does, although substantial differences remain.
For the HP switch, replicating the software table properties will reduce this gap. For the Monaco and Quanta switches, we can further optimize the emulator to lower the overhead. Moreover, we can develop a tool that automatically analyzes and tunes the distributions of control path delays, for instance by subtracting some constant to improve accuracy.

4. RELATED WORK

Mininet is a popular option for emulating SDN-based networks [7, 11]. To emulate an OpenFlow network, it hosts OVS instances on a single server, connected in a virtual network topology that carries real application traffic. As we have shown, however, this approach lacks fidelity. The authors of OFLOPS recognize vendor-specific variations as we do [14]. Their benchmark framework injects various workloads of OpenFlow commands into switches. Based on the output, the authors have observed significant differences in how various hardware switches process OpenFlow events, further underscoring the need to capture such variations in emulation.

We focus primarily on vendor-specific variations in the control plane. Data planes can also differ across vendors. To help predict application performance, commercially available tools, such as those provided by Ixia and Spirent, test the data plane with various workloads. Other work analyzes the output packet traces and constructs fine-grained queuing models of switches [9, 10]. This work is complementary to and would combine well with ours. A related effort in modeling OpenFlow switches is the NICE project [3], which claims that OVS contains too much state and incurs unnecessary non-determinism. As a result, they implement only the key features of an OpenFlow switch, including such essential tasks as flow_mod and packet_out. We similarly simplify our abstraction of OpenFlow switches and model parameters that vary across vendors.

5. CONCLUSION

In this work, we have shown that OVS is a poor approximation of real OpenFlow hardware, and that simple steps can dramatically increase the accuracy of emulation. At the moment, we are still trying to address two challenges:

1. Changing the experimental workload may affect the software models of switches. By varying the workload, we hope to construct more accurate models that can predict emergent behaviors of the switch under complex network environments.

2. The CPU poses a major performance bottleneck for OVS. In cases where the load on OVS is high or where the hardware switch is fast, the OVS-based emulator is likely to be slower than the hardware. We are exploring the use of time dilation [6] to slow down the end hosts' traffic, thus preventing the CPU from throttling OVS.

When both of these challenges are solved, we will be able to further shrink the gap between the emulated and actual performance. Eventually, we hope to deploy an emulator that is able to replicate tens or hundreds of OpenFlow switches with high fidelity at scale.

6. ACKNOWLEDGMENTS

This work is supported in part by the National Science Foundation through grant CNS-... We would like to thank Marco Canini and Dan Levin for granting us remote access to their Quanta switch, and George Porter for the use of a Fulcrum Monaco switch.

7. REFERENCES

[1] T. Benson, A. Akella, and D. A. Maltz. Network traffic characteristics of data centers in the wild. In Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement, 2010.
[2] Broadcom. Personal communications.
[3] M. Canini, D. Venzano, P. Perešíni, D. Kostić, and J. Rexford. A NICE way to test OpenFlow applications. In Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation, 2012.
[4] A. R. Curtis, J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee. DevoFlow: Scaling flow management for high-performance networks. In Proceedings of the ACM SIGCOMM Conference, 2011.
[5] L. Devroye. Non-Uniform Random Variate Generation, chapter 2.1. Springer-Verlag, 1986.
[6] D. Gupta, K. V. Vishwanath, M. McNett, A. Vahdat, K. Yocum, A. C. Snoeren, and G. M. Voelker. DieCast: Testing distributed systems with an accurate scale model. ACM Transactions on Computer Systems, 29(2):4:1-4:48, May 2011.
[7] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown. Reproducible network experiments using container-based emulation. In Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies, 2012.
[8] M. Hibler, R. Ricci, L. Stoller, J. Duerig, S. Guruprasad, T. Stack, K. Webb, and J. Lepreau. Large-scale virtualization in the Emulab network testbed. In Proceedings of the USENIX Annual Technical Conference, 2008.
[9] N. Hohn, D. Veitch, K. Papagiannaki, and C. Diot. Bridging router performance and queuing theory. In Proceedings of the ACM SIGMETRICS Conference, 2004.
[10] D. Jin, D. Nicol, and M. Caesar. Efficient gigabit Ethernet switch models for large-scale simulation. In IEEE Workshop on Principles of Advanced and Distributed Simulation, pages 1-10.
[11] B. Lantz, B. Heller, and N. McKeown. A network in a laptop: Rapid prototyping for software-defined networks. In Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks, pages 19:1-19:6, 2010.
[12] NOXRepo.org. About POX.
[13] R. Ricci, J. Duerig, P. Sanaga, D. Gebhardt, M. Hibler, K. Atkinson, J. Zhang, S. Kasera, and J. Lepreau. The Flexlab approach to realistic evaluation of networked systems. In Proceedings of the 4th USENIX Conference on Networked Systems Design and Implementation, 2007.
[14] C. Rotsos, N. Sarrar, S. Uhlig, R. Sherwood, and A. W. Moore. OFLOPS: An open framework for OpenFlow switch evaluation. In Proceedings of the 13th International Conference on Passive and Active Measurement, pages 85-95, 2012.
[15] A. Vahdat, K. Yocum, K. Walsh, P. Mahadevan, D. Kostić, J. Chase, and D. Becker. Scalability and accuracy in a large-scale network emulator. In Proceedings of the 5th USENIX Symposium on Operating Systems Design and Implementation, 2002.