Fast Recovery in Software-Defined Networks
Niels L. M. van Adrichem, Benjamin J. van Asten and Fernando A. Kuipers
Network Architectures and Services, Delft University of Technology
Mekelweg 4, 2628 CD Delft, The Netherlands

Abstract—Although Software-Defined Networking and its implementation OpenFlow facilitate managing networks and enable dynamic network configuration, recovering from network failures in a timely manner remains non-trivial. The process of (a) detecting the failure, (b) communicating it to the controller and (c) recomputing the new shortest paths may result in an unacceptably long recovery time. In this paper, we demonstrate that current solutions, employing either reactive restoration or proactive protection, indeed suffer long delays. We introduce a failover scheme with per-link Bidirectional Forwarding Detection sessions and preconfigured primary and secondary paths computed by an OpenFlow controller. Our implementation reduces the recovery time by an order of magnitude compared to related work, which is confirmed by experimental evaluation in a variety of topologies. Furthermore, the recovery time is shown to be constant irrespective of path length and network size.

I. INTRODUCTION

Recently, Software-Defined Networking (SDN) has received much attention, because it allows networking devices to focus exclusively on data plane functions rather than control plane functionality. Instead, a central entity, often referred to as the controller, performs the control plane functionality, i.e. it monitors the network, computes forwarding rules and configures the networking nodes' data planes. One of the benefits of the central controller introduced in SDN is the possibility to monitor the network for performance and functionality and to reprogram it when necessary.
While the controller can monitor overall network health at a granularity as fine as per-flow characteristics, such as throughput, delay and packet loss [1], its most basic task is to maintain end-to-end connectivity between nodes. Hence, when a link breaks, the controller needs to reconfigure the network to restore or maintain end-to-end connectivity for all paths. However, the time-to-restoration of a broken path includes, besides the detection time, the delay introduced by propagating the event notification to the controller, path re-computation, and reconfiguration of the network by the controller. As a result, controller-initiated path restoration may take over 100 ms to complete, which is considered too long for provider networks, where at most 50 ms is tolerable [2].

In this paper, we introduce a fast (sub-50-ms) failover scheme relying on link-failure detection, combining primary and backup paths configured by a central OpenFlow [3] controller with per-link failure detection using Bidirectional Forwarding Detection (BFD) [4], a protocol that detects failures by detecting packet loss in frequent streams of control messages.

Our work is organized as follows. In section II, we discuss several failure detection mechanisms and analyze how network properties and BFD session configuration influence restoration time. In section III, we introduce and discuss the details of our proposed failover mechanism. Section IV presents our experiments, after which the results are discussed and analyzed to verify our failover mechanism. Related work is discussed in section V. Finally, section VI concludes this paper.

II. FAILURE DETECTION MECHANISMS

In this section, we introduce different failure detection mechanisms and discuss their suitability for fast recovery. In subsection II-A we analyze and minimize the elements that contribute to failure detection, while subsection II-B introduces OpenFlow's Fast Failover functionality.
Before a switch can initiate path recovery, a failure must be detected. Depending on the network interface, requirements for link failure detection are defined for each network layer. Current OpenFlow implementations are mostly based on Ethernet networks. However, Ethernet was not designed with high requirements on availability and failure detection. In Ethernet, the physical layer sends heartbeats with a period of 16±8 ms over the link when no session is active. If the interface does not receive a response to the heartbeats within a set interval of 50–150 ms, the link is presumed disconnected [5]. Therefore, Ethernet cannot meet the sub-50-ms requirement and failure detection must be performed by higher network protocols. On the data-link layer multiple failure detection protocols exist, such as the Spanning Tree Protocol (STP) or Rapid STP [6], which are designed to maintain the distribution tree in the network by updating port status in switches. These protocols, however, can be classified as slow, as detection windows are in the order of seconds. Instead, in our proposal and implementation we use BFD [4], which has proven capable of detecting failures within the required sub-50-ms detection window.

A. Bidirectional Forwarding Detection

The Bidirectional Forwarding Detection (BFD) protocol implements a control and echo message mechanism to detect liveliness of links or paths between preconfigured end-points. Each node transmits control messages with the current state of the monitored link or path. A node receiving a control message replies with an echo message containing its respective session status. A session is built up with a three-way handshake, after which frequent control messages confirm
absence of a failure between the session end-points. The protocol is designed to be protocol agnostic, meaning that it can be used over any transport protocol to deliver or improve failure detection on that path. While it is technically possible to deploy BFD directly on Ethernet, MPLS or IP, Open vSwitch [7], a popular OpenFlow switch implementation also used in our experiments, implements BFD using a UDP/IP stream.

The failure detection time T_det of BFD depends on the transmit interval T_i and the detection time multiplier M. T_i defines the frequency of the control messages, while M defines the number of lost control packets before a session end-point is considered unreachable. Hence, the worst-case failure detection time equals T_det = (M + 1) · T_i. Typically, a multiplier of M = 3 is considered appropriate to prevent small packet loss from triggering false positives. The transmit interval T_i is lower-bounded by the round-trip-time (RTT) of the link or path. Furthermore, BFD intentionally introduces a 0 to 25% time jitter to prevent packet synchronization with other systems on the network. The minimal BFD transmit interval is given in equations (1) and (2), where T_i,min is the minimal required BFD transmit interval, T_RTT is the round-trip time, T_Trans is the transmission delay, T_Prop is the time required to travel a link, in which we include delay introduced by routing table lookup, L is the number of links in the path and T_Proc is the processing time consumed by the BFD end-points:

    T_i,min = 1.25 · T_RTT                          (1)
    T_RTT = 2 · (T_Trans + L · T_Prop + T_Proc)     (2)

It is difficult to optimize the session RTT by improving T_Trans and T_Proc, as those values are configuration independent. However, by using link monitoring (L = 1), the interval time is minimized and smaller failure detection times become possible; a great improvement compared to per-path monitoring, where the number of links L is upper-bounded by the diameter of the network.
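The timing relations above can be sketched numerically as follows (a minimal sketch; the function and variable names are ours, not part of the paper or of any BFD implementation):

```python
def detection_time(transmit_interval_ms: float, multiplier: int = 3) -> float:
    """Worst-case failure detection time T_det = (M + 1) * T_i."""
    return (multiplier + 1) * transmit_interval_ms

def min_transmit_interval(rtt_ms: float, jitter_factor: float = 1.25) -> float:
    """Minimal transmit interval T_i,min = 1.25 * T_RTT (equation (1));
    the 25% margin absorbs BFD's intentional transmit jitter."""
    return jitter_factor * rtt_ms

# A per-link session with a 1 ms interval and M = 3 detects a failure
# within at most (3 + 1) * 1 ms = 4 ms:
assert detection_time(1.0) == 4.0
```

With per-link monitoring the RTT, and hence the minimal interval, stays small regardless of how long the protected end-to-end path is.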
An upper-bound for T_RTT can easily be determined with packet analysis, where we assume that the process of processing and forwarding is equal for each hop. Under high link loads, the round-trip-time (RTT) can vary considerably and might cause BFD to produce false positives. During the development of TCP [8], a similar problem was identified in [9], where the retransmission interval of lost packets is computed as β · T_RTT. The constant β accounts for the variation of inter-arrival times, which we incorporate in equation (3):

    T_i,min = 1.25 · β · T_RTT    (3)

A fixed and conservative value β = 2 is recommended [10].

B. Liveliness monitoring with OpenFlow

From OpenFlow protocol version 1.1 onwards, Group Table functionality is supported. Group Tables extend OpenFlow configuration rules, allowing advanced forwarding and monitoring at switch level. In particular, the Fast Failover Group Table can be configured to monitor the status of ports and interfaces and to switch forwarding actions accordingly, independent of the controller. Open vSwitch implements the Fast Failover Group Table, where the status of the ports is determined by the link status of the corresponding interfaces. Ideally, the status of BFD should be interpreted by the Group Table, as suggested in figure 1 and implemented by us in section IV.

Fig. 1: OpenFlow Fast Failover Group Table: a flow rule forwards matched traffic to a Fast Failover Group Table, whose action buckets (primary out-port P, backup out-port B, and the in-port for crankback) are selected based on the watched BFD liveliness status of each port.

III. PROPOSAL

We propose to divide the recovery process into two steps. The first step consists of a fast switch-initiated recovery based on preconfigured forwarding rules guaranteeing end-to-end connectivity. The second step involves the controller calculating and configuring new optimal paths.
Switches initiate their backup path after detecting a link failure, meaning that each switch receives a preconfigured backup path in terms of Fast Failover rules. Link loss is detected by configuring per-link, instead of per-path, BFD sessions. Using per-link BFD sessions introduces several advantages:

1) A lower detection time due to a decreased session round-trip-time (RTT) and thus a lower transmit interval.
2) Decreased message complexity, and thus network overhead, as the number of running BFD sessions is limited to 1 per link, instead of the product of all end-to-end sessions and their intermediate links.
3) Removal of false positives. As each session spans a single link, false positives due to network congestion can easily be removed by prioritizing the small stream of control packets.

In situations where a switch has no feasible backup path, it returns packets to the previous switch by crankback routing. As the incoming port is part of the packet-matching filter of OpenFlow, preceding switches have separate rules to process returned packets and forward them across their backup path. This implies a recursive returning of packets to preceding switches until they can be forwarded over a feasible link- or node-disjoint path. Figure 2 shows an example network with primary and backup paths, as well as two failover situations.
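The controller-side computation of a backup path that avoids a protected link can be sketched as below (a sketch, not the authors' code; the topology, names and BFS routine are ours). Crankback arises naturally when the only surviving path from the detecting switch leads back through its predecessor:

```python
from collections import deque

def shortest_path(adj, src, dst, banned=frozenset()):
    """BFS shortest path in an undirected graph, avoiding `banned` links."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:       # walk predecessors back to src
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                q.append(v)
    return None  # destination unreachable without the banned links

# Ring A-B-C-D-E-A; primary path A-B-C; protect link (B, C).
adj = {"A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
       "D": ["C", "E"], "E": ["A", "D"]}
# B's backup path cranks back through A: B -> A -> E -> D -> C.
print(shortest_path(adj, "B", "C", banned={frozenset(("B", "C"))}))
```

Installed as Fast Failover rules per intermediate switch, such paths let packets already in flight reach the destination without controller involvement.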
Fig. 2: A topology showing its primary and secondary paths, including two backup scenarios in case of specific link failures: (a) a broken link resulting in usage of a disjoint secondary path, and (b) a broken link resulting in usage of a backup path by crankback routing.

By instructing all switches up front with a failover scenario, switches can initiate a failover independent of the SDN controller or connectivity polling techniques. The OpenFlow controller computes primary and secondary paths from every intermediate switch to the destination to supply the necessary redundancy and preconfigures the switches accordingly. Although the preconfigured backup path may not be optimal at the time of activation, as in subfigure 2b, it is a functional path that is applied with low delay. Additionally, once the controller is informed of the malfunction, it can reconfigure the network to replace the current backup path by a more suitable path without traffic interruption, as performed in [11].

The transmit interval of BFD is lower-bounded by the RTT between the session endpoints. Since we are configuring per-link sessions, the transmit interval decreases greatly. For example, in our experimental testbeds we have an RTT below 0.5 ms, thus allowing a BFD transmit interval of 1 ms. Although this might appear to introduce a large overhead, per-link sessions limit the number of traversing BFD sessions to 1 per link. In comparison, per-path sessions imply O(N²) shortest paths that may travel each link. Even though most of them may be forwarded to their endpoint without inspection, each node has to maintain O(N) active sessions. A BFD control packet consists of at most 24 bytes with authentication; encapsulation in UDP, IPv4 and Ethernet datagrams results in 90 bytes = 720 bits on the wire.
Sent once every 1 ms, this results in an overhead of 0.067% and 0.0067% in, respectively, 1 and 10 Gbps connections.

IV. EXPERIMENTAL EVALUATION

In this section, we first discuss our experimental setup, followed by the measurement techniques used, the different experiments and finally the results.

A. Testbed environments

We have performed our experiments on two types of physical testbeds: a testbed of software switches and a testbed of hardware switches. Our software-switch testbed consists of 24 physical, general-purpose servers enhanced with multiple network interfaces and software to run as networking nodes. Each server contains a 64-bit Quad-Core Intel Xeon CPU running at 3.0 GHz with 4.0 GB of main memory and has 6 independent 1 Gbps networking interfaces installed, and can hence perform the role of a 6-degree networking node. Links between nodes are realized using direct physical Ethernet connections, hence removing any measurement inaccuracies introduced by possible intermediate switches or virtual overlays. OpenFlow switch capability is realized using the Open vSwitch [7] software implementation, installed on the Ubuntu 13.10 operating system running a generic GNU/Linux kernel.

The hardware-switch testbed we used was graciously made available to us by SURFnet, the NREN of the Netherlands. It consists of 5 Pica8 P-3290 switches, which run in Open vSwitch mode to deliver OpenFlow functionality, meaning that they implement the same interfaces as defined by Open vSwitch.

B. Recovery and measurement techniques

We use two complementary techniques to implement our proposal: (1) We use OpenFlow's Fast Failover Group Tables to quickly select a preconfigured backup path in case of link failure. The Fast Failover Group Tables continuously monitor a set of ports, while incoming packets destined for a specific Group Table are forwarded to the first active and alive port from its set of monitored ports.
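On Open vSwitch, the two techniques can be configured roughly as follows (a hedged sketch: the bridge name `br0`, the port numbers and the interface name `eth1` are hypothetical, and exact option syntax may differ between Open vSwitch releases):

```shell
# Fast-failover group (type=ff): watch the primary port 1 and fall back
# to backup port 2 when the watched port is reported down.
ovs-ofctl -O OpenFlow13 add-group br0 \
  "group_id=1,type=ff,bucket=watch_port:1,output:1,bucket=watch_port:2,output:2"

# Steer a flow through the group instead of a fixed output port.
ovs-ofctl -O OpenFlow13 add-flow br0 "in_port=3,actions=group:1"

# Enable a per-link BFD session on the primary interface; min_tx/min_rx
# are in milliseconds. Intervals below the stock minimum required the
# authors' patch, as described in this section.
ovs-vsctl set Interface eth1 bfd:enable=true bfd:min_tx=100 bfd:min_rx=100
```

The group selects the first bucket whose watched port is alive, so failover happens in the data plane without any controller round trip.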
Therefore, when link functionality of the primary output port fails, the Group Table automatically switches to the secondary set of output actions. After restoring link functionality, the Group Tables revert to the primary path. (2) Link failure itself is detected using the BFD protocol. We chose a BFD transmit interval conforming to equation (3), with β = 2 to account for irregularities caused by queuing. Although the RTT allowed smaller transmit intervals, the implementation of BFD enforced a minimum window of 100 ms. We use a detection multiplier of M = 3 lost control messages to prevent false positives.

At the time of writing, both BFD and Group Table functionality are implemented in the most recent snapshots of the Open vSwitch development repository [12]. Although both BFD status reports and Fast Failover functionality operate
correctly, we found that the Fast Failover functionality did not include the reported BFD status to initiate the backup sequence and only acts on (administrative) link-down events. To resolve this problem, we wrote a small patch for Open vSwitch to include the reported interface BFD status [13]. Furthermore, the patch includes minor changes to allow BFD transmit intervals smaller than 100 ms.

Fig. 3: Topologies used in our experiments, including functional and failover scenarios in case of specific link failures: (a) Simple Network Topology, (b) Ring Network Topology, (c) USNET Topology.

In order to generate traffic on the network we use the pktgen [14] packet generator, which can generate packets with a fixed delay and sequence numbers. We use the missing sequence numbers of captured packets to determine the start and recovery time of the failure. Pktgen operates from kernel space; therefore high timing accuracy is expected, and was confirmed at an interval accuracy of 0.05 ms. In order to get 0.1 ms timing accuracy, pktgen is configured to transmit packets at a 0.05 ms interval.

C. Experiments

This subsection describes the performed experiments. We run baseline experiments using link-failure detection by regular Loss-of-Signal on both testbeds. Additionally, we run experiments using link-failure detection by per-link BFD sessions on the software-switch testbed to show the improvement. In general, links are disconnected using the Linux ifdown command. To prevent the administrative interface change from influencing the experiment, the ifdown command is issued from the switch opposite to the switch performing recovery.

The experiments are executed as follows. We start our packet generator and capturer at t = 0. At t_failure, a failure is introduced in the primary path.
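The recovery-time computation from missing sequence numbers can be sketched as follows (a sketch; the function name and the sample values are ours for illustration):

```python
def recovery_time_ms(received_seqs, interval_ms):
    """Estimate downtime as the largest run of missing sequence numbers
    times the fixed transmit interval of the packet generator."""
    seqs = sorted(received_seqs)
    largest_gap = max((b - a - 1 for a, b in zip(seqs, seqs[1:])), default=0)
    return largest_gap * interval_ms

# Packets 0..99 sent at a fixed 0.1 ms interval; 20..26 lost during failover:
got = [s for s in range(100) if not 20 <= s <= 26]
print(recovery_time_ms(got, 0.1))  # 7 missing packets * 0.1 ms = 0.7 ms
```

The measurement resolution is the generator interval itself, which is why a small interval is needed for sub-millisecond accuracy.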
The packet capture program records missing sequence numbers, while the failover mechanism autonomously detects the failure and switches to the backup path. At t_recovery, connectivity is restored, which is detected by a continuation of arriving packets; the recording of missing packets is stopped and the recovery time is computed.

Basic functionality: In our first experiment we measure and compare the time needed to regain connectivity in a simple network to prove basic functionality. The network is depicted in figure 3a and consists of two hosts named H1 and H2, which connect via paths A-B-C (primary) and A-C (backup). In this experiment, we stream data from H1 to H2, deliberately break the primary path by disconnecting link A-B and measure the time needed to regain connectivity.

Crankback: After confirming restoration functionality, we need to confirm crankback functionality. To do so, we introduce a slightly more complicated ring topology in figure 3b, in which the primary path is set as A-B-C-D-E. We break link C-D, enforcing the largest crankback path to activate, resulting in the path A-B-C-B-A-E.

Extended experiments: To test scalability, we perform additional experiments in which we simulate a real-world network scenario. To do so, we configure our software-switch testbed in the USNET topology shown in figure 3c. We set up connections from all East-coast routers to all West-coast routers, thereby exploiting 20 shortest paths. We configure each switch to initially forward packets along the shortest path to the destination, as well as a backup path from each intermediate switch to the destination omitting the protected link. At each iteration we uniformly select a link from the set of links participating in one or more shortest paths and break it.

D. Results and analysis

Our first set of results, displayed in figure 4, shows baseline measurements without BFD, performed for the simple topology on both testbeds.
This experiment shows that link-failure detection based on Loss-of-Signal implies an infeasibly long recovery time in both our software- and hardware-switch testbeds. The Open vSwitch testbed shows an average recovery time of 4759 ms. Although the hardware-switch testbed performs better with an average recovery time of 1697 ms, both recovery durations are unacceptable in carrier-grade networks.

In order to determine the time consumed by administrative processing in addition to actual failure detection, we repeated the previous experiments with the exception that we administratively brought down the interface at the switch performing the recovery. By doing this, we only trigger and measure the time consumed by the administrative processes managing link status. We found that Open vSwitch on average needs 5.9 ms to restore connectivity after detection occurs. Due to this high value, our patch omits the administrative link status update and directly checks the BFD status instead.

Currently, the firmware of the hardware switches does not support BFD; therefore we only verified recovery using link-failure detection by BFD on the software-switch testbed. Figure 5 shows 100 samples of measured recovery delay for the simple and ring topologies using three different BFD transmit intervals, namely 15 ms, 5 ms and 1 ms. For the
simple topology, we measure a recovery time that is a factor between 2.5 and 4.1 larger than the configured BFD transmit interval, due to the detection multiplier of M = 3. As shown in figure 7, for each BFD transmit interval we measure average recovery times, with 95% confidence intervals, of 3.4±0.7 ms, 15.0±2.2 ms and 42.1±5.0 ms. The results show that, especially with links having low RTTs, a great decrease in recovery time can be reached. At a BFD transmit interval of 1 ms, we reach a maximal recovery time of 6.7 ms.

For the ring topology, after 100 samples of measuring the recovery time for the three different BFD transmit intervals, figure 7 shows averages and 95% confidence intervals of, respectively, 3.3±0.8 ms, 15.4±2.7 ms and 46.2±5.1 ms for the selected transmit intervals. At a BFD transmit interval of 1 ms, we reach a maximal recovery time of 4.8 ms. Even though the ring topology is almost twice as large as the simple topology, recovery times remain constant due to the per-link detection of failures and the crankback failover route.

In order to verify the scalability of our implementation, we repeated the experiments by configuring our software-switch testbed with the USNET topology, hence simulating a real-world network. For each iteration, we average the recovery times experienced by the affected destinations. Figure 6 shows over 340 samples taken at a BFD transmit interval of 5 ms, resulting in an average and 95% confidence interval of 13.6±2.6 ms, also shown in figure 7.

Fig. 4: Baseline measurements using Loss-of-Signal failure detection (Open vSwitch and Pica8 P-3290 testbeds).

Fig. 5: Recovery times for the selected BFD transmit intervals (15 ms, 5 ms and 1 ms) in the simple and ring topologies.

Fig. 6: Recovery times for the 5 ms transmit interval in the extended topology, with average and 95% confidence interval.
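The reported averages and 95% confidence intervals can be reproduced from raw per-sample recovery times as follows (a sketch using the standard normal approximation; the sample values below are invented for illustration, not the paper's data):

```python
import statistics

def mean_ci95(samples):
    """Sample mean and half-width of a normal-approximation 95% CI."""
    m = statistics.mean(samples)
    half = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    return m, half

# Illustrative recovery-time samples (ms) at a 1 ms transmit interval:
samples = [3.1, 3.6, 2.9, 3.4, 3.5, 3.2, 3.8, 3.0, 3.3, 3.2]
m, h = mean_ci95(samples)
print(f"{m:.1f} ± {h:.1f} ms")  # prints "3.3 ± 0.2 ms"
```

With 100 samples per configuration, as in the experiments, the confidence interval shrinks by a further factor of sqrt(10) relative to this 10-sample illustration.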
The high frequency of arriving BFD and probe packets congested the software implementation of Open vSwitch, showing the necessity of hardware line-card support for BFD sessions. As a consequence, we were unable to further decrease the polling interval and had to decrease the frequency of probe packets to 10 per ms, slightly reducing the accuracy of this set of measurements. Although we experienced software system integration issues, showing the need to optimize switches for degree and traffic throughput, recovery times remain constant independent of path length and network size, showing that our implementation also scales to larger real-world networks.

Prior to implementing BFD in the Group Tables, the Fast Failover Group Tables showed a 2-second packet loss when reverting to the primary path after repair of a failure. With BFD, switch-over is delayed until the link status is confirmed to be restored and no packet loss occurs, resulting in a higher stability of the network.

Fig. 7: Bar diagram summarizing average recovery times and 95% confidence intervals on our Open vSwitch testbed using BFD at the selected transmit intervals (1 ms, 5 ms and 15 ms) for the simple, ring and USNET topologies.

V. RELATED WORK

Sharma et al. [15] show that controller-initiated recovery needs approximately 100 ms to regain network connectivity. They propose to replace controller-initiated recovery with path-failure detection using BFD and preconfigured backup paths. The switches executing BFD detect path failure and revert to previously programmed backup paths without controller intervention, resulting in a reaction time between 42 and 48 ms. However, the correctness and speed of detecting path failure depend highly on the configuration and switch implementation of BFD. According to [4], a detection time of 50 ms can be achieved by using an aggressive session with a transmit interval of 16.7 ms and a window multiplier of 3.
However, the authors of [15] do not provide details on their configuration of BFD. The BFD transmit interval is lower-bounded by the propagation delay between the end-points of the BFD sessions; therefore we claim that path-failure detection under-performs compared to the per-link failure detection and protection scheme presented in this paper.

Kempf et al. [16][17] propose an alternative OpenFlow switch design that allows integrated operations, administration and management (OAM) execution, foremost connectivity monitoring, in MPLS networks by introducing logical group ports. Introducing ring topologies in Ethernet-enabled networks and applying link-failure detection results to trigger failover forwarding rules results in an average failover time of 11.5 ms [18], with a few peaks between 14 ms and 20 ms. However, the introduced logical group ports remain unstandardized and are hence not part of shipped OpenFlow switches.
TABLE I: Summary of results and most important differences compared to related work.

Reference    | Avg recovery time ±95% CI | Network size (N, L) | Detection mechanism | Recovery mechanism
[15]         | 42–48 ms                  | (28, 40)            | Per-path BFD        | OpenFlow Fast Failover Group Tables using virtual ports
[16][17]     | 28.2 ± 0.5 ms             | N.A.                | Per-LSP OAM + BFD   | Custom extension of OpenFlow
[18][19]     | ± 4.17 ms                 | (7, 7)              | Undocumented        | Custom auto-reject mechanism
This paper   | 3.3 ± 0.8 ms              | (24, 43)            | Per-link BFD        | Commodity OpenFlow Fast Failover Group Tables

Finally, Sgambelluri et al. [19] perform segment protection in Ethernet networks, depending on OpenFlow's auto-reject function to remove flows of failed interfaces. The largest part of their experimental work involves Mininet emulations, which are considered inaccurate in terms of timing due to their high level of virtualization [20]. The experiment performed on a physical testbed shows a switch-over time of at most 64 ms; however, the authors do not describe the method of link-failure detection, which makes the results difficult to reproduce.

Table I shows the main differences between the aforementioned proposals and our work. Where the previous best recovery time was close to 30 ms, section IV showed that our configuration regains connectivity within a mere 3.3 ms.

VI. CONCLUSION

Key to supporting high-availability services in a network is the network's ability to quickly recover from failures. In this paper we have proposed a combination of protection mechanisms for OpenFlow Software-Defined Networks, based on preconfigured backup paths and fast link-failure detection. Primary and secondary path pairs are configured via OpenFlow's Fast Failover Group Tables, enabling path protection in case of link failure and deploying crankback routing when necessary. We configure per-link, in contrast to the regular per-path, Bidirectional Forwarding Detection (BFD) sessions to quickly detect link failures.
This limits the number of traversing BFD sessions to be linear in the network size and minimizes the session RTT, enabling us to further decrease the BFD sessions' window intervals to detect link failures faster. By integrating each interface's link status into Open vSwitch's Fast Failover Group Table implementation, we further optimize the recovery time.

After performing baseline measurements of link-failure detection by Loss-of-Signal on both a software- and a hardware-switch testbed, we have performed experiments using BFD on our adapted software-switch testbed. In our experiments we have evaluated recovery times by counting lost segments in a data stream. Our measurements show that we already reach a recovery time considered necessary for carrier-grade performance, being sub-50-ms, at a 15 ms BFD transmit interval. By further decreasing the BFD interval to 1 ms we reach an even faster recovery time of 3.3 ms, which we consider essential as network demands continue to increase. Compared to average recovery times varying from 28.2 to 48 ms in previous work, we show an enormous improvement. Since we perform per-link failure detection and preconfigure each node on a path with a backup path, the achieved recovery time is independent of path length and network size, which is confirmed by experimental evaluation.

ACKNOWLEDGMENT

We thank SURFnet, and in particular Ronald van der Pol, for allowing us to conduct experiments on their hardware-switch based OpenFlow testbed. This research has been partly supported by the EU FP7 Network of Excellence in Internet Science EINS (project no. 288021).

REFERENCES

[1] N. L. M. van Adrichem, C. Doerr, and F. A. Kuipers, "OpenNetMon: Network monitoring in OpenFlow software-defined networks," in Network Operations and Management Symposium (NOMS), 2014 IEEE.
[2] B. Niven-Jenkins, D. Brungard, M. Betts, N. Sprecher, and S. Ueno, "Requirements of an MPLS Transport Profile," RFC 5654 (Proposed Standard), Internet Engineering Task Force, Sep. 2009.
[3] N. McKeown, T.
Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, 2008.
[4] D. Katz and D. Ward, "Bidirectional Forwarding Detection (BFD)," RFC 5880 (Proposed Standard), Internet Engineering Task Force, Jun. 2010.
[5] L. Wang, R.-F. Chang, E. Lin, and J. C.-s. Yik, "Apparatus for link failure detection on high availability ethernet backplane," Aug. 2007, US Patent 7,260,066.
[6] D. Levi and D. Harrington, "Definitions of Managed Objects for Bridges with Rapid Spanning Tree Protocol," RFC 4318 (Proposed Standard), Internet Engineering Task Force, Dec. 2005.
[7] B. Pfaff, J. Pettit, K. Amidon, M. Casado, T. Koponen, and S. Shenker, "Extending networking into the virtualization layer," in HotNets, 2009.
[8] J. Postel, "Transmission Control Protocol," RFC 793 (INTERNET STANDARD), Internet Engineering Task Force, Sep. 1981.
[9] V. Jacobson, "Congestion avoidance and control," in ACM SIGCOMM Computer Communication Review, vol. 18, no. 4. ACM, 1988.
[10] D. D. Clark, "Window and Acknowledgement Strategy in TCP," RFC 813, Internet Engineering Task Force, 1982.
[11] M. Reitblatt, N. Foster, J. Rexford, and D. Walker, "Consistent updates for software-defined networks: Change you can believe in!" in Proceedings of the 10th ACM Workshop on Hot Topics in Networks, 2011.
[12] Open vSwitch Git Web Front-End.
[13] N. L. M. van Adrichem, B. J. van Asten, and F. A. Kuipers, https://github.com/TUDelftNAS/SDN-OpenFlowRecovery, Jul. 2014.
[14] R. Olsson, "pktgen the linux packet generator," in Proceedings of the Linux Symposium, Ottawa, Canada, 2005.
[15] S. Sharma, D. Staessens, D. Colle, M. Pickavet, and P. Demeester, "OpenFlow: meeting carrier-grade recovery requirements," Computer Communications, 2012.
[16] J. Kempf, E. Bellagamba, A. Kern, D. Jocha, A. Takacs, and P. Skoldstrom, "Scalable fault management for OpenFlow," in Communications (ICC), 2012 IEEE International Conference on, 2012.
[17] E. Bellagamba, J. Kempf, and P.
Skoldstrom, "Link failure detection and traffic redirection in an OpenFlow network," May 2011, US Patent App. 13/111,609.
[18] A. Sgambelluri, A. Giorgetti, F. Cugini, F. Paolucci, and P. Castoldi, "Effective flow protection in OpenFlow rings," in Optical Fiber Communication Conference. Optical Society of America, 2013.
[19] ——, "OpenFlow-based segment protection in Ethernet networks," Optical Communications and Networking, IEEE/OSA Journal of, vol. 5, no. 9, 2013.
[20] B. Lantz, B. Heller, and N. McKeown, "A network in a laptop: rapid prototyping for software-defined networks," in Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks, 2010.
