Scalability and Resilience in Data Center Networks: Dynamic Flow Reroute as an Example
Adrian S-W Tam, Kang Xi, H. Jonathan Chao
Department of Electrical and Computer Engineering, Polytechnic Institute of New York University
adriantam@nyu.edu, kxi@poly.edu, chao@poly.edu

Abstract—The recent literature on data center networks often proposes the use of a centralized control server to manage resources or to provide coordination. These controllers are always a single omniscient device. While we see no problems in practice yet, as it works for small-scale networks, there is a scalability concern. This paper proposes the idea of devolved controllers, namely, a number of controllers each with only limited information about the network, but which together can replace an omniscient controller. This permits the controllers' workload to scale up. We use dynamic flow reroute as an example to study how we can build a network with devolved controllers, how to configure them, how they operate, and especially how they provide resilience. We show that these devolved controllers can remove the scalability concern and provide redundancy to each other; externally, they are as easy to use as a single omniscient controller. We describe how these controllers are prepared and used to prove that devolved controllers are a feasible idea, and provide a precursor to a new direction on the control aspect of networking.

I. INTRODUCTION

The recent literature on computer networking, data center and enterprise networks in particular, often uses centralized controllers for correct operation. For instance, Ethane [1] proposes a call set-up network architecture for an enterprise network to enforce the policy compliance of flows. A controller is used as the decision maker to determine whether a flow is allowed, and how it should be routed. Traffic engineering proposals often involve a centralized controller as well. In [2], a data center network built with OpenFlow [3] switches together with a traffic engineering
controller is proposed. The controller maintains a global view of the current network and updates routes periodically to optimize traffic engineering objectives. Similarly, c-through [4] uses a controller in an optical/electrical hybrid network to leverage the total throughput. The controller gathers traffic measurements in real time and deduces the best configuration of the circuit-switched optical network to supplement the packet-switched network. A controller that gathers network-wide statistics and reroutes flows for better load balancing is also mentioned in [5], [6], [7]. Besides load balancing, a controller is also used in PortLand [8] and VL2 [9] to provide address look-up services, a crucial part of the network operation, so that virtual machines in a data center network can be migrated seamlessly.

Coincidentally, these controllers are omniscient: they hold a complete picture of the network in order to function. Their redundancy and load balancing can easily be provided by replication, as suggested in many of the above proposals. Scalability, however, is an unsolved problem. Consider the case that a controller has to respond promptly but the response time is constrained by the problem size: a controller with the full details of the network cannot scale as the network scales.

Devolved controllers were first introduced in [10]: a set of controllers that function as an omniscient controller as a whole, but of which no individual is omniscient. They can thus replace the centralized controller. This paper is a sequel to [10], with a flow reroute example showing that devolved controllers not only scale, but also provide redundancy without controller replication. This is similar to distributed control, but with a key difference. Usually distributed control means individual agents working on their own in the hope that their collective intelligence covers all cases. Devolved control, however, is strategically configured to guarantee that. In a certain sense, devolved control is centralized at configuration time, but
distributed at run time.

The idea of devolved controllers is favorable to cases where real-time computation is needed, such as real-time optimization. Normally, solving those problems with a large data set is slow. Limiting the size of the data can therefore ensure the solution is obtained in a reasonably short time. For example, we use a controller to monitor a network and reroute flows whenever necessary. The rerouted path for a flow is determined dynamically so that it avoids congested spots. A single controller handling all the reroutes for the whole network is not only heavily loaded; computing the rerouted path for any flow can also be slow as the network is too large. If we use devolved controllers instead, each controller only possesses the knowledge of a partial topology, and monitors only part of the network. A flow to be rerouted can be handled by some, but not all, controllers. So we can reduce the load of each controller. Moreover, as only limited topology data is in a controller, the rerouted path computation¹ can be speedy.

¹The rerouted path is chosen from a reduced solution space, as the full topology is not available. This may not hurt the effectiveness of the reroute, especially when we mandate that only shortest paths be used. The shortest paths can be included in the topology stored in the controller at configuration time. See [10] for details.

Using devolved controllers to reroute flows, as a measure to reduce congestion and provide better utilization of the network, is not trivial. Besides how we use the controllers, configuration and coordination of the controllers are also important issues. Configuration defines the part of the network
that each controller handles, so that we can ensure their responsiveness by limiting the computation complexity and load intensity, as well as to guarantee the redundancy among them. Coordination is to maintain resilience among controllers: they keep track of each other's status so that they can carry out a failover procedure whenever a controller goes offline.

Note that while this paper is about network routing, we position it in the control aspect of data center networks. Particularly, our intent is to give a full picture of how devolved controllers can be used instead of an omniscient controller, so as to remove its scalability constraint and provide network resilience. In the following, we describe an architecture to provide these features. In section II, we describe the flow reroute problem in a data center network, which serves as an example of the use of controllers. We also introduce the concept of devolved controllers, and outline how the controllers are used. Section III addresses how to configure the controllers to meet our requirements. The configuration is done by a heuristic algorithm, as minimizing the coverage size of a controller is an NP-hard problem. The coordination between controllers to maintain resilience is described in section IV. Then we discuss some relevant issues in section V and conclude the paper.

II. FLOW REROUTE WITH DEVOLVED CONTROLLERS

A. Multipath Network

We assume a data center network, as in much of the literature [5], [9], [11], [12], is a multipath network in the sense that there is more than one possible path connecting the same source and destination. Therefore, the network resources can be better used by allowing multipath forwarding, such as ECMP-based Valiant load balancing. This load balancing is oblivious to flow sizes. If a big flow (large throughput) is moved to another path, the traffic may be more balanced and the network less congested. We let the edge routers detect big flows and report them to a controller. The edge routers perform this detection and
reporting periodically. The controller receiving this information is responsible for rerouting the flows in an optimal way so that congestion can be reduced. To help the optimization, the controller has to monitor the link utilization. This flow reroute model is consistent with that described in previous literature, such as [7]. But as we are using devolved controllers, not all controllers can reroute a particular flow; and a controller does not monitor every link in the network either.

B. Terminology

We model a network as a connected graph G = (V, E) where a node v ∈ V is a router. A flow is defined by a five-tuple as usual, and is sent from node s to node t, with s, t ∈ V the source and destination edge routers for that flow. The source-destination pair is denoted as an ordered pair (s, t) for the convenience of discussion. We assume there are k viable paths for a flow in (s, t). In this network, there are q controllers such that each of them manages a portion of the network, represented by a subgraph of G. We say a controller that manages G′ = (V′, E′) covers a node v ∈ V or a link e ∈ E if v ∈ V′ or e ∈ E′, respectively.

(s, t): source-destination pair, where s, t are edge routers
flow: a five-tuple flow of (s, t)
path: a path that connects two nodes
multipath: a set of paths, each of which connects the same pair
q: total number of controllers in the network
r: number of controllers covering a particular multipath
k: number of paths in a multipath
coverage: the partial topology that a controller manages
active controller: the controller responsible for rerouting a flow
primary controller: the default active controller in normal conditions
secondary controller: the controller in hot stand-by
τ: period of sending a heartbeat by a controller
T: threshold to determine that a controller has failed

TABLE I: Definition of terms and symbols used in this paper

Fig. 1: A network managed by three controllers

A controller can reroute a flow in (s, t) if the k paths are covered by this controller, so that it can
determine which path is the best in terms of congestion level at the moment. In order to provide redundancy, we configure r out of the q controllers to cover the multipath of any (s, t). TABLE I summarizes the terms used in this paper.

Fig. 1 shows an example of a network with q = 3 controllers. The part of the network that a controller covers is illustrated by a dotted ellipse in the figure (an actual partitioning of a network to controllers has a more complex shape). In this example, none of the controllers needs to monitor every spot in the network, but together they cover every node. For instance, the k = 2 paths of (s, t) illustrated in Fig. 1 are entirely within the jurisdiction of controller a. So a is the one responsible for rerouting the flow between s and t. The example in Fig. 1 has r = 1, as we do not have another controller that also covers the multipath of (s, t). By installing more controllers, we can avoid a controller managing an overly large region as the network grows.

C. Operation

The reroute of a flow begins with an edge router. The edge routers detect big flows originating from them and report the flows' information to the controllers. Each edge router contains a look-up table as in Fig. 2a. All the pairs with itself as the source edge router are listed in the table. When a big flow is detected, the edge router extracts its five-tuple identifier and, from that, deduces the source-destination pair. Then, all the controllers corresponding to this pair are notified of this flow's five-tuple. The notification is carried by a connectionless protocol such as UDP. The edge router verifies neither its delivery, nor whether the flow is actually rerouted. Note that, because there could be
more than one big flow, the notification sent to a controller may contain several five-tuple identifiers.

pair      controllers
(s, p)    a, b, c
(s, q)    e, a, b
(s, t)    a, c, d

(a)

pair        paths          controllers
(n1, n2)    p1, p2, p3     a, b, c
(n2, n1)    p4, p5, p6     b, a, c
(n2, n3)    p7, p8, p9     a, c, d

(b)

Fig. 2: (a) Look-up table in an edge router s, and (b) a configuration of devolved controllers

When the controllers are notified of a big flow X, the one that is active for that pair decides whether and how to reroute it. There is always a unique active controller for any pair. The reroute is done by installing a route entry for X in all intermediate routers in the network to pin a path. Note that even if this path is not pinned, the flow can still be delivered in the network by its default routing mechanism. This installed route entry is remembered by the controllers; it is removed later when the flow ends, or updated when a further reroute is performed.

Although there is only one controller active to reroute flows of a particular pair, the edge routers send the flow information to all related controllers. This is to simplify the edge router design so that it does not need to keep track of the status of the controllers. This is also why we suggest using UDP for the notifications. For simplicity, the edge router does not hold time-varying data. Therefore, as long as a big flow persists, the edge router samples it and reports to the controllers regularly. The controller then determines if that flow is still using an optimal path. That flow is rerouted again if the controller sees fit. Note that, in this way, route flapping may occur and it is the responsibility of the controllers to avoid flapping.

III. CONFIGURING DEVOLVED CONTROLLERS

As mentioned in section I, we forbid a controller to cover the whole network. It would be best if a controller covered as few links as possible. Configuring the devolved controllers means deriving a table like the one in Fig. 2b. The table relates a source-destination pair to a set of k viable paths
and r controllers. The paths connect the source edge router to the destination edge router. These paths are covered by each of the r controllers on the row. The links of these paths are therefore monitored by those controllers. A copy of this information is stored in the corresponding controllers. For example, all three rows in Fig. 2b are stored in controller a, since a appears in the third column of each. With this information, a controller knows whether it is responsible for rerouting a particular flow and on which path it can place the flow. The order of controllers in the third column indicates their priority in handling the pairs. More detail is given in section IV.

Our objective in the configuration is that we have a row in this table for every pair (s, t), associating it with k paths and r controllers, while at the same time minimizing the total number of links covered by each controller. The total number of links covered is defined as the number of unique links among all the paths. In the example of controller a in Fig. 2b, it is the number of unique links in p1, ..., p9.

Algorithm 1: Path-partition heuristic algorithm
Data: Network G = (V, E), q = number of controllers
 1  foreach s, t ∈ V in random order do
        /* Retrieve a multipath from s to t */
 2      M := k paths joining s to t;
        /* Allocate into controllers */
 3      for i := 1 to q do
 4          c_i := cost of adding multipath M to controller i
 5      end
 6      Q := {1, ..., q};
 7      for i := 1 to r do
 8          Allocate M to controller j = argmin_{j ∈ Q} c_j;
 9          Remove j from Q;
10          Remove other controllers from Q that violate the resilience constraints;
11      end
12  end

For |V| = n nodes in the network, the configuration table has n(n−1) rows, as there are that many pairs. Allocating these n(n−1) pairs to q controllers optimally is an NP-hard problem [10]. Algorithm 1 is a heuristic algorithm that approximates the optimal solution. It assumes that all the k paths of every (s, t) pair are available before the allocation of pairs to controllers. These paths can be a result of OSPF ECMP routes,
for example. The algorithm then finds the set of r controllers for every pair (s, t), in random order, according to a cost function. The cost function favors assigning a multipath to a controller that already monitors most of the links in that multipath, as well as balancing the number of links monitored by each controller. Details of these are addressed in [10]. Redundancy is provided by lines 7–11 in Algorithm 1: the r controllers to which the multipath is allocated are chosen one after another in order of ascending cost. Any combination of controllers that violates resilience constraints, such as those with shared risks, is removed by line 10. The output of this algorithm is a table like the one in Fig. 2b. The order of the r controllers (i.e., their priority) is determined afterwards.

As explained in [10], Algorithm 1, the path-partition algorithm, is suitable for an irregular network, as we prefer using shortest paths in a data center. For a regular topology such as a fat-tree, we can reduce the coverage per controller by using the partition-path algorithm, outlined in Algorithm 2. These two heuristic algorithms are similar in the sense that the allocation of a pair and its multipath is guided by a cost function. In Algorithm 2, however, the multipaths are computed after a subset of links E_i is allocated to each controller. As we have the freedom to select a multipath adapted to a controller, we can reduce the number of links a controller monitors. This algorithm is not suitable for a general topology, as we cannot guarantee a shortest path to be found for any pair. In a regular topology, however, the number of hops in a shortest path is known a priori. This makes the use of Algorithm 2 possible. TABLE II gives some empirical data.
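As a concrete illustration, the path-partition heuristic of Algorithm 1 can be sketched in a few lines of Python. This is a sketch under assumptions, not the authors' implementation: the exact cost function is deferred to [10], so the one below (the number of links a controller would newly have to monitor, plus a small load-balancing term) and the `violates` hook standing in for the resilience constraints of line 10 are our own placeholders.

```python
import random

def path_partition(pairs, multipaths, q, r, violates=lambda chosen, j: False):
    """Path-partition heuristic (Algorithm 1): allocate each
    source-destination pair's multipath to r of the q controllers.
    `multipaths[pair]` is a list of k paths, each a set of links.
    `violates(chosen, j)` is a placeholder for the resilience
    constraints (e.g. shared risks), which the paper leaves abstract."""
    coverage = [set() for _ in range(q)]    # links monitored per controller
    table = {}                              # pair -> list of r controller IDs
    order = list(pairs)
    random.shuffle(order)                   # pairs processed in random order
    for pair in order:
        links = set().union(*multipaths[pair])   # unique links of the multipath
        chosen = []
        candidates = set(range(q))
        for _ in range(r):
            # assumed cost: links the controller would newly monitor,
            # plus a small term balancing total links per controller
            cost = {j: len(links - coverage[j]) + 0.01 * len(coverage[j])
                    for j in candidates}
            j = min(cost, key=cost.get)      # argmin over remaining controllers
            coverage[j] |= links
            chosen.append(j)
            candidates.discard(j)
            candidates -= {c for c in candidates if violates(chosen, c)}
        table[pair] = chosen
    return table, coverage
```

Each pair thus receives r controllers chosen one after another in ascending cost order; their priority order can then be randomized afterwards, as section IV-B suggests.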
Algorithm 2: Partition-path heuristic algorithm
Data: Network G = (V, E), q = number of controllers
    /* Partition links to controllers preliminarily */
 1  foreach i := 1 to q do
 2      Prepare set of links E_i ⊆ E;
 3  end
    /* Enumerate multipaths and allocate into controllers */
 4  foreach s, t ∈ V in random order do
 5      foreach i := 1 to q do
 6          M_i := find k paths for (s, t) with priority to E_i;
 7          c_i := cost of adding multipath M_i to controller i
 8      end
 9      Q := {1, ..., q};
10      for i := 1 to r do
11          Allocate M_j to controller j = argmin_{j ∈ Q} c_j;
12          E_j := E_j ∪ {e : for all links e in M_j};
13          Remove j from Q;
14          Remove other controllers from Q that violate the resilience constraints;
15      end
16  end

TABLE II: Number of links covered per controller vs. redundancy r on an irregular network of 66 links

It shows that, for a fixed number of controllers, increasing the redundancy parameter r does not impact the coverage size of a controller much. In other words, redundancy is almost free when devolved controllers are used. One may refer to [10] for numerical evaluations of the simplified (r = 1) versions of the path-partition and partition-path algorithms, including their effectiveness in limiting the number of links monitored by each controller, and their scalability in terms of the number of parallel paths k and the number of controllers q.

IV. FAILURE AND RESTORATION OF CONTROLLERS

The controllers are ready to operate after configuration. The case is very simple for r = 1: there is only one controller for any particular pair (s, t), and it is the only one responsible for the reroute when a flow of this pair is reported by an edge router. Coordination between different controllers is needed when r > 1, however, so that 1) at any moment, only one controller is active for any pair; 2) when a controller fails, another controller promptly takes over its job with all the relevant data ready; 3) when a failed controller recovers, the backup controller surrenders its duty back with updated data. We devise a heartbeat
protocol for such coordination.

A. The Heartbeat Protocol

The heartbeat protocol lets a controller signal another about its health. Similar to the big-flow notifications in section II-C, we use UDP to deliver the heartbeats. An empty packet is sufficient to signal the health of a controller. A heartbeat packet carries a payload, however, when a backup controller becomes active.

Consider the example in Fig. 2b: controllers a, b, and c are responsible for the pair (n1, n2). Therefore, these three controllers send heartbeats to each other, regularly at frequency 1/τ. Since a controller's job is to reroute flows, and a reroute function stopped for a few seconds is acceptable, as the network without reroute can still operate sub-optimally, an infrequent heartbeat such as τ = 1 second should suffice. The heartbeat messages form a mesh between these three controllers. According to the third row of Fig. 2b, controllers a, c, and d are responsible for (n2, n3). Controllers a and c do not send two heartbeats to each other in one period, because if a controller is alive, it is ready to handle all the pairs for which it is responsible.

When one or more controllers have failed, a remaining controller takes over their duty as the failover mechanism. This controller still sends heartbeats to all other controllers, including the failed ones. Reusing the previous example, assume controllers a and b failed; then controller c is the only one responsible for the pair (n1, n2). The failure of a and b is confirmed by c by not receiving any more heartbeats from them. In the meantime, c keeps sending heartbeats to a and b because they may recover from failure at any time. The heartbeat sent by c, however, is not an empty packet but carries the payload {a, b} to signal that c believes these two controllers have failed and that their roles for (n1, n2) have therefore been taken over.

In summary, there are two kinds of heartbeat messages sent by a controller. An empty heartbeat signals that a controller is healthy and works in its default state. A
heartbeat with a list of controllers signals a failover due to the failure of those controllers. Heartbeat messages are sent only to peering controllers, i.e., those that cover the same source-destination pair. Failover heartbeats, similarly, are sent only to the group of controllers related to the failover, i.e., those that cover the same pair as the failed one(s). To other controllers, only normal heartbeats are sent.

B. Transient Behaviors

The priority of a controller with respect to a pair determines when it shall carry out the reroute. The configuration in Fig. 2b lists the controllers in descending priority for a pair. This priority can be arbitrary, such as putting the r controllers in randomized order after the configuration is generated. Among the r controllers for a pair, the highest-priority one is the primary controller, and the remaining r − 1 are secondary controllers. With respect to a source-destination pair, the controller that carries out the reroute is called the active one for that pair. Normally, the primary controller of the pair is the active controller. If it fails, the second-priority controller takes over. If it also fails, the third-priority one takes over, and so on. Thus, if controller c is active for (n1, n2) and sending heartbeats containing {a, b}, this implies controller c is of the third priority among a, b, and c.

A controller believes another controller has failed if its heartbeat does not arrive for a period T. Following the rationale of triple
duplicated ACKs in TCP, T = 3τ would be sufficient to differentiate between a mere packet loss and a failure. Once a controller finds that, with respect to a pair, it is the highest-priority controller available at the moment, it becomes active for that pair. It is inactive otherwise. The logic of how a controller handles heartbeats is depicted in Fig. 3. A primary controller always sends a normal heartbeat, but it is active only if it confirms that no other peering controller is sending a failover heartbeat (which would be asserting its failure). A controller that sends failover heartbeats shall switch back to normal heartbeats one cycle after confirming the recovery of the primary controller, i.e., once the primary controller's heartbeat arrives. Once the secondary controller stops sending failover heartbeats, these two controllers synchronize the data on the pair. The data to be synchronized corresponds to any reroutes made or revoked. After that, the primary controller is active for that pair and performs reroutes thereafter.

A secondary controller sends a failover heartbeat if and only if it finds that, for that pair, all the peering controllers with a higher priority have failed. Otherwise, a normal heartbeat is sent and this secondary controller is inactive for rerouting. Consider the case that a failover heartbeat is being sent: when a higher-priority controller recovers from a previous failure, a lower-priority controller may still be claiming to be the active controller. The lower-priority controller shall become inactive after a cycle, when the normal heartbeat from the higher-priority one arrives. Until this happens, the higher-priority controller shall not be active for reroute but keeps sending failover heartbeats (or normal heartbeats if it is the primary controller). Afterwards, any updated reroute data shall be synchronized as the final step of handing over the reroute duty from a lower-priority controller to a higher-priority one.

Fig. 3: Flow chart of the failover/restoration algorithm

C. Reroute Data

A controller maintains a register of all the reroutes it has made. This is important because, in our design, controllers are responsible for revoking all expired reroutes, whether because the flow ended, the reroute changed, or the reroute became unnecessary. The revocation is determined by the active controller with the help of the current data reported by the routers. These reroute data, although used by the active controller only, are stored in all controllers to prepare for a sudden failover. When a controller reroutes a flow, the details of this reroute are recorded and disseminated to all other peering controllers for this pair; similarly when the reroute is changed or cancelled. To help synchronize the reroute data across all peering controllers, the data carries a timestamp. It serves as a reference to check whether a data synchronization is necessary, which is especially useful when a controller recovers from failure.

D. Robustness Against Packet Loss

The algorithm in Fig. 3 is executed by each controller on every heartbeat cycle. This makes the operation among controllers robust against occasional loss of heartbeat packets. If a higher-priority controller works well, losing a lower-priority controller's heartbeat does not cause any problem, as that controller is inactive. Losing heartbeats of the active controller, however, makes a secondary controller active as a consequence of the failover procedure. To prevent frequent changes of the active controllers, we therefore set a threshold T, as mentioned above, to absorb delayed or lost packets. Even if an active controller's heartbeat is lost for longer than a period T, and another controller wrongly took over its duty, this scenario is corrected once the heartbeats arrive at that backup controller, since from the point of view of the backup
controller, it is a recovery. Note that the controllers can still function normally throughout. We prevent more than one controller from being active for a pair's reroute, thus avoiding potentially conflicting reroutes for the same flow. This is because we separate the reroute function from the heartbeat: even if a primary controller, for example, sends normal heartbeats, it may remain inactive for reroute until it confirms that it is safe, by ensuring that no other controller is claiming itself active for the same pair and by synchronizing any updated reroute data.

V. DISCUSSION

A. Protocol overhead

The proposed dynamic reroute solution involves four protocols, namely, the unidirectional reporting of large flows from edge routers to the controllers, the mutual exchange of heartbeats among peering controllers, the synchronization of reroute data among controllers, and the installation/removal of reroutes to/from the routers by the controllers. Besides the heartbeat, all other traffic mentioned above is sent reactively; e.g., edge routers report nothing if no big flow is found (determined by the actual bandwidth used). Their overhead is determined by how often a big flow appears in the network and how frequently a reroute is made or changed.
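The failure-detection and active-controller rules of section IV can be condensed into a small sketch. This is illustrative only, not the authors' implementation: the class and method names are ours, received heartbeats are represented by timestamps rather than UDP packets, and τ = 1 s with T = 3τ follows the values suggested in sections IV-A and IV-B.

```python
TAU = 1.0        # heartbeat period τ (seconds), as suggested in section IV-A
T = 3 * TAU      # failure threshold, following the triple-duplicated-ACK rationale

class PairFailover:
    """Active/inactive decision of one controller for one
    source-destination pair. `priority` lists the r controllers in
    descending priority, as in the third column of Fig. 2b."""
    def __init__(self, my_id, priority):
        self.my_id = my_id
        self.priority = priority
        # time each peer's heartbeat was last received (none seen yet)
        self.last_seen = {c: 0.0 for c in priority if c != my_id}

    def heartbeat(self, sender, now):
        # any heartbeat, empty or failover, proves the sender is alive
        self.last_seen[sender] = now

    def failed(self, c, now):
        # a peer is believed failed after silence longer than T
        return now - self.last_seen[c] > T

    def active(self, now):
        # active iff every higher-priority peer is believed failed
        for c in self.priority:
            if c == self.my_id:
                return True
            if not self.failed(c, now):
                return False
        return False

    def failover_payload(self, now):
        # IDs of higher-priority controllers whose duty we have taken over
        if not self.active(now):
            return []
        mine = self.priority.index(self.my_id)
        return [c for c in self.priority[:mine] if self.failed(c, now)]
```

An empty return from `failover_payload` corresponds to a normal heartbeat; a non-empty list is the failover payload, such as {a, b} in the example of section IV-A.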
The heartbeat traffic, however, is recurrent and periodic. In a network with q controllers, this heartbeat traffic is sent at a total rate of no more than q(q−1)/τ messages per unit time. As one can see, the shorter the heartbeat period τ, the higher the heartbeat traffic overhead, but this effectively shortens the time to detect a controller failure, or to correct a mistaken failover. Nevertheless, each heartbeat is short: a normal heartbeat is merely a protocol header, and a failover heartbeat contains at most q − 1 controller IDs. The actual overhead, compared to the total bandwidth in a data center network, is negligible.

The largest amount of data involved in a single communication would be the synchronization of reroute data. Reroute data involves the details of every rerouted flow, so that their reroutes can be revoked in the future. To reduce this overhead and speed up the synchronization process, a differential algorithm such as rsync [13] can be used so that only the changed parts are sent over the network.

B. Network failure

So far, our discussion assumes the network does not fail. In a large data center network, link or node failures are common. In fact, we can make use of the controllers to help mitigate network failures as well. When a link fails, the routers report the detected failure to the controllers. Then, the controllers compute the best reroute path to detour the failed link and install the reroute in the network for all traffic. Node failures can be handled similarly. The details of using controllers to handle network failures are part of future research.

A network failure is a potential problem for controller operation. The heartbeat protocol supports intermittent packet loss only. If a network failure occurs, the heartbeat between two controllers may be lost permanently. Imagine that the highest and second-highest priority controllers are disconnected from each other due to a network failure, but they are still reachable by the edge routers. Then they both assume they are the
active controller. Once a big flow is reported by the edge router, they may both install a reroute in some routers and cause a conflict or even a routing loop. This problem can be solved in many ways. We can avoid a routing loop by enforcing that different controllers yield the same rerouted path for a flow given the same input data. This means we have a strategy to break a tie when choosing between two equally good reroute paths. Then, even if two controllers install reroutes, the new route for the flow is consistent. Another way is to use a tunnel, such as an MPLS LSP, for the reroute. Only one tunnel can be used for a flow, even if many are installed, so a routing loop can be avoided.

C. Alternative ways of reroute operation

We assume that a reroute is done by installing into the routers a route that matches the flow exactly (i.e., the five-tuple). This is supported in many modern routers, and is an obvious way to carry out a reroute. This means of reroute becomes much easier in a network comprising OpenFlow routers [3]. A reroute can also be done by creating a dedicated path for a flow, such as an MPLS LSP. This is a widely used method of rerouting [14]. A benefit of using LSPs, besides the potential to avoid routing loops as mentioned above, is to allow the use of arbitrary paths instead of only shortest paths.

We can also avoid involving the routers in the reroute operations by making the hosts report big flows. Similar to the logic of [9], we can modify the OS of the hosts so that it detects and reports big flows. Routers forward packets using ECMP, and a reroute is done by instructing the host to change the packet header fields that affect the ECMP computation. This is a less flexible solution because we cannot make a flow use an arbitrary path. It is also not trivial to deduce how to change the packet header to make a flow use a particular path. However, this is what we can do if a controller cannot modify the routers.

VI. CONCLUSION

The objective of this paper is to prove the feasibility of using multiple small independent
controllers instead of a single centralized omniscient controller to manage resources, using dynamic flow reroute as an example. The main reason to avoid a single centralized controller is the scalability concern. Hence we forbid our controllers to hold the complete network topology information at run time. We also propose heuristic algorithms that aim at limiting the network topology information stored in the controllers, with redundancy planned in. A failover procedure assisted by a heartbeat protocol is also proposed, with details on how to hand over the duty from one controller to another in case of controller failure or restoration from a previous failure. This paper is a precursor to a new design direction on the use of controllers in computer networks such as data center networks or compute clouds, so that they can scale out horizontally, rather than just vertically.

REFERENCES

[1] M. Casado et al., "Ethane: Taking control of the enterprise," in Proc. SIGCOMM, 2007.
[2] T. Benson et al., "The case for fine-grained traffic engineering in data centers," in Proc. INM/WREN, 2010.
[3] N. McKeown et al., "OpenFlow: Enabling innovation in campus networks," SIGCOMM CCR, vol. 38, 2008.
[4] G. Wang et al., "c-Through: Part-time optics in data centers," in Proc. SIGCOMM, 2010.
[5] M. Al-Fares et al., "A scalable, commodity data center network architecture," in Proc. SIGCOMM, 2008.
[6] J. Mudigonda et al., "SPAIN: COTS data-center Ethernet for multipathing over arbitrary topologies," in Proc. NSDI, 2010.
[7] M. Al-Fares et al., "Hedera: Dynamic flow scheduling for data center networks," in Proc. NSDI, 2010.
[8] R. N. Mysore et al., "PortLand: A scalable fault-tolerant layer 2 data center network fabric," in Proc. SIGCOMM, 2009.
[9] A. Greenberg et al., "VL2: A scalable and flexible data center network," in Proc. SIGCOMM, 2009.
[10] A. S-W. Tam, K. Xi, and H. J. Chao, "Use of devolved controllers in data center networks," in Proc. INFOCOM WKSHPS, 2011.
[11] C. Guo et al., "DCell: A scalable and fault-tolerant network structure for data centers,"
SIGCOMM CCR, vol 38, 2008 [12], BCube: A high performance, server-centric network architecture for modular data centers, in Proc SIGCOMM, 2009 [13] A Tridgell and P Mackerras, The rsync algorithm, ANU Computer Science, Tech Rep TR-CS-96-05, 1996 [14] P Pan et al, Fast reroute extensions to RSVP-TE for LSP tunnels, IETF RFC 4090, 2005
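The heartbeat-assisted failover summarized above can be sketched in a few lines. This is a minimal illustration only, not the paper's actual protocol: the class and function names, the heartbeat interval, and the three-missed-beats threshold are all assumptions, and the sketch reduces "handing over the duty" to a designated backup absorbing the failed controller's region of the topology.

```python
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeats (assumed value)
FAILURE_THRESHOLD = 3      # missed intervals before a controller is declared dead (assumed)

class DevolvedController:
    """A controller that holds only its own limited region of the topology."""
    def __init__(self, name, region, backup=None):
        self.name = name
        self.region = set(region)   # switches/links this controller manages
        self.backup = backup        # peer planned as redundancy for this controller
        self.last_heartbeat = 0.0
        self.alive = True

    def heartbeat(self, now=None):
        """Record a heartbeat; `now` may be injected for testing."""
        self.last_heartbeat = now if now is not None else time.monotonic()

def detect_and_failover(controllers, now):
    """Declare silent controllers dead and hand their regions to their backups."""
    for c in controllers:
        if c.alive and now - c.last_heartbeat > FAILURE_THRESHOLD * HEARTBEAT_INTERVAL:
            c.alive = False
            if c.backup is not None:
                c.backup.region |= c.region   # backup takes over the dead peer's duty
    return [c for c in controllers if c.alive]
```

For example, if controller B stops sending heartbeats while A keeps beating, `detect_and_failover` marks B dead and merges B's region into its backup A, so the network is still fully covered by the surviving controllers.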