Cross-layer Cooperation to Boost Multipath TCP Performance in Cloud Networks
Matthieu Coudron, Stefano Secci, Guy Pujolle, Patrick Raad, Pascal Gallard
LIP6, UPMC, 4 place Jussieu 75005, Paris, France. [email protected]
Non Stop Systems (NSS), 27 rue de la Maison Rouge, Lognes, France. {praad, pgallard}@nss.fr

Abstract — Cloud networking imposes new requirements in terms of connection resiliency and throughput among virtual machines, hypervisors and users. A promising direction is to exploit multipath communications, yet existing protocols have such a limited scope that performance improvements are often unreachable. Generally, multipathing adds signaling overhead and, in certain conditions, may in fact decrease throughput due to packet arrival disorder. At the transport layer, the most promising protocol is Multipath TCP (MPTCP), a backward-compatible TCP extension that balances the load over several TCP subflows, ideally following different physical paths, to maximize connection throughput. Current implementations create a full mesh between the hosts' IPs, which can be suboptimal. For situations where at least one endpoint network is multihomed, we propose to enhance the MPTCP subflow creation mechanism so that it creates an adequate number of subflows with respect to the underlying path diversity offered by an IP-in-IP mapping protocol, the Locator/Identifier Separation Protocol (LISP). We defined and implemented a cross-layer cooperation module between MPTCP and LISP, leading to an improved version of MPTCP we name Augmented MPTCP (A-MPTCP). We evaluated A-MPTCP for a realistic Cloud access use-case scenario involving one multihomed data center. Results from a large-scale test bed show that A-MPTCP can halve the transfer times with the simple addition of one LISP-enabled MPTCP subflow, hence showing promising performance for Cloud communications between multihomed users and multihomed data centers.

I. 
INTRODUCTION

In internetworking research, multipath communication capabilities are becoming an increasingly popular subject as Wide Area Network (WAN) multipath protocol features and implementations become more and more tangible. Indeed, to reap the benefits of multipath forwarding, that is, in our perspective, to gain in resilience and throughput, one needs at some point to have different data flows following physically disjoint paths. From a physical network perspective, the competition between telecommunication operators has resulted in the construction of parallel physical networks up to the home, be they wired (xDSL, optical fiber) or wireless (3G, 4G, etc.). At the same time, end-user laptops and cellphones got equipped with concurrent interfaces such as Wi-Fi, Bluetooth and 4G. On the server side, data centers tend to connect to the Internet through several ISPs to improve their resiliency, i.e., data centers are increasingly multihomed. In the end, even though networks may well provide physically disjoint paths between servers and their end-users, both ends still need to be able to use these different paths concurrently to improve performance. The exploitation by the transport layer of this path diversity, available end-to-end at the IP layer, is actually not easily attainable. What is needed to use these paths simultaneously is an adequate protocol stack, flexible and compatible enough with the existing environment, able to stitch transport-layer multipath capabilities to different IP-layer paths. A significant amount of work has already gone into devising the Stream Control Transmission Protocol (SCTP) [1] as a transport-layer multipath protocol. Yet its later equivalent, Multipath TCP (MPTCP) [2], is more backward compatible and hence more likely to be largely deployed (there are already many MPTCP servers connected to the Internet). 
Indeed, MPTCP is compatible with TCP both at the header and at the congestion mechanism levels; more precisely: if one of the endhosts is not MPTCP compliant, or if middleboxes prevent MPTCP use, then the connection falls back to legacy TCP; an MPTCP connection can coexist with other TCP connections, i.e., the MPTCP connection does not get more capacity than the TCP connections. The congestion mechanism objectives set in the specifications also state that MPTCP must get at least as much capacity as its TCP equivalent would get. Thus, the congestion mechanism and the subflow management system appear tightly intertwined; when to create new subflows or when to delete them become critical questions. The MPTCP stack has too poor a knowledge of the underlying network to take efficient decisions, but lower layers may share useful knowledge to support MPTCP in an efficient subflow selection. For instance, creating several subflows for a data center communication with a layer-2 bottleneck, e.g., due to a shared spanning tree segment, might prove counterproductive in terms of processing overhead versus increase in throughput; similarly, an Internet Cloud-access multipath communication with a server in a data center reachable through a single external path (single provider) could offer no performance gain. The purpose of our study concerns the establishment of additional subflows between MPTCP servers and users in a Cloud network where, at least on one side, path diversity is available at the IP layer but not usable with native protocols. More precisely, we propose a specific cross-layer signaling to allow an MPTCP endpoint to profit from the path diversity offered by a Locator/Identifier Separation Protocol (LISP) [4] network.
Fig. 1. Simplified representation of MPTCP signaling. Source: [5]

Fig. 2. LISP communications example

LISP can give hints to MPTCP about the WAN path diversity between two MPTCP endpoints. By sharing such information among endpoints, we influence the number of subflows to create. Our proposition leads to an improved version of MPTCP we name Augmented MPTCP (A-MPTCP). A preliminary version of this work was presented as a poster at IEEE ICNP 2013 [3]. The paper is organized as follows. In Section II we present the MPTCP and LISP protocols. Our cross-layer cooperation proposal is described in Section III. We then present the experimentation framework and the experimental results in Section IV. Section V concludes the paper, drawing future work in the area.

II. PRESENTATION OF MPTCP AND LISP

We describe in this section the MPTCP and LISP protocols, highlighting the basic features our proposal relies on.

A. Multipath TCP

As already mentioned in the introduction, MPTCP is a TCP extension enabling end-to-end multipath communications, with an emphasis on backward compatibility, leveraging multiple connection subflows concurrently used for connection data forwarding. Without entering excessively into technical details, the MPTCP signaling relies on the use of one TCP option in the three-way handshake, and of another TCP option during the connection to open and close subflows as a function of connection, subflow and network states, as depicted in Figure 1. As explained in [5], many middleboxes nowadays (e.g., TCP optimizers, firewalls, PAT/NAT devices) inspect packets either to modify their content (e.g., change sequence numbers) or to drop them (e.g., upon detecting unknown protocols), and can thus hinder the deployment of TCP extensions [2]. As a consequence, MPTCP refers to IP addresses with an identifier rather than by the address itself, in order to prevent the confusion induced by address translators (PAT/NAT). 
If any problem of the kind is detected by MPTCP, it falls back to legacy TCP, as is the case if the remote endhost is not MPTCP compliant. Once an MPTCP connection is established, endhosts can advertise their IPs and add or remove MPTCP subflows at any time. These subflows, which we could define as child TCP connections of a more comprehensive parent MPTCP connection, can be used to achieve greater throughput or resiliency (indeed, with the backup flag, MPTCP can create a subflow used only in case other subflows stop transmitting). It is worth noting that a host can create/advertise subflows with the same IP address but a different port number. The main challenge of such a protocol is the congestion control mechanism. It should not be more aggressive than legacy TCP, but at the same time it should use the unused capacity; it should balance the load over different paths, but without causing so much packet disordering that TCP buffering cannot reorder the segments. Adequate path discovery is part of the solution, and that is where LISP can help.

B. Locator/Identifier Separation Protocol (LISP)

IP addresses today assume two functions, localization and identification of their owner, which induces a few problems, one of which is scalability. IP addresses need to be distributed according to the network topology, which conflicts with ease of use. Provider-independent addresses, as well as the Internet growth, tend to inflate global Border Gateway Protocol (BGP) forwarding information bases, thus slowing lookups and increasing device costs. The Locator/Identifier Separation Protocol (LISP) [4] splits the localization and identification functions into two IP addresses. In this way, only the localization IP addresses, named Routing LOCators (RLOCs), depend on the topology; RLOCs typically belong to border routers of the endpoint network. The IP address used to identify an endpoint is named Endpoint Identifier (EID). 
LISP is a network-level solution, i.e., there is no need to change the endhost configuration in a LISP network. Upon reception of a packet from the local network destined to an external EID, the border router acts as an Ingress Tunnel Router (ITR): it retrieves the EID-to-RLOC mapping from a mapping system (similar to DNS), then it prepends to the packet a LISP header and an outer IP header with the destination RLOC as destination IP address. As the outer packet is a traditional IP packet, it can be routed on the legacy Internet. One should pay attention to the Maximum Transmission Unit (MTU) though, since the LISP encapsulation adds a 36-byte overhead in IPv4 (56 bytes with IPv6), i.e., 8 bytes for
the LISP header, 8 bytes for the UDP header and 20 (IPv4) or 40 (IPv6) bytes for the outer IP header. The packet should at some point reach its destination RLOC acting as an Egress Tunnel Router (ETR), which decapsulates the prepended LISP header and forwards the inner packet to the destination EID. Consider the example in Figure 2: the traffic sent to the host is encapsulated by the source's ITR toward one of the two destination's RLOCs. The RLOC with the best (lowest) priority metric is selected; at reception it acts as ETR and decapsulates the packet before sending it to the destination. On the way back, RLOC4 queries a mapping system and gets two RLOCs with equal priorities, hence it performs load balancing as suggested by the weight metric (RLOC1 is selected for the example's packet). In order to guarantee EID reachability, LISP uses a mapping system that includes a Map Resolver (MR) and a Map Server (MS). Typically merged into a single MS/MR node, as depicted in Figure 2, a Map Resolver holds a mapping database, accepts MAP-REQUESTs from xTRs and handles EID-to-RLOC lookups. A Map Server receives MAP-REGISTERs from ITRs and registers EID-to-RLOC mappings in the mapping database. In practice, many MRs are geographically distributed and relay MAP-REQUEST messages using a specific protocol called the LISP Delegated Database Tree (LISP-DDT) protocol. This is the case of the LISP Beta Network testbed. As a backward compatibility feature, if a source (or destination) site is not yet LISP compliant, the traffic may get encapsulated (or decapsulated) by a Proxy ITR (or ETR). Furthermore, the usage of RLOC priorities and weights in the mapping system allows inbound traffic engineering, suggesting a best RLOC or an explicit load balancing.

Fig. 3. Example of a multihomed data center and associated MPTCP subflows
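As a quick sanity check, the encapsulation overhead figures above can be reproduced with a minimal sketch (the helper names are ours, not part of any LISP implementation):

```python
# Hypothetical helpers illustrating the LISP encapsulation overhead
# quoted above: outer IP header + shim UDP header + LISP header.

OUTER_IP_HDR = {"IPv4": 20, "IPv6": 40}  # outer IP header size, bytes
UDP_HDR = 8    # shim UDP header, bytes
LISP_HDR = 8   # LISP header, bytes

def lisp_overhead(ip_version: str) -> int:
    """Total bytes prepended by a LISP ITR."""
    return OUTER_IP_HDR[ip_version] + UDP_HDR + LISP_HDR

def inner_mtu(link_mtu: int, ip_version: str = "IPv4") -> int:
    """Largest inner packet that fits without fragmentation."""
    return link_mtu - lisp_overhead(ip_version)

print(lisp_overhead("IPv4"))    # 36
print(lisp_overhead("IPv6"))    # 56
print(inner_mtu(1500, "IPv4"))  # 1464
```

For instance, on a standard 1500-byte Ethernet link, the inner packet must stay below 1464 bytes (IPv4 encapsulation) to avoid fragmentation.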
In short, LISP routers are border nodes that advertise their RLOCs so that a remote LISP site knows it needs to tunnel packets to those routers in order to reach hosts (i.e., EIDs) in that remote site. LISP allows IP mobility (i.e., a machine keeps its EID but updates its RLOCs, for both mobile users and mobile virtual machines [6]), and also gives hints on whether a site is multihomed, i.e., whether the site is reachable via different providers. If the RLOCs of a site belong to different prefixes, they could belong to different providers, and the site is said to be multihomed. In this case, it would be interesting to have additional MPTCP subflows, whose paths are split at the LISP site border, since they could follow different Internet paths, if not end-to-end, at least along a segment of the Internet path.

III. PROPOSED CROSS-LAYER MPTCP-LISP COOPERATION

The current MPTCP path discovery does not explicitly limit the number of subflows to create, so current implementations create by default a mesh of subflows between the two hosts' IPs. Most of the time, the more subflows, the higher the connection throughput, under appropriate congestion control (note that there are also cases in which this default mechanism would prevent MPTCP from increasing the throughput, and cases where fewer subflows could provide the same gain while using fewer network resources). We target the specific case where more subflows could be created if intermediate multipath forwarding nodes (at the border between the local site and the WAN) can be used to split subflows over different WAN paths, under the hypothesis that the connection throughput is limited by WAN capacity rather than by LAN capacity. In the following, we describe how the MPTCP path discovery can be augmented in this sense in a LISP network. We then present the required signaling between MPTCP and LISP network elements, and the possible methods to perform the required multipath subflow forwarding. 
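The multihoming hint mentioned above (RLOCs falling in different prefixes likely belong to different providers) could be sketched as follows. This is a hypothetical heuristic of ours, not the paper's code, and the /16 prefix length is an arbitrary assumption:

```python
# Hypothetical multihoming heuristic: if a site's RLOCs span more
# than one prefix, they may belong to different providers.
import ipaddress

def looks_multihomed(rlocs, prefix_len=16):
    """True if the RLOCs span more than one /prefix_len network."""
    nets = {
        ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
        for ip in rlocs
    }
    return len(nets) > 1

print(looks_multihomed(["192.0.2.1", "198.51.100.7"]))  # True
print(looks_multihomed(["192.0.2.1", "192.0.2.9"]))     # False
```

In practice, only BGP data (origin AS per prefix) would tell providers apart reliably; prefix comparison is merely a cheap approximation.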
A. Augmented MPTCP path discovery

The MPTCP specification (see the path management section in [2]) states that the path discovery mechanism should remain modular, so that it can be changed without significant changes to the other functional components. We leverage this specific part of the MPTCP protocol to define an augmented subflow discovery module taking advantage of LISP capabilities, so as to augment MPTCP performance while preserving endhost resources. Figure 3 presents a Cloud access situation where MPTCP could perform better if its path discovery module were informed of the LISP path diversity and forwarding capability when creating subflows. In the example situation, there is one IP on the client host and one IP on the server; as such, the legacy MPTCP mechanism creates only one subflow. Under the assumption that connection throughput is commonly limited by WAN capacity rather than by LAN capacity, limiting MPTCP to a single subflow prevents it from benefiting from the WAN
path diversity and the likely greater throughput achievable if forms of multipath forwarding at intermediate nodes exist. A LISP network offers the signaling capabilities to retrieve path diversity information, and the switching mechanisms to ensure multipath forwarding. In the example of Fig. 3, this is possible by establishing two subflows instead of one, assuming each subflow is forwarded to a different IP transit path (guaranteed as explained next) thanks to the LISP-capable border router. It is worth highlighting that, as per the specifications (and as implemented in the MPTCP Linux implementation [9]), different MPTCP subflows can share the same source IP provided that they use different TCP ports. This subflow identification mode should be used to create the additional LISP-enabled subflows. More generally than in the Fig. 3 example, the number of MPTCP subflows can be significantly increased thanks to LISP capabilities when the endpoints dispose of multiple interfaces. We assume communications between hosts are strongly asymmetric, the largest volume flowing from server to client, so that the client-to-server flow essentially consists of acknowledgments. Let l_1 and l_2 be the number of interfaces of the server endpoint (or of the hosting hypervisor in the case of a VM server) and of the client endpoint, respectively. A LISP site can be served by one or many LISP routers, each disposing of one or many RLOCs. Let L^r_1 and L^r_2 be the number of locators of the r-th LISP border router at site 1 (server side) and site 2 (client side), respectively. Therefore, the maximum number of subflows that can be opened by legacy MPTCP is l_1 l_2. Following the design choice to route only one subflow over each RLOC-to-RLOC inter-site path, to avoid bottlenecks and other management issues in the WAN segment, the maximum number of subflows that could be opened thanks to LISP multipath forwarding is (Σ_r L^r_1)(Σ_r L^r_2). 
We can then distinguish two cases:

1) If the LISP site network allows full reachability between endpoints and the corresponding ITRs, and an endpoint's traffic reaches any corresponding ITR in a non-deterministic way, then the maximum number of subflows that shall be opened is:

N_a = max{ l_1 l_2 ; (Σ_r L^r_1)(Σ_r L^r_2) }.   (1)

The LISP-based augmentation would likely be effective only if the second term is higher than the first. The non-deterministic interconnection between an endpoint and its ITRs can be due, for instance, to adaptive L1/L2/L3 traffic engineering changing routes and egress points from time to time even for a same application flow, or to load balancing such as equal-cost multipath L2/L3 routing, so that different subflows may exit via different ITRs. It is worth highlighting that (1) is only a suggested upper bound; when the right term is the maximum, in such non-deterministic situations there is no guarantee that the N_a WAN paths will all be used, especially if l_1 < Σ_r L^r_1 or l_2 < Σ_r L^r_2.

2) If each of the endpoint's interfaces can have its traffic routed via one single RLOC, then we can profit much more from the LISP path diversity. The best situation is when l_1 ≥ Σ_r L^r_1 and l_2 ≥ Σ_r L^r_2. That is, if the number of interfaces is at least equal to the number of RLOCs, then the desired feature is to explicitly link each interface to the right ITR. Such a setting is the de facto setting when the user is a LISP mobile node and MPTCP-capable; on the Cloud server side, it is technically easy to create a number of virtual interfaces and bind them to different ITRs via virtual LAN mechanisms and the like. In such configurations, the maximum number of subflows that shall be opened is:

N_b = (Σ_r L^r_1)(Σ_r L^r_2).   (2)

The situations described in the previous points can hold only at one site, or even only partially within each site; (1) and (2) are indeed upper bounds. 
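The two bounds can be sketched numerically as follows (a sketch of ours; `rlocs1` and `rlocs2` list the per-router RLOC counts L^r_1 and L^r_2 at each site):

```python
# Sketch of the subflow-count bounds (1) and (2); function names are ours.

def max_subflows_nondeterministic(l1, l2, rlocs1, rlocs2):
    """Bound (1): N_a = max{ l1*l2 ; (sum of site-1 RLOCs)*(sum of site-2 RLOCs) }."""
    return max(l1 * l2, sum(rlocs1) * sum(rlocs2))

def max_subflows_bound(rlocs1, rlocs2):
    """Bound (2): N_b = (sum of site-1 RLOCs)*(sum of site-2 RLOCs)."""
    return sum(rlocs1) * sum(rlocs2)

# Example: server site with two border routers holding 2 and 1 RLOCs,
# single-interface hosts, client site with one router holding 2 RLOCs:
print(max_subflows_nondeterministic(1, 1, [2, 1], [2]))  # 6
print(max_subflows_bound([2, 1], [2]))                   # 6
```

In the Fig. 3 scenario (one interface per host, two server-side RLOCs, one client-side RLOC), both bounds give 2 subflows instead of the single legacy subflow.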
Nevertheless, creating a number of subflows higher than N_a or N_b does not hurt, at the expense of a higher processing and signaling burden on the network, with a probably limited and in any case non-deterministic gain. Therefore, (1) or (2) can be used to compute the number of subflows for the MPTCP-LISP augmentation, depending on the situation, upon appropriate customization of the path discovery module settings.

B. Signaling requirements and implementation aspects

From a mere signaling perspective, the requirement is therefore that the MPTCP endpoint be able to query the LISP mapping system to get the RLOCs of the other endpoint (the number of interfaces being already transported via MPTCP options). Standard LISP control-plane messages can natively assume this task (i.e., MAP-REQUEST and MAP-REPLY messages). Once this information is obtained, the number of subflows to be opened can be computed, and the MPTCP endpoint can either advertise these possible subflows to the other endpoint or initiate the required subflows following a procedure such as the one we propose hereafter. Let us describe in more detail the different steps, depicted in Figure 4, needed by the MPTCP host to retrieve the adequate number of local and remote RLOCs, allowing it to compute the number of subflows to create. In our implementation, these communications are managed by a specific module at the kernel level. In the case of servers hosted in VMs, the MPTCP logic and our path discovery module could be implemented via an MPTCP proxy (see [8]) at the hypervisor level. In the following, we also give the necessary hints on implementation aspects related to each step.

1) Endpoint C (Client) first establishes an MPTCP connection with endpoint S (Server). Once endpoint S is assessed as MPTCP compliant, the path discovery process can start. 
2) The kernel of endpoint C calls a function of the previously loaded MPTCP kernel module, with as parameters the current MPTCP connection identifier and the EID whose associated RLOCs we want to find.
Fig. 4. Signaling process chronology

3) Upon reception, the kernel module translates the call into a Netlink message and sends it to the MPTCP Netlink multicast group (Netlink is the networking protocol used by the Linux kernel to communicate with user space and vice versa).

4) A specific daemon in user space is registered with the MPTCP Netlink group. Upon reception of the Netlink message, it sends a MAP-REQUEST to the LISP router C, i.e., the daemon asks for the RLOCs responsible for the EID (i.e., the IP of server S). Note that normally one should send a MAP-REQUEST to a Map Resolver, which in existing vendor implementations can be co-located in a LISP router; in our implementation, we modified an open source LISP router for this purpose. Also note that an ad hoc DC network controller could do this job too.

5) The LISP router C queries its configured Map Resolver if the answer is not in its cache. It then sends to the user space daemon a MAP-REPLY listing the RLOCs responsible for the requested EID, thus conveying the number of RLOCs.

6) Upon reception of the MAP-REPLY, the user space daemon forwards the answer via Netlink to the kernel module.

7) Finally, the module relays the information to the kernel, triggering the creation of new TCP subflows.

It is worth mentioning that other strategies could be adopted depending on the level of control we have on the different nodes. In case only the DC side is under control, the server could advertise the same IP until the client creates subflows with adequate source ports. Or it could accept enough connections to ensure that some of them will go through different paths; the MPTCP congestion mechanism would then send data only on the best subflows, which would likely follow different paths. Furthermore, it is worth noting that the described process is expected to be efficient only for long-lived flows. 
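Steps 2 to 7 can be condensed into a stubbed sketch (hypothetical names of ours; the real implementation exchanges Netlink messages with a kernel module and MAP-REQUEST/MAP-REPLY messages with the LISP router):

```python
# Simplified, stubbed sketch of the signaling chronology above.
# `query_map_resolver` stands in for the MAP-REQUEST/MAP-REPLY exchange.

def resolve_rlocs(eid, mapping_cache, query_map_resolver):
    """Return the RLOC list for `eid`, querying the resolver on a cache miss
    (step 5: the router first checks its mapping cache)."""
    if eid not in mapping_cache:
        mapping_cache[eid] = query_map_resolver(eid)
    return mapping_cache[eid]

def on_new_mptcp_connection(conn_id, remote_eid, cache, query):
    """Steps 2-7: once a connection `conn_id` is established, look up the
    remote EID's RLOCs and return how many subflows the kernel should open."""
    rlocs = resolve_rlocs(remote_eid, cache, query)  # steps 3-6
    return len(rlocs)                                # step 7: open subflows

# Usage with a fake resolver standing in for the LISP mapping system:
fake_resolver = {"153.16.1.1": ["rloc_a", "rloc_b"]}.get
cache = {}
print(on_new_mptcp_connection(42, "153.16.1.1", cache, fake_resolver))  # 2
```

The cache in `resolve_rlocs` mirrors the caching suggestion made below for alleviating signaling traffic: a second connection to the same EID never re-queries the resolver.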
Indeed, on the one hand, short flows may finish before the procedure completes; on the other hand, the different requests sent from the hosts to the routers consume bandwidth and add an avoidable load on the router. Caching, in the source host, the associations between endpoint identifiers and the number of subflows to create could alleviate the signaling traffic. To avoid applying our proposed cross-layer interaction to every flow, a system that triggers the procedure only for long-lived flows would be very reasonable, for example based on a whitelist or on a dynamic recognition system such as the one described in [24]. In the case of servers hosted in a VM, this could be implemented at the hypervisor level.

C. LISP multipath forwarding requirements

In order to ensure that subflows indeed follow different WAN paths, it is possible to implement both stateful and stateless load balancing mechanisms. A possible stateful solution would be to resort to the LISP Traffic Engineering (LISP-TE) feature described in [10]. This feature allows enforcing an Explicit Locator Path via the LISP control plane, i.e., with little or no impact on data-plane packet crafting. For a given EID, the mapping system can build a LISP overlay using intermediate LISP routers between the ITR and the ETR, acting as re-encapsulating tunnel routers (RTRs), which decapsulate the incoming LISP packet and re-encapsulate it toward the next RTR (or the ETR) in an xTR path. In the context of all subflows being addressed to the same destination EID, the usage of options in the packet header (e.g., using the LISP Canonical Address Format, LCAF, features [11]) can allow concurrently routing multiple subflows via different xTR paths. It is worth mentioning
that LISP-TE extensions are not yet implemented in vendor routers (e.g., Cisco), nor in the control plane of open source routers. However, in the case LISP is implemented at the hypervisor level, hence without touching the source host (VM), the LISP-TE encapsulation with LCAF options can be defined as a switching rule in virtual switches such as Open vSwitch, knowing that Open vSwitch has already implemented basic LISP encapsulation as a switching rule for a few months now.

A stateless solution could be to carefully choose IP or TCP header fields so that IP packets follow a foreseen IP path. The computation of these fields (for instance the TTL value in IP packets or the source port in TCP packets) requires knowledge of both the topology and the load balancing algorithms running in the nodes, in order to correctly choose the path. Nodes usually use a hash function over IP and some TCP fields to compute a unique flow identifier that decides the egress interface. Hash functions, by construction, make it hard to foresee the egress interface of a packet. On the contrary, invertible functions can prove helpful, as explained in [12]. In our implementation, detailed hereafter, we opted for the second option as the first is not yet available in any LISP implementation to date. At the last step of the previous subsection, the subflows' source ports are chosen so that their port numbers modulo the number of disjoint paths are all different. The LISP router can then deterministically route each subflow to a specific path. For instance, in the case of 2 paths, we need one subflow whose (source port + destination port) % 2 equals 0 and a second subflow whose (source port + destination port) % 2 equals 1. This mechanism could be replaced with a more scalable solution such as those described in [12], yet it was easier to implement. The load balancing is fully effective when installed on both the remote and local sites (as in our experiment). 
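The port-selection rule above can be sketched as follows (a simplified illustration of ours, not the actual patched kernel code; the ephemeral base port is an arbitrary assumption):

```python
# Sketch of the stateless rule described above: pick one source port per
# path so that (source port + destination port) % n_paths covers every
# residue, letting the LISP router route each subflow deterministically.

def pick_source_ports(dst_port, n_paths, base=49152):
    """Return one ephemeral source port per path such that
    (src + dst_port) % n_paths hits each residue exactly once."""
    ports = []
    candidate = base
    needed = set(range(n_paths))
    while needed:
        residue = (candidate + dst_port) % n_paths
        if residue in needed:
            ports.append(candidate)
            needed.remove(residue)
        candidate += 1
    return ports

ports = pick_source_ports(dst_port=80, n_paths=2)
print(ports)                                # [49152, 49153]
print(sorted((p + 80) % 2 for p in ports))  # [0, 1]
```

With consecutive candidate ports the residues simply cycle, so the first n_paths candidates already cover all paths; a real implementation would also have to skip ports already in use.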
Still, in case communications are asymmetric, it can perform well even if installed only on the side sending the data (in this case the server side). The stateless approach is not limited to L3 routing; it can also apply to L2 routing protocols disposing of additional fields (e.g., the hop count), such as the Transparent Interconnection of Lots of Links (TRILL) [14] and Shortest Path Bridging (SPB) [15] protocols. In DC-controlled environments, for packet-based intra-DC communications, the computation of the different fields could be left to advanced layer-2 fabrics allowing DC traffic engineering, such as TRILL, SPB or OpenFlow. As for the extra-DC segment, stateless solutions do not allow enough control to choose paths. The LISP-TE features could thus prove helpful since they explicitly encode some LISP nodes to go through, but they require a sufficiently wide operational LISP WAN to prevent packets from making a prejudicial detour because they have to go through LISP routers.

IV. EXPERIMENTAL RESULTS

In this section, we report and discuss the results obtained by performing several MPTCP communications over the experimental test bed. First, we present the test bed setting and the developed node features we publish as open source code; then we assess the achievable gain with our Augmented MPTCP (A-MPTCP) solution. Finally, we qualify the data-plane overhead of the cross-layer interaction.

A. Network test bed

Let us illustrate the experimental test bed we used for the experimentation of our solution, displayed in Fig. 5. It implements our basic reference augmented TCP scenario described in Fig. 3. We used an MPTCP-capable virtual machine hosted in the Paris-Est DC of the French NU@GE project [20], with two Internet Service Providers, Neo Telecom (AS 8218) and Alphalink (AS 25540). On the other side, we use a single-homed and MPTCP-capable Cloud user. 
The internetworking between the VM and the user is such that two disjoint Internet paths are used down to the user's ISP, and such that the two intra-DC paths between the VM's hypervisor and the DC border LISP router are also disjoint. In such a configuration, the highest improvements can be reached when the Cloud user's provider access and transit links are not congested. The test bed scenario can be considered quite representative of real cases, where typically Cloud users are not multihomed and the DC has a few ISPs. Moreover, by targeting the most basic configuration, with only two subflows, we can more precisely demonstrate the benefit of our LISP-MPTCP cross-layer cooperation solution.

B. Open Source Nodes

In terms of open source software, we used the open source MPTCP Linux implementation [9] and the LISPmob [19] implementation (preferred over the faster BSD OpenLISP router [22], [23] because it is more easily customizable). We then applied the necessary modifications to these open source nodes, and published the patches in an open source repository [25]. More precisely, we modified the MPTCP kernel to add our path discovery feature, which retrieves the number of local and remote RLOCs for each destination it is talking to, as previously described; and the LISPmob router so that: (i) it also acts as a Map Resolver; (ii) it balances the two subflows deterministically between its two RLOCs. For (ii), instead of resorting to a hash of the tuple (source IP, destination IP, source port, destination port, Time To Live), we replaced the load balancing mechanism by a single modulo (the number of remote RLOCs) on the TCP source port, and forced MPTCP to start TCP subflows with certain source port numbers. To ease development and to decorrelate the path discovery mechanism from the other MPTCP mechanisms, as stated in [13], we minimized the modifications to the kernel and instead created a kernel module called by the kernel each time MPTCP establishes a new connection. We also integrated
Fig. 5. Cloud network test bed scenario

an existing user space program named LIG (LISP Internet Groper [16], [17]) to request EID-to-RLOC mappings (the equivalent, for LISP networks, of the Domain Information Groper, DIG), using MAP-REQUEST and MAP-REPLY control-plane messages. The LISP nodes were connected to the LISP Beta Network test bed [18]. In order to retrieve the list of RLOCs responsible for an EID (and implicitly their number), we created a user space Python daemon listening to the requests transmitted by the kernel module, which then interacts with the LIG program. In the following, we do not differentiate between the two user space programs, since this is just a matter of implementation.

C. Transfer times

In order to demonstrate the previously described architecture, we worked on the same topology as in Figure 3. A single-homed client (with one link to AS3215) downloads 30 files from a single-homed server (one link to a LISP software router) in a dual-homed data center (both links active, one towards AS25540, the other towards AS6453). Each transferred file is set bigger than the previous one by an increment of 256 KB, to assess the performance for different volumes. We record 20 transfer completion times for each file, and plot maximum-minimum error bars in the following. We conduct the experimentations using four different configurations:

1) legacy TCP: no cooperation with LISP, and a single TCP flow.
2) MPTCP: no cooperation with LISP, and a single MPTCP subflow.
3) A-MPTCP: cross-layer cooperation, creating as many subflows as the product of the remote and local RLOC counts, i.e., 2 subflows.
4) A-MPTCP overridden: we manually override the daemon result to create 3 subflows instead of 2.

One can reasonably expect cases 1 and 2 to be very similar in the provided configuration, since MPTCP should use one flow only. 
During our tests, the server upload speed could fill the client download link (8 Mbit/s, corresponding to RLOC 3 in Figure 5) with a single TCP connection. In order to exhibit an increase in throughput via the use of several subflows, we limited the throughput of RLOC 1 and RLOC 2 via the Linux QoS utilities so that each RLOC could send no more than 4 Mbit/s. In Figure 6, we see that unassisted MPTCP (i.e., MPTCP without LISP support) and TCP transfer times are quite close, as expected, MPTCP being a little slower than TCP: the primary cause is the additional 20-byte overhead per packet introduced by MPTCP. More importantly, we can see that our A-MPTCP solution combining MPTCP and LISP performs significantly better. For almost all file sizes, we get a throughput gain close to 90%, i.e., with the additional LISP-enabled subflow we can nearly halve the transfer time. This clearly shows the important benefit we can reach with our Augmented MPTCP-LISP cross-layer cooperation module. We also get an interesting result by forcing the creation of three MPTCP subflows even though there are only two RLOCs at the server side. We notice that the throughput is on average worse than with the suggested A-MPTCP behavior of two subflows, but it sometimes does perform better than with two subflows only. The decreased throughput makes sense, as having three subflows going out of two RLOCs means that at least two of the subflows compete for bandwidth in some part of the WAN. The occasional improvement is likely due to further load balancing occurring in the WAN (on the outer header) and to transient bottleneck-free subpaths, hence not as deterministic as in the two-subflow case. It is worth noting that the Linux QoS utilities used to cap the bandwidth could occasionally induce latency variations (the RTT changing between 60 ms and 300 ms on each path) during the transfers, without much impact on MPTCP. D.
Data-plane overhead From a data-plane forwarding perspective, we recall that LISP performs an IP-in-IP encapsulation with shim UDP and LISP headers. In order to get an idea of the LISP header
Fig. 6. Completion times for different file sizes
Fig. 7. Transfer times for two subflows with and without LISP
overhead, we recorded the transfer times in two cases: a first case with two MPTCP subflows enabled via LISP, and a second case with the same two MPTCP subflows, but manually configured, without LISP, hence avoiding the data-plane encapsulation overhead. The results are shown in Figure 7. At first glance, one can immediately notice that the overhead is negligible most of the time. It is worth noting that neither our connection nor LISPmob allowed us to test higher rates. Nevertheless, we suspect that at higher rates we might see a more significant processing overhead for LISP traffic, since the UDP encapsulation prevents the system from offloading TCP segmentation to the NIC. V. CONCLUSIONS We have shown that the current MPTCP specification can be refined by adding features to its path discovery, and that the resulting Augmented MPTCP (A-MPTCP) can achieve better performance thanks to a better knowledge of the underlying IP topology, gathered via the LISP protocol in a LISP-based Cloud network. Our experiments on a real large-scale test bed involving one data-center network show that the throughput, hence the transfer time, can be greatly improved thanks to the cross-layer A-MPTCP protocol cooperation we propose in this paper. Our proposition consists in allowing an MPTCP end-point to gather information about the IP Routing Locators, using LISP control-plane messages to query the LISP mapping system. We show that with just one additional LISP-enabled subflow, a transport-layer connection spanning the Internet via disjoint AS paths can complete file transfers twice as fast. It is therefore reasonable to generalize these results, stating that, in the absence of middle-boxes filtering the MPTCP signaling, the achievable gain is directly proportional to the number of additional LISP-enabled subflows.
Hence, the higher the multi-homing degree of Cloud clients and data-centers, the higher the achievable gain. It is worth stressing that a data-center does not require a worldwide pervasive LISP deployment to offer this performance to its clients: it just needs to enable LISP (and MPTCP) at users' end-points (e.g., using LISPmob [19]) for mobile users, or at clients' border routers (e.g., using OpenLISP [22] [23]) for enterprise/business customers. It is also worth noting that the proposed A-MPTCP solution readily applies to inter-DC communications spanning an IP WAN or the Internet, provided that at least one DC is multi-homed, with no modification to the signaling process between MPTCP and LISP nodes. As future work, we plan to extend this cross-layer protocol architecture and the related prototype to take into account Local Area Network (LAN) and intra-DC path diversity information. This would allow the solution to cover intra-DC MPTCP communications as well. Indeed, instead of consulting a LISP mapping resolver, one could query a link-state protocol database or mapping directory (such as IS-IS in TRILL or SPB), or ad-hoc DC controllers. ACKNOWLEDGEMENT This work was partially supported by the NU@GE project [20], funded by the French Investissement d'avenir research programme, and by the ANR LISP-Lab project (contract nb: ANR-13-INFR-0009) [21]. The authors would like to thank Nicolas Turpault and Alain Duvivier from Alphalink for their support in the data-center interconnect needed for the test bed. REFERENCES
[1] L. Ong, J. Yoakum, An introduction to the stream control transmission protocol (SCTP), RFC 3286, May
[2] A. Ford, C. Raiciu, M. Handley, S. Barré, J. Iyengar, Architectural guidelines for multipath TCP development, RFC 6182, March
[3] M. Coudron, S. Secci, G. Pujolle, Augmented Multipath TCP Communications, in Proc. of 2013 IEEE Int. Conference on Network Protocols (IEEE ICNP 2013), Poster session, Oct. 7-11, 2013, Göttingen, Germany.
[4] D. Farinacci, V. Fuller, D. Meyer, D. Lewis, The Locator/ID Separation Protocol, RFC 6830, Feb
[5] O. Bonaventure, Multipath TCP, Tutorial, IEEE CloudNet
[6] P. Raad, G. Colombo, D. Phung Chi, S. Secci, A. Cianfrani, P. Gallard, G. Pujolle, Achieving Sub-Second Downtimes in Internet Virtual Machine Live Migrations with LISP, in Proc. of IEEE/IFIP IM 2013, May 2013, Ghent, Belgium.
[7] S.C. Nguyen, T.M.T. Nguyen, G. Pujolle, S. Secci, Strategic Evaluation of Performance-Cost Trade-offs in a Multipath TCP Multihoming Context, in Proc. of 2012 IEEE Int. Conference on Communications (IEEE ICC 2012), June 2012, Ottawa, Canada.
[8] K. Sriganesh, Multipath Transmission Control Protocol Proxy, WIPO Patent No., Apr
[9] C. Raiciu, C. Paasch, S. Barré, A. Ford, M. Honda, F. Duchêne, O. Bonaventure, M. Handley, How Hard Can It Be? Designing and Implementing a Deployable Multipath TCP, in Proc. of USENIX Symposium on Networked Systems Design and Implementation (NSDI 12), San Jose (CA),
[10] D. Farinacci, P. Lahiri, M. Kowal, LISP Traffic Engineering Use-Cases, draft-farinacci-lisp-te-02, Jan
[11] D. Farinacci, D. Meyer, J. Snijders, LISP Canonical Address Format (LCAF), draft-ietf-lisp-lcaf-02, March
[12] G. Detal et al., Revisiting Flow-Based Load Balancing: Stateless Path Selection in Data Center Networks, Computer Networks (in press).
[13] A. Ford, C. Raiciu, M. Handley, O. Bonaventure, TCP Extensions for Multipath Operation with Multiple Addresses, RFC 6824, Jan
[14] R. Perlman, RBridges: Transparent Routing, in Proc. of INFOCOM 2005, Mar
[15] D. Allan et al., Shortest path bridging: efficient control of larger ethernet networks, IEEE Communications Magazine, (2010):
[16] D. Farinacci, D. Meyer, The LISP Internet Groper, RFC
[17] The LISP Internet Groper (LIG) open source version (website):
[18] LISP Beta Network testbed (website):
[19] LISPmob open source LISP node (website):
[20] NU@GE project (website):
[21] ANR LISP-Lab project (website):
[22] D. Saucez, L. Iannone, O. Bonaventure, OpenLISP: An open source implementation of the locator/id separation protocol, in Proc. of ACM SIGCOMM 2009, Demo paper.
[23] D. Phung Chi, S. Secci, G. Pujolle, P. Raad, P. Gallard, An Open Control-Plane Implementation for LISP networks, in Proc. of IEEE IC-NIDC 2012, Beijing, China, Dec.
[24] A. R. Curtis, W. Kim, P. Yalagandula, Mahout: Low-Overhead Datacenter Traffic Management using End-Host-Based Elephant Detection, in Proc. of IEEE INFOCOM
[25] MPTCP & LISPMOB open source patches: