[Yawen comments]: The first author Ruoyan Liu is a visiting student from our collaborator A/Prof. Huaxi Gu's research group at Xidian University. He is staying at the University of Otago for 2 months under the supervision of Yawen Chen and Haibo Zhang, working on a collaborative research topic of data center networks.

Analyzing Packet-level Routing in Data Centers

Ruoyan Liu and Huaxi Gu, State Key Laboratory of Integrated Service Networks, Xidian University, China
Yawen Chen and Haibo Zhang, Department of Computer Science, University of Otago, New Zealand

Abstract -- Data centers host diverse applications with stringent QoS requirements. A key issue is eliminating the network congestion that severely degrades application performance. One effective solution is to load balance the traffic over the regular topologies of data centers. Many previous strategies focused on optimized flow routing. Limited by practicality, these solutions can hardly achieve ideal load balance while guaranteeing the QoS of different traffic flows. In this paper, we discuss packet-level routing and analyze its merit for fine-grained load balance in data centers. Though packet-level routing interacts poorly with TCP in traditional network settings, we show that it suits the datacenter environment. Motivated by the work of Dixit et al., we assert that packet-level routing is the right choice for data centers. We also make a minor change to TCP for better compatibility with packet-level routing in data centers. Finally, we demonstrate by simulation that packet-level routing better fulfills datacenter requirements.

Index Terms -- data center, routing, TCP

I. INTRODUCTION

In the last few years, 10 Gigabit Ethernet has proven its capability to provide a unified infrastructure for data centers, comprising the LAN, SAN and clustering network. Low cost, high compatibility and easy management have made Ethernet the first choice for constructing large-scale data centers. It can be expected that the impending 40 and 100 Gigabit Ethernet will further reinforce this status in the future. However, one inherent drawback of traditional Ethernet is that packet delivery is best-effort, which gives rise to packet losses when network traffic overwhelms switch buffers.
This is not tolerated in data centers, since Fibre Channel (typically used for constructing SANs) consolidated over Ethernet requires reliable packet transmission. As a popular solution for guaranteeing lossless delivery, link-level flow control (LL-FC) such as credit-based or on/off flow control is widely employed. However, this leads to an undesirable side effect: network congestion cannot be detected by TCP until the timer of some packet expires. What is worse, LL-FC may rapidly spread local congestion to neighboring switches, which are typically commodity components with shallow buffers. Since the rapid growth of data centers brings booming traffic with diverse and stringent QoS requirements, congestion avoidance is increasingly important to datacenter performance. One big challenge is to design a well-behaved routing strategy that balances traffic evenly across the datacenter network. Many previous proposals focused on optimized flow routing [3][4][5]. However, limited by overhead and scalability, these proposals can only effectively manage a small number of flows, achieving suboptimal load balance while leaving the rest of the flows unguaranteed. In this paper, we turn to packet-level routing in search of better solutions. Per-packet load balance in data centers was first studied by Dixit et al. [10], who analyzed three simple strategies that split traffic evenly among all available paths. These strategies achieve fine-grained load balance and eliminate network congestion. However, packet-level routing has long been abandoned in practice (especially in the Internet), since it induces severe packet reordering, which drastically degrades TCP throughput. Despite this, packet-level routing was shown in [10] to perform unexpectedly well in data centers. However, Dixit et al. did not fully analyze why packet-level routing especially suits the datacenter environment.
Motivated by the work of [10], we continue the study of packet-level routing in data centers. We aim to minimize the adverse impact of packet reordering while maximizing the advantage of fine-grained load balance. Since multi-rooted topologies are becoming widely employed by data centers, packet-level routing can largely utilize their path diversity and eliminate network congestion. Also, packet-level routing is static with low overhead, so it is easy to implement in current switches. The two main contributions of this paper are: 1. We analyze the advantages and practicality of packet-level routing in data centers, and assert that packet-level routing is the right choice for implementing fine-grained load balance. 2. We make a minor change to TCP for better compatibility with packet-level routing in data centers. Finally, our simulations demonstrate that packet-level routing with the modified TCP better fulfills datacenter requirements.

II. BACKGROUND AND RELATED WORK

A. Traffic pattern and network congestion

Data centers have come to dominate computing services such as web search, social networking, data mining and scientific computing. These heterogeneous applications require strictly high QoS, for either low latency or high throughput [6]. For example, soft-real-time applications (discussed in [6] and [7]) like web search and social networking produce query traffic that follows a partition-aggregate pattern. An aggregator partitions queries to sub-aggregators, which in turn partition each query over a number of worker nodes. The responses from the workers have tight deadlines, typically between 10 ms and 100 ms. Responses arriving after the deadline are discarded, violating the service SLA and thus reducing revenue. Besides

query traffic, the applications also create a mix of background flows, which are typically used for updating data structures. These flows are much larger, requiring high throughput and burst tolerance. With this mixture of traffic in data centers, network congestion can be caused by many factors. Incast [8], for example, is a common problem in the partition-aggregate traffic pattern: servers respond almost simultaneously to the aggregator, easily causing a sharp rise of buffer occupancy at the aggregator's port. Incast may happen even when each response is very small, since many responses in a short time may quickly exhaust the shallow buffer of a commodity switch. Another major cause of network congestion is large-flow collision: two or more bandwidth-hungry flows happen to compete for the same output port on a switch. When their bandwidth demands exceed the link capacity, packets soon saturate the switch buffer. When network congestion happens, the performance of both latency-sensitive and throughput-sensitive traffic degrades drastically. In addition, since datacenter topologies are always designed with sufficient bandwidth for the expected peak load, traffic imbalance leaves many network resources underutilized. Therefore, load balance is always an important concern in data centers.

B. Datacenter topology

As intra-datacenter traffic begins to dominate, today's data centers are organized in the form of multi-rooted trees. Fat-tree [1] and BCube [2] are typical multi-rooted topologies with dense interconnect structures. These topologies use identical commodity components to provide high bisection bandwidth with high scalability and fault tolerance. More importantly, multi-rooted topologies are abundant in path diversity, which endows the network with the potential for load balance. For example, in a fat-tree (as shown in Fig.
1) with K pods (each pod a complete bipartite graph of edge and aggregation switches), each edge or aggregation switch has K/2 equal-cost uplinks, which translates to (K/2)^2 equal-cost paths between any pair of servers in different pods.

C. Flow-level routing

Routing in data centers has long been flow-based: packets in each traffic flow follow a single route, ensuring that they arrive in order. Today's data centers typically use hash-based flow routing such as Equal-Cost Multi-Path (ECMP) [9], which forwards each flow according to the TCP 5-tuple in the packet header. This strategy achieves performance equivalent to per-flow random load balancing (RLB), which randomly selects one of the equal-cost paths for each flow. However, neither ECMP nor per-flow RLB achieves ideal performance, for large flows may happen to collide and cause congestion. Recently, several optimized flow scheduling strategies have been proposed. For example, in [3], a central controller with global visibility greedily schedules each large flow onto the currently least-congested path. This strategy yields much higher throughput than ECMP. However, considerable overhead is generated during large-flow detection, link-state collection and path probing. Limited by the current controller with the NOX system [11], this strategy scales poorly.

Fig. 1. Equal-cost paths between communicating servers in a 4-pod fat-tree.

III. PACKET-LEVEL ROUTING

Motivated by the work of [10], we find that packet-level routing can better fulfill datacenter requirements. The following parts analyze the advantages and implementation of packet-level routing in detail.

A. Disadvantage of flow-level routing

The root problem of flow-level routing is that only one of the equal-cost paths is selected for forwarding each flow, inevitably resulting in traffic imbalance due to the diversity of flows.
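To make the flow-pinning problem concrete, the hash-based selection used by ECMP can be sketched as below. This is an illustrative sketch, not a switch implementation: the hash function and the flow tuples are our own assumptions, chosen only to show that every packet of a flow lands on the same path.

```python
import hashlib

def ecmp_next_hop(five_tuple, num_paths):
    """Hash-based path selection: every packet of a flow carries the
    same 5-tuple, so the whole flow is pinned to a single path."""
    key = ":".join(str(field) for field in five_tuple).encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Two hypothetical flows in a 4-pod fat-tree with (K/2)^2 = 4
# equal-cost paths; nothing prevents them from hashing to the
# same path, which is exactly the large-flow collision problem.
flow_a = ("10.0.1.2", "10.2.1.3", 40000, 5001, "tcp")
flow_b = ("10.1.1.2", "10.2.1.4", 40001, 5002, "tcp")
path_a = ecmp_next_hop(flow_a, 4)
path_b = ecmp_next_hop(flow_b, 4)
```

Because the mapping is deterministic per flow, two bandwidth-hungry flows that hash to the same uplink stay collided for their whole lifetime, no matter how idle the other three paths are.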
As far as we know, all current optimized flow routing strategies can schedule only large flows, since it is far from trivial to schedule each of the numerous short, latency-sensitive flows in large-scale data centers. However, even if buffer overflows are avoided by these sophisticated strategies, latency-sensitive flows can still be severely delayed if they are queued behind packets from large flows, or there is little space to buffer them when large flows occupy too much of the switch's shared buffer. These two phenomena, termed queue buildup and buffer pressure [6] respectively, are inherent problems of flow routing and cannot be avoided. Another problem stems from the link-level flow control (LL-FC) that prevents packet losses in data centers. Packet dropping is an effective means for TCP to rapidly detect and react to network congestion, but this function is blocked in data centers, while flow routing can easily cause network congestion that spreads quickly to neighboring nodes and causes disastrous performance degradation.

B. Packet-level routing in the Internet and data centers

Compared with flow-level routing, packet-level routing achieves per-packet load balance, which fully utilizes path diversity. By fully dispersing the packets of each flow over multiple available paths, bandwidth contention between large flows is eliminated. The problems of queue buildup, buffer pressure and LL-FC-induced congestion spreading are also solved, so the QoS of short but latency-sensitive flows can be ensured. Despite these obvious advantages, packet-level routing has long been abandoned in the Internet. This is because the severe packet reordering caused by packet-level routing interacts poorly with TCP (TCP must deliver in-order segments to upper-layer

applications) in the Internet, where diverse subnets have large differences in latency. Therefore, flow routing is the default in the Internet, and TCP is designed on the assumption that the packets of a flow will arrive at the receiver in order under normal circumstances. Previous research has demonstrated the differences between datacenter networks and the Internet. Despite being unwelcome in the Internet, packet-level routing is feasible in the datacenter setting for the reasons below: 1. Data centers consist of homogeneous components. As shown in Fig. 1, all the switches and links are identical in a 3-tier fat-tree. Parallel paths between any server pair contain the same number of switches with the same processing capabilities. Therefore, they are equal-cost paths with no difference from each other. This is quite different from the structure of the Internet, where packet-level routing causes much more severe reordering. 2. Multi-rooted topologies in data centers are high in scalability but low in network diameter. In a fat-tree of arbitrary scale, the path between any server pair is no more than five hops. This means that packet reordering can be kept at a low level. 3. Link-level protocols and standards have evolved in data centers. For example, the IEEE 802.1 working group has defined flow control and congestion control standards for data center bridging, overlapping similar mechanisms of TCP. Since these TCP mechanisms interact poorly with packet-level routing, they can be modified or even cancelled in the datacenter environment. The following parts discuss the details.

C. Two packet-level routing strategies for data centers

In this part, we introduce two simple packet-level routing strategies for data centers: per-packet random load balancing (per-packet RLB) and cycle-based round robin (cycle-based RR). 1. Per-packet random load balancing. This simple strategy is also introduced in [10].
At each layer, per-packet RLB randomly selects one of the equal-cost next hops for forwarding each packet. From the perspective of probability, packets from a single flow are evenly distributed over the equal-cost paths. In a fat-tree, for example, packets are fully dispersed on the upward half of the route. Even though the downward half of the route is deterministic, fine-grained load balance is still achieved. 2. Cycle-based round robin. In per-packet RLB, packets from two or more input ports may happen to contend for the same output port simultaneously. This causes minor delays that may lead to packet reordering. Cycle-based RR is designed to alleviate this problem. Assume that n input ports share k equal-cost output ports in a switch. In each polling cycle of the switch, cycle-based RR changes the mapping between input and output ports in a round-robin fashion. For an arbitrary input port C_i, the output port S_i assigned to C_i in the current cycle is S_i = (S'_i + 1) mod k, where S'_i is the output port assigned in the previous cycle. Initially, the strategy maps input to output ports as S_i^init = i mod k, for each i in [0, n-1]. Obviously, if n is larger than k, this is a many-to-one mapping and packet contention is inevitable when packets are waiting in every input buffer. In some topologies, such as the fat-tree where n equals k, the strategy forms a one-to-one mapping, eliminating packet contention. As a result, each newly arriving packet is transferred immediately.

D. TCP modification

With packet-level routing, packet reordering cannot be completely eliminated. Packets may contend for the same determined output port (typically when destination-based routing is used for packet forwarding), resulting in packet delay.
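Before turning to the TCP side, the two forwarding strategies of the previous subsection can be sketched in a few lines. This is a minimal illustration of the mapping rules S_i = (S'_i + 1) mod k and S_i^init = i mod k, not switch firmware; function names are our own.

```python
import random

def rlb_next_hop(num_uplinks):
    """Per-packet RLB: pick a random equal-cost uplink for every packet."""
    return random.randrange(num_uplinks)

def rr_initial_mapping(n, k):
    """Cycle-based RR initial state: S_i^init = i mod k for i in [0, n-1]."""
    return [i % k for i in range(n)]

def rr_mapping(prev_mapping, k):
    """Advance each input port's output assignment by one per polling
    cycle: S_i = (S'_i + 1) mod k, with S'_i the previous assignment."""
    return [(s + 1) % k for s in prev_mapping]

# In a fat-tree edge switch n equals k, so the mapping stays
# one-to-one in every cycle and contention is eliminated:
m = rr_initial_mapping(4, 4)   # [0, 1, 2, 3]
m = rr_mapping(m, 4)           # [1, 2, 3, 0] -- still a permutation
```

Note that when n > k the list holds repeated output ports, which is exactly the many-to-one case where contention becomes inevitable.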
As traditionally designed, when receiving a packet whose sequence number is not the expected one, the TCP receiver returns a duplicate ACK for the last in-order packet received. TCP regards three consecutive duplicate ACKs acknowledging the same packet as a sign of packet loss: the presumed-lost packet is retransmitted immediately (fast retransmission) and the congestion window is halved, which in turn reduces the sending rate to alleviate network congestion. Though these TCP reactions have little influence on latency-sensitive flows (typically very short), large flows suffer severe throughput degradation. As packet-level routing fully utilizes the high bandwidth in data centers, it is unlikely to create network congestion unless the aggregate traffic demand toward one server surpasses the link or NIC capacity. Such traffic imbalance seldom happens in data centers, since virtual machines can migrate to other physical servers to balance the traffic matrix [12]. Therefore, the traditional congestion control mechanism of TCP mishandles reordered packets in data centers and causes unnecessary throughput degradation. As a simple modification, we propose that TCP tolerate an appropriately larger number of out-of-order packets before triggering fast retransmission. Note that we do not abandon the congestion control mechanism, in consideration of the situation where a packet is unexpectedly delayed for a relatively long time or dropped for some unknown reason. If fast retransmission were not allowed in this case, the sending window would grow very slowly until the ACK of the packet returned or its timer expired; as a consequence, the sending rate would also degrade severely. Therefore, an appropriate number of tolerated out-of-order packets (we call it TCP-DP) is needed in a real implementation. In the following section, we experiment with different TCP-DP values in simulations.
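The proposed change amounts to raising the duplicate-ACK threshold at which the sender presumes loss. A minimal sender-side sketch, assuming TCP-DP = 9 as recommended later in the paper (the function names are ours, not from any TCP stack):

```python
def classic_fast_retransmit(dup_acks):
    """Standard TCP: three duplicate ACKs signal a presumed loss,
    triggering fast retransmission and halving the congestion window."""
    return dup_acks >= 3

def modified_fast_retransmit(dup_acks, tcp_dp=9):
    """Modified TCP for packet-level routing: tolerate up to tcp_dp
    out-of-order packets (duplicate ACKs) before presuming loss.
    Fast retransmission is kept, only deferred, so a genuinely lost
    or long-delayed packet is still recovered without a timeout."""
    return dup_acks >= tcp_dp + 1

# A burst of 5 out-of-order arrivals produces 5 duplicate ACKs:
# the classic sender spuriously retransmits and halves its window,
# while the modified sender simply waits for the packets to arrive.
```

The trade-off discussed in the text is visible here: a larger tcp_dp suppresses spurious retransmissions caused by reordering, but delays recovery when a packet really is lost.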

IV. SIMULATION

A. Simulation model

Our simulations are based on OPNET. We use a 6-pod fat-tree hosting 54 servers as our testbed, and employ the traffic pattern of common soft-real-time applications. In our testbed, we simulate the partition-aggregate workload by designating one fixed server as the aggregator and 18 other servers, scattered across the pods, as the worker nodes. The remaining servers produce background traffic whose targets are uniformly distributed. Our traffic parameters follow the measurements of [6]: each query flow is 2 KB, comprising two packets (we set a fixed packet size of 1 KB), while each background flow is 25 MB. For simplicity, each server generates only one flow at a time and receives at most three background flows simultaneously. We use an identical 10 Gbps interconnect with shallow-buffered switches (a buffer of 160 packets per port). The polling cycle of each switch is initially set to 50 ns. TCP Reno is used as the transport protocol, and credit-based flow control is implemented to guarantee lossless delivery. Also, each server is configured with infinite storage for buffering delayed packets.

B. Performance comparison

We compare flow-level ECMP, per-packet RLB and cycle-based RR in terms of query traffic completion and network goodput. During each simulation, statistics are collected after the system becomes stable. 1) Query traffic completion: We simplify the query partition process by letting the aggregator inform each worker node through remote interruption. A timer is started when queries are sent out. Once informed, each worker node immediately generates a response of two packets. We set a tight deadline of 10 ms at the aggregator; responses arriving after the deadline are discarded. The aggregator repeats sending queries after each deadline, and we count the number of completed responses each time. As shown in Fig.
2 (left), ECMP completes all responses in only 3 out of 15 independent experiments, while per-packet RLB and cycle-based RR complete them every time. This shows that packet-level routing delivers latency-sensitive responses with good, stable performance. In contrast, flow-level routing unexpectedly delays some of the responses, because bursty flows easily cause queue buildup, which severely blocks latency-sensitive flows. 2) Network goodput: For this evaluation, we use goodput instead of throughput, since goodput reflects the actual rate at which servers receive valid data. As shown in Fig. 2 (right), both per-packet RLB and cycle-based RR outperform ECMP even when TCP is not modified. At low TCP-DP, cycle-based RR achieves higher goodput than per-packet RLB, since it reduces the degree of packet contention. As we increase TCP-DP, both packet-level routing strategies achieve higher goodput, because unnecessary retransmissions and throughput degradation are prevented. When the number of tolerated out-of-order packets is larger than 7, the two packet-level routing strategies achieve similar performance, because the difference in packet reordering between the two strategies has already been absorbed by tolerating more out-of-order packets. Enlarging TCP-DP beyond 9 yields little improvement, since most reordered packets fall within a small range and the performance has reached its limit. However, if the network unexpectedly delays or drops a packet, too large a TCP-DP may be very harmful to network performance. For our testbed, we recommend setting TCP-DP to 9.

Fig. 2. Comparison of query traffic completion and network goodput between flow-level and packet-level routing.

V. FUTURE WORK

We have given a simple analysis and evaluation of how a modified TCP improves the performance of packet-level routing in data centers. Beyond this paper, an accurate mathematical analysis is still needed to seek the optimal number of tolerated out-of-order packets, taking diverse topologies, traffic patterns etc.
into consideration. We believe packet-level routing is promising in the datacenter setting, and a dynamic mechanism may further improve network performance. Also, more sophisticated modifications to TCP or other transport protocols are needed for better compatibility with packet-level routing in data centers.

REFERENCES

[1] M. Al-Fares, A. Loukissas, and A. Vahdat, "A Scalable, Commodity Data Center Network Architecture," SIGCOMM.
[2] Chuanxiong Guo, Guohan Lu, Dan Li, et al., "BCube: A High Performance, Server-centric Network Architecture for Modular Data Centers," SIGCOMM.
[3] M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, and A. Vahdat, "Hedera: Dynamic Flow Scheduling for Data Center Networks," NSDI.
[4] A. R. Curtis, W. Kim, and P. Yalagandula, "Mahout: Low-Overhead Datacenter Traffic Management using End-Host-Based Elephant Detection," INFOCOM.
[5] Xin Wu and Xiaowei Yang, "DARD: Distributed Adaptive Routing for Datacenter Networks," ICDCS.
[6] Mohammad Alizadeh, Albert Greenberg, et al., "Data Center TCP (DCTCP)," SIGCOMM.
[7] Christo Wilson, Hitesh Ballani, Thomas Karagiannis, and Ant Rowstron, "Better Never than Late: Meeting Deadlines in Datacenter Networks," SIGCOMM.
[8] V. Vasudevan, Amar Phanishayee, Hiral Shah, et al., "Safe and Effective Fine-Grained TCP Retransmissions for Datacenter Communication," SIGCOMM.
[9] C. Hopps, "Analysis of an Equal-Cost Multi-Path Algorithm," RFC 2992 (Informational), Nov. 2000.
[10] Advait Dixit, Pawan Prakash, and Ramana Rao Kompella, "On the Efficacy of Fine-Grained Traffic Splitting Protocols in Data Center Networks," SIGCOMM.
[11] A. Tavakoli, M. Casado, T. Koponen, and S. Shenker, "Applying NOX to the Datacenter," HotNets-VIII.
[12] Navendu Jain, Ishai Menache, Joseph Naor, and F. Bruce Shepherd, "Topology-Aware VM Migration in Bandwidth Oversubscribed Datacenter Networks," ICALP, 2012.


More information

Empowering Software Defined Network Controller with Packet-Level Information

Empowering Software Defined Network Controller with Packet-Level Information Empowering Software Defined Network Controller with Packet-Level Information Sajad Shirali-Shahreza, Yashar Ganjali Department of Computer Science, University of Toronto, Toronto, Canada Abstract Packet

More information

Applications. Network Application Performance Analysis. Laboratory. Objective. Overview

Applications. Network Application Performance Analysis. Laboratory. Objective. Overview Laboratory 12 Applications Network Application Performance Analysis Objective The objective of this lab is to analyze the performance of an Internet application protocol and its relation to the underlying

More information

Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches

Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches February, 2009 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION

More information

Advanced Computer Networks. Scheduling

Advanced Computer Networks. Scheduling Oriana Riva, Department of Computer Science ETH Zürich Advanced Computer Networks 263-3501-00 Scheduling Patrick Stuedi, Qin Yin and Timothy Roscoe Spring Semester 2015 Outline Last time Load balancing

More information

On the Impact of Packet Spraying in Data Center Networks

On the Impact of Packet Spraying in Data Center Networks On the Impact of Packet Spraying in Data Center Networks Advait Dixit, Pawan Prakash, Y. Charlie Hu, and Ramana Rao Kompella Purdue University Abstract Modern data center networks are commonly organized

More information

DiFS: Distributed Flow Scheduling for Adaptive Routing in Hierarchical Data Center Networks

DiFS: Distributed Flow Scheduling for Adaptive Routing in Hierarchical Data Center Networks : Distributed Flow Scheduling for Adaptive Routing in Hierarchical Data Center Networks ABSTRACT Wenzhi Cui Department of Computer Science The University of Texas at Austin Austin, Texas, 78712 wc8348@cs.utexas.edu

More information

Depth-First Worst-Fit Search based Multipath Routing for Data Center Networks

Depth-First Worst-Fit Search based Multipath Routing for Data Center Networks Depth-First Worst-Fit Search based Multipath Routing for Data Center Networks Tosmate Cheocherngngarn, Hao Jin, Jean Andrian, Deng Pan, and Jason Liu Florida International University Miami, FL Abstract

More information

Voice Over IP. MultiFlow 5048. IP Phone # 3071 Subnet # 10.100.24.0 Subnet Mask 255.255.255.0 IP address 10.100.24.171. Telephone.

Voice Over IP. MultiFlow 5048. IP Phone # 3071 Subnet # 10.100.24.0 Subnet Mask 255.255.255.0 IP address 10.100.24.171. Telephone. Anritsu Network Solutions Voice Over IP Application Note MultiFlow 5048 CALL Manager Serv # 10.100.27 255.255.2 IP address 10.100.27.4 OC-48 Link 255 255 25 IP add Introduction Voice communications over

More information

Per-Flow Queuing Allot's Approach to Bandwidth Management

Per-Flow Queuing Allot's Approach to Bandwidth Management White Paper Per-Flow Queuing Allot's Approach to Bandwidth Management Allot Communications, July 2006. All Rights Reserved. Table of Contents Executive Overview... 3 Understanding TCP/IP... 4 What is Bandwidth

More information

PACE Your Network: Fair and Controllable Multi- Tenant Data Center Networks

PACE Your Network: Fair and Controllable Multi- Tenant Data Center Networks PACE Your Network: Fair and Controllable Multi- Tenant Data Center Networks Tiago Carvalho Carnegie Mellon University and Universidade de Lisboa Hyong S. Kim Carnegie Mellon University Pittsburgh, PA,

More information

Portland: how to use the topology feature of the datacenter network to scale routing and forwarding

Portland: how to use the topology feature of the datacenter network to scale routing and forwarding LECTURE 15: DATACENTER NETWORK: TOPOLOGY AND ROUTING Xiaowei Yang 1 OVERVIEW Portland: how to use the topology feature of the datacenter network to scale routing and forwarding ElasticTree: topology control

More information

Effects of Interrupt Coalescence on Network Measurements

Effects of Interrupt Coalescence on Network Measurements Effects of Interrupt Coalescence on Network Measurements Ravi Prasad, Manish Jain, and Constantinos Dovrolis College of Computing, Georgia Tech., USA ravi,jain,dovrolis@cc.gatech.edu Abstract. Several

More information

Frequently Asked Questions

Frequently Asked Questions Frequently Asked Questions 1. Q: What is the Network Data Tunnel? A: Network Data Tunnel (NDT) is a software-based solution that accelerates data transfer in point-to-point or point-to-multipoint network

More information

Powerful Duo: MapR Big Data Analytics with Cisco ACI Network Switches

Powerful Duo: MapR Big Data Analytics with Cisco ACI Network Switches Powerful Duo: MapR Big Data Analytics with Cisco ACI Network Switches Introduction For companies that want to quickly gain insights into or opportunities from big data - the dramatic volume growth in corporate

More information

ICTCP: Incast Congestion Control for TCP in Data Center Networks

ICTCP: Incast Congestion Control for TCP in Data Center Networks ICTCP: Incast Congestion Control for TCP in Data Center Networks Haitao Wu, Zhenqian Feng, Chuanxiong Guo, Yongguang Zhang {hwu, v-zhfe, chguo, ygz}@microsoft.com, Microsoft Research Asia, China School

More information

TCP and UDP Performance for Internet over Optical Packet-Switched Networks

TCP and UDP Performance for Internet over Optical Packet-Switched Networks TCP and UDP Performance for Internet over Optical Packet-Switched Networks Jingyi He S-H Gary Chan Department of Electrical and Electronic Engineering Department of Computer Science Hong Kong University

More information

IP SAN Best Practices

IP SAN Best Practices IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.

More information

VMware Virtual SAN 6.2 Network Design Guide

VMware Virtual SAN 6.2 Network Design Guide VMware Virtual SAN 6.2 Network Design Guide TECHNICAL WHITE PAPER APRIL 2016 Contents Intended Audience... 2 Overview... 2 Virtual SAN Network... 2 Physical network infrastructure... 3 Data center network...

More information

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems for Service Provider Data Center and IXP Francois Tallet, Cisco Systems 1 : Transparent Interconnection of Lots of Links overview How works designs Conclusion 2 IETF standard for Layer 2 multipathing Driven

More information

Ethernet Fabrics: An Architecture for Cloud Networking

Ethernet Fabrics: An Architecture for Cloud Networking WHITE PAPER www.brocade.com Data Center Ethernet Fabrics: An Architecture for Cloud Networking As data centers evolve to a world where information and applications can move anywhere in the cloud, classic

More information

Multipath and Dynamic Queuing base load balancing in Data Centre Network

Multipath and Dynamic Queuing base load balancing in Data Centre Network Multipath and Dynamic Queuing base load balancing in Data Centre Network Sadhana Gotiya Pursuing M-Tech NIIST, Bhopal,India Nitin Mishra Asst.Professor NIIST, Bhopal,India ABSTRACT Data Centre Networks

More information

Friends, not Foes Synthesizing Existing Transport Strategies for Data Center Networks

Friends, not Foes Synthesizing Existing Transport Strategies for Data Center Networks Friends, not Foes Synthesizing Existing Transport Strategies for Data Center Networks Ali Munir Michigan State University Ghufran Baig, Syed M. Irteza, Ihsan A. Qazi, Alex X. Liu, Fahad R. Dogar Data Center

More information

Improving Flow Completion Time for Short Flows in Datacenter Networks

Improving Flow Completion Time for Short Flows in Datacenter Networks Improving Flow Completion Time for Short Flows in Datacenter Networks Sijo Joy, Amiya Nayak School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada {sjoy028, nayak}@uottawa.ca

More information

Unified Fabric: Cisco's Innovation for Data Center Networks

Unified Fabric: Cisco's Innovation for Data Center Networks . White Paper Unified Fabric: Cisco's Innovation for Data Center Networks What You Will Learn Unified Fabric supports new concepts such as IEEE Data Center Bridging enhancements that improve the robustness

More information

TCP in Wireless Mobile Networks

TCP in Wireless Mobile Networks TCP in Wireless Mobile Networks 1 Outline Introduction to transport layer Introduction to TCP (Internet) congestion control Congestion control in wireless networks 2 Transport Layer v.s. Network Layer

More information

Protagonist International Journal of Management And Technology (PIJMT) Online ISSN- 2394-3742. Vol 2 No 3 (May-2015) Active Queue Management

Protagonist International Journal of Management And Technology (PIJMT) Online ISSN- 2394-3742. Vol 2 No 3 (May-2015) Active Queue Management Protagonist International Journal of Management And Technology (PIJMT) Online ISSN- 2394-3742 Vol 2 No 3 (May-2015) Active Queue Management For Transmission Congestion control Manu Yadav M.Tech Student

More information

Providing Reliable Service in Data-center Networks

Providing Reliable Service in Data-center Networks Providing Reliable Service in Data-center Networks A.Suresh 1, S. Jaya Kumar 2 1 M.Tech (CSE) Student, Department of CSE, SRM University, Ramapuram, Chennai, India suresh_hce2004@yahoo.co.in 2 Assistant

More information

TCP over Multi-hop Wireless Networks * Overview of Transmission Control Protocol / Internet Protocol (TCP/IP) Internet Protocol (IP)

TCP over Multi-hop Wireless Networks * Overview of Transmission Control Protocol / Internet Protocol (TCP/IP) Internet Protocol (IP) TCP over Multi-hop Wireless Networks * Overview of Transmission Control Protocol / Internet Protocol (TCP/IP) *Slides adapted from a talk given by Nitin Vaidya. Wireless Computing and Network Systems Page

More information

How To Provide Qos Based Routing In The Internet

How To Provide Qos Based Routing In The Internet CHAPTER 2 QoS ROUTING AND ITS ROLE IN QOS PARADIGM 22 QoS ROUTING AND ITS ROLE IN QOS PARADIGM 2.1 INTRODUCTION As the main emphasis of the present research work is on achieving QoS in routing, hence this

More information

Optimizing Data Center Networks for Cloud Computing

Optimizing Data Center Networks for Cloud Computing PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,

More information

CSE 473 Introduction to Computer Networks. Exam 2 Solutions. Your name: 10/31/2013

CSE 473 Introduction to Computer Networks. Exam 2 Solutions. Your name: 10/31/2013 CSE 473 Introduction to Computer Networks Jon Turner Exam Solutions Your name: 0/3/03. (0 points). Consider a circular DHT with 7 nodes numbered 0,,...,6, where the nodes cache key-values pairs for 60

More information

Xiaoqiao Meng, Vasileios Pappas, Li Zhang IBM T.J. Watson Research Center Presented by: Payman Khani

Xiaoqiao Meng, Vasileios Pappas, Li Zhang IBM T.J. Watson Research Center Presented by: Payman Khani Improving the Scalability of Data Center Networks with Traffic-aware Virtual Machine Placement Xiaoqiao Meng, Vasileios Pappas, Li Zhang IBM T.J. Watson Research Center Presented by: Payman Khani Overview:

More information

Data Center Network Topologies: VL2 (Virtual Layer 2)

Data Center Network Topologies: VL2 (Virtual Layer 2) Data Center Network Topologies: VL2 (Virtual Layer 2) Hakim Weatherspoon Assistant Professor, Dept of Computer cience C 5413: High Performance ystems and Networking eptember 26, 2014 lides used and adapted

More information

Dahu: Commodity Switches for Direct Connect Data Center Networks

Dahu: Commodity Switches for Direct Connect Data Center Networks Dahu: Commodity Switches for Direct Connect Data Center Networks Sivasankar Radhakrishnan, Malveeka Tewari, Rishi Kapoor, George Porter, Amin Vahdat University of California, San Diego Google Inc. {sivasankar,

More information

Data Center Convergence. Ahmad Zamer, Brocade

Data Center Convergence. Ahmad Zamer, Brocade Ahmad Zamer, Brocade SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations

More information

Resolving Packet Loss in a Computer Centre Applications

Resolving Packet Loss in a Computer Centre Applications International Journal of Computer Applications (975 8887) olume 74 No., July 3 Resolving Packet Loss in a Computer Centre Applications M. Rajalakshmi C.Angel K. M. Brindha Shree ABSTRACT The modern data

More information

How Router Technology Shapes Inter-Cloud Computing Service Architecture for The Future Internet

How Router Technology Shapes Inter-Cloud Computing Service Architecture for The Future Internet How Router Technology Shapes Inter-Cloud Computing Service Architecture for The Future Internet Professor Jiann-Liang Chen Friday, September 23, 2011 Wireless Networks and Evolutional Communications Laboratory

More information

T. S. Eugene Ng Rice University

T. S. Eugene Ng Rice University T. S. Eugene Ng Rice University Guohui Wang, David Andersen, Michael Kaminsky, Konstantina Papagiannaki, Eugene Ng, Michael Kozuch, Michael Ryan, "c-through: Part-time Optics in Data Centers, SIGCOMM'10

More information

Brocade One Data Center Cloud-Optimized Networks

Brocade One Data Center Cloud-Optimized Networks POSITION PAPER Brocade One Data Center Cloud-Optimized Networks Brocade s vision, captured in the Brocade One strategy, is a smooth transition to a world where information and applications reside anywhere

More information

Giving life to today s media distribution services

Giving life to today s media distribution services Giving life to today s media distribution services FIA - Future Internet Assembly Athens, 17 March 2014 Presenter: Nikolaos Efthymiopoulos Network architecture & Management Group Copyright University of

More information

A Passive Method for Estimating End-to-End TCP Packet Loss

A Passive Method for Estimating End-to-End TCP Packet Loss A Passive Method for Estimating End-to-End TCP Packet Loss Peter Benko and Andras Veres Traffic Analysis and Network Performance Laboratory, Ericsson Research, Budapest, Hungary {Peter.Benko, Andras.Veres}@eth.ericsson.se

More information

Three Key Design Considerations of IP Video Surveillance Systems

Three Key Design Considerations of IP Video Surveillance Systems Three Key Design Considerations of IP Video Surveillance Systems 2012 Moxa Inc. All rights reserved. Three Key Design Considerations of IP Video Surveillance Systems Copyright Notice 2012 Moxa Inc. All

More information

Deconstructing Datacenter Packet Transport

Deconstructing Datacenter Packet Transport Deconstructing Datacenter Packet Transport Mohammad Alizadeh, Shuang Yang, Sachin Katti, Nick McKeown, Balaji Prabhakar, and Scott Schenker Stanford University U.C. Berkeley / ICSI {alizade, shyang, skatti,

More information

Improving the Performance of TCP Using Window Adjustment Procedure and Bandwidth Estimation

Improving the Performance of TCP Using Window Adjustment Procedure and Bandwidth Estimation Improving the Performance of TCP Using Window Adjustment Procedure and Bandwidth Estimation R.Navaneethakrishnan Assistant Professor (SG) Bharathiyar College of Engineering and Technology, Karaikal, India.

More information

Investigation and Comparison of MPLS QoS Solution and Differentiated Services QoS Solutions

Investigation and Comparison of MPLS QoS Solution and Differentiated Services QoS Solutions Investigation and Comparison of MPLS QoS Solution and Differentiated Services QoS Solutions Steve Gennaoui, Jianhua Yin, Samuel Swinton, and * Vasil Hnatyshin Department of Computer Science Rowan University

More information

# % # % & () # () + (, ( + + + ( (. /(. 0 + + ( (. /(. 0!12!3 &! 1. 4 ( /+ ) 0

# % # % & () # () + (, ( + + + ( (. /(. 0 + + ( (. /(. 0!12!3 &! 1. 4 ( /+ ) 0 ! # % # % & () # () + (, ( + + + ( (. /(. 0 + + ( (. /(. 0!12!3 &! 1. 4 ( /+ ) 0 5 M21TCP: Overcoming TCP Incast Congestion in Data Centres Akintomide Adesanmi Lotfi Mhamdi School of Electronic & Electrical

More information

ARISTA WHITE PAPER Why Big Data Needs Big Buffer Switches

ARISTA WHITE PAPER Why Big Data Needs Big Buffer Switches ARISTA WHITE PAPER Why Big Data Needs Big Buffer Switches ANDREAS BECHTOLSHEIM, LINCOLN DALE, HUGH HOLBROOK, AND ANG LI ABSTRACT Today s cloud data applications, including Hadoop, Big Data, Search or Storage,

More information

Mixed-Criticality Systems Based on Time- Triggered Ethernet with Multiple Ring Topologies. University of Siegen Mohammed Abuteir, Roman Obermaisser

Mixed-Criticality Systems Based on Time- Triggered Ethernet with Multiple Ring Topologies. University of Siegen Mohammed Abuteir, Roman Obermaisser Mixed-Criticality s Based on Time- Triggered Ethernet with Multiple Ring Topologies University of Siegen Mohammed Abuteir, Roman Obermaisser Mixed-Criticality s Need for mixed-criticality systems due to

More information

Outline. VL2: A Scalable and Flexible Data Center Network. Problem. Introduction 11/26/2012

Outline. VL2: A Scalable and Flexible Data Center Network. Problem. Introduction 11/26/2012 VL2: A Scalable and Flexible Data Center Network 15744: Computer Networks, Fall 2012 Presented by Naveen Chekuri Outline Introduction Solution Approach Design Decisions Addressing and Routing Evaluation

More information

Longer is Better? Exploiting Path Diversity in Data Centre Networks

Longer is Better? Exploiting Path Diversity in Data Centre Networks Longer is Better? Exploiting Path Diversity in Data Centre Networks Fung Po (Posco) Tso, Gregg Hamilton, Rene Weber, Colin S. Perkins and Dimitrios P. Pezaros University of Glasgow Cloud Data Centres Are

More information

Chapter 3. Enterprise Campus Network Design

Chapter 3. Enterprise Campus Network Design Chapter 3 Enterprise Campus Network Design 1 Overview The network foundation hosting these technologies for an emerging enterprise should be efficient, highly available, scalable, and manageable. This

More information

VXLAN: Scaling Data Center Capacity. White Paper

VXLAN: Scaling Data Center Capacity. White Paper VXLAN: Scaling Data Center Capacity White Paper Virtual Extensible LAN (VXLAN) Overview This document provides an overview of how VXLAN works. It also provides criteria to help determine when and where

More information

Broadcom Smart-Buffer Technology in Data Center Switches for Cost-Effective Performance Scaling of Cloud Applications

Broadcom Smart-Buffer Technology in Data Center Switches for Cost-Effective Performance Scaling of Cloud Applications Broadcom Smart-Buffer Technology in Data Center Switches for Cost-Effective Performance Scaling of Cloud Applications Sujal Das Product Marketing Director Network Switching Rochan Sankar Associate Product

More information

DARD: Distributed Adaptive Routing for Datacenter Networks

DARD: Distributed Adaptive Routing for Datacenter Networks : Distributed Adaptive Routing for Datacenter Networks TR-2- Xin Wu Xiaowei Yang Dept. of Computer Science, Duke University {xinwu, xwy}@cs.duke.edu ABSTRACT Datacenter networks typically have many paths

More information

# % # % & &( ) & & + ) ),,, ) & & ## )&&. ),,, ) & & ## )&&. / 012 3 2/1 4 ) (.

# % # % & &( ) & & + ) ),,, ) & & ## )&&. ),,, ) & & ## )&&. / 012 3 2/1 4 ) (. ! # % # % & &( ) & & + ) ),,, ) & & ## )&&. ),,, ) & & ## )&&. / 012 3 2/1 4 ) (. 5 Controlling TCP Incast Congestion in Data Centre Networks Akintomide Adesanmi Lotfi Mhamdi School of Electronic and Electrical

More information

Evaluating the Impact of Data Center Network Architectures on Application Performance in Virtualized Environments

Evaluating the Impact of Data Center Network Architectures on Application Performance in Virtualized Environments Evaluating the Impact of Data Center Network Architectures on Application Performance in Virtualized Environments Yueping Zhang NEC Labs America, Inc. Princeton, NJ 854, USA Email: yueping@nec-labs.com

More information

CROSS LAYER BASED MULTIPATH ROUTING FOR LOAD BALANCING

CROSS LAYER BASED MULTIPATH ROUTING FOR LOAD BALANCING CHAPTER 6 CROSS LAYER BASED MULTIPATH ROUTING FOR LOAD BALANCING 6.1 INTRODUCTION The technical challenges in WMNs are load balancing, optimal routing, fairness, network auto-configuration and mobility

More information

A Review on Quality of Service Architectures for Internet Network Service Provider (INSP)

A Review on Quality of Service Architectures for Internet Network Service Provider (INSP) A Review on Quality of Service Architectures for Internet Network Service Provider (INSP) Herman and Azizah bte Abd. Rahman Faculty of Computer Science and Information System Universiti Teknologi Malaysia

More information

Low-rate TCP-targeted Denial of Service Attack Defense

Low-rate TCP-targeted Denial of Service Attack Defense Low-rate TCP-targeted Denial of Service Attack Defense Johnny Tsao Petros Efstathopoulos University of California, Los Angeles, Computer Science Department Los Angeles, CA E-mail: {johnny5t, pefstath}@cs.ucla.edu

More information

MAPS: Adaptive Path Selection for Multipath Transport Protocols in the Internet

MAPS: Adaptive Path Selection for Multipath Transport Protocols in the Internet MAPS: Adaptive Path Selection for Multipath Transport Protocols in the Internet TR-11-09 Yu Chen Xin Wu Xiaowei Yang Department of Computer Science, Duke University {yuchen, xinwu, xwy}@cs.duke.edu ABSTRACT

More information

LOAD BALANCING MECHANISMS IN DATA CENTER NETWORKS

LOAD BALANCING MECHANISMS IN DATA CENTER NETWORKS LOAD BALANCING Load Balancing Mechanisms in Data Center Networks Load balancing vs. distributed rate limiting: an unifying framework for cloud control Load Balancing for Internet Distributed Services using

More information

Energy Optimizations for Data Center Network: Formulation and its Solution

Energy Optimizations for Data Center Network: Formulation and its Solution Energy Optimizations for Data Center Network: Formulation and its Solution Shuo Fang, Hui Li, Chuan Heng Foh, Yonggang Wen School of Computer Engineering Nanyang Technological University Singapore Khin

More information

VMDC 3.0 Design Overview

VMDC 3.0 Design Overview CHAPTER 2 The Virtual Multiservice Data Center architecture is based on foundation principles of design in modularity, high availability, differentiated service support, secure multi-tenancy, and automated

More information

BUILDING A NEXT-GENERATION DATA CENTER

BUILDING A NEXT-GENERATION DATA CENTER BUILDING A NEXT-GENERATION DATA CENTER Data center networking has changed significantly during the last few years with the introduction of 10 Gigabit Ethernet (10GE), unified fabrics, highspeed non-blocking

More information

Demand-Aware Flow Allocation in Data Center Networks

Demand-Aware Flow Allocation in Data Center Networks Demand-Aware Flow Allocation in Data Center Networks Dmitriy Kuptsov Aalto University/HIIT Espoo, Finland dmitriy.kuptsov@hiit.fi Boris Nechaev Aalto University/HIIT Espoo, Finland boris.nechaev@hiit.fi

More information

PART III. OPS-based wide area networks

PART III. OPS-based wide area networks PART III OPS-based wide area networks Chapter 7 Introduction to the OPS-based wide area network 7.1 State-of-the-art In this thesis, we consider the general switch architecture with full connectivity

More information

TCP ISSUES IN DATA CENTER SYSTEM- SURVEY

TCP ISSUES IN DATA CENTER SYSTEM- SURVEY TCP ISSUES IN DATA CENTER SYSTEM- SURVEY Raj Kumar Yadav Assistant Professor DES s College of Engineering, Dhamangaon Rly(MS), India, rajyadav.engg@gmail.com A B S T R A C T Data center systems are at

More information

Enabling Flow-level Latency Measurements across Routers in Data Centers

Enabling Flow-level Latency Measurements across Routers in Data Centers Enabling Flow-level Latency Measurements across Routers in Data Centers Parmjeet Singh, Myungjin Lee, Sagar Kumar, Ramana Rao Kompella Purdue University Abstract Detecting and localizing latency-related

More information