Load Balancing Mechanisms in Data Center Networks

Santosh Mahapatra, Xin Yuan
Department of Computer Science, Florida State University, Tallahassee, FL 32306

Abstract — We consider network traffic load balancing mechanisms in data center networks with scale-out topologies such as folded Clos or fat-tree networks. We investigate the flow-level Valiant Load Balancing (VLB) technique, which uses randomization to deal with the highly volatile traffic of data center networks. Through extensive simulation experiments, we show that flow-level VLB can be significantly worse than the ideal packet-level VLB for non-uniform traffic in data center networks. We propose two alternative load balancing mechanisms that utilize real-time network state information to achieve load balancing. Our experimental results indicate that these techniques can improve the performance over flow-level VLB in many situations.

I. INTRODUCTION

The emerging cloud computing paradigm has driven the creation of data centers that consist of hundreds of thousands of servers and that are capable of supporting a large number of distinct services. Since the cost of deploying and maintaining a data center is extremely high [5], achieving high utilization is essential. Hence, supporting agility, that is, the ability to assign any service to any server, is critical for a successful deployment of a data center. Many networking issues in data center networks must be reexamined in order to support agility. In particular, the network must have layer-2 semantics to allow any server to be assigned to any service. In addition, the network must provide uniform high capacity between any pair of servers and support high-bandwidth server-to-server traffic. Several proposals that can achieve these design goals have been reported recently [1], [4], [7]. All of these proposals advocate the use of scale-out topologies, such as folded Clos or fat-tree networks [3], to provide high bandwidth between servers.

Although fat-tree topologies have high bisection bandwidth and can theoretically support high bandwidth between any pair of servers, load balancing is essential for a fat-tree topology to achieve high performance. Balancing network traffic in a fat-tree based data center network poses significant challenges since data center traffic matrices are highly divergent and unpredictable [2]. The current solution addresses this problem through randomization [5], [7] using Valiant Load Balancing (VLB) [3], which performs destination-independent random traffic spreading across intermediate switches. VLB can achieve near-optimal performance when (1) the traffic is spread uniformly at the packet level, and (2) the offered traffic patterns do not violate edge constraints. Under these conditions, the packet-level VLB technique is ideal for balancing network traffic and has been shown to have many nice load balancing properties [3]. However, although the fat-tree topology is well suited for VLB [4], there are restrictions in applying VLB in data center networks. In particular, to avoid the out-of-order packet issue in a data center network with TCP/IP communications, VLB can only be applied at the flow level (all packets in one flow use the same path) [4]. Given that data center traffic typically contains many large flows [2], it is unclear whether flow-level VLB can achieve performance similar to packet-level VLB, and whether flow-level VLB is more effective in dealing with traffic volatility than other load balancing techniques that utilize network state information.
These are the research questions that we try to answer. In this work, through extensive simulation experiments, we show that flow-level VLB can be significantly worse than packet-level VLB for non-uniform traffic in data center networks. We propose two alternative load balancing mechanisms that utilize real-time network state information to achieve load balancing. Our experimental results indicate that these techniques can improve network performance over flow-level VLB in many situations.

The rest of the paper is structured as follows. Section II briefly introduces fat-tree based data center networks and the existing flow-level VLB load balancing mechanism. Section III compares the performance of flow-level VLB with that of packet-level VLB for different system configurations and traffic patterns. The results indicate that flow-level VLB can be significantly worse than packet-level VLB for non-uniform traffic. Section IV presents the proposed load balancing mechanisms, which are based on real-time network state information. Section V studies the performance of the proposed schemes. Finally, Section VI concludes the paper.

II. FAT-TREE BASED DATA CENTER NETWORKS

We model fat-tree based data center networks after VL2 [4]. A data center network consists of racks of servers. The servers in each rack are connected to a Top of Rack (ToR) switch. Each ToR switch is connected to several aggregation switches, which further connect to top-tier intermediate switches. Fig. 1 shows an example fat-tree based data center network. The system has four layers: the top layer of intermediate switches, the second layer of aggregation switches, the third layer of Top of Rack switches, and the fourth layer of servers. Switch-to-switch links are typically faster than server-to-switch links.
In the example, server-to-switch links are 1Gbps and switch-to-switch links are 10Gbps. As shown in the figure, each aggregation switch has a bidirectional link connecting to each of the intermediate switches: such a folded Clos or fat-tree topology provides a richly-connected backbone and offers large aggregate capacity among servers. Each ToR switch is connected to two aggregation switches for load balancing and fault-tolerance purposes.

Fig. 1. An example fat-tree based data center network.

The topology of a fat-tree based data center network can be defined by the following parameters: the number of ports in intermediate switches (d_i), the number of ports in aggregation switches (d_a), the number of servers per rack (npr), the number of aggregation switches that each ToR switch connects to (two throughout this paper), the switch-to-switch bandwidth (bw_sw), and the server-to-switch bandwidth (bw_ms). Given these parameters, one can derive the following: the ratio of the switch-to-switch bandwidth to the server-to-switch bandwidth is R = bw_sw / bw_ms; the number of intermediate switches is d_a/2; the number of aggregation switches is d_i; the number of ToR switches is d_i*d_a/4; the number of servers is (d_i*d_a/4)*npr; the numbers of links between intermediate switches and aggregation switches, and between aggregation switches and ToR switches, are both d_a*d_i/2; and the number of links between ToR switches and servers is (d_i*d_a/4)*npr. Hence, in a balanced design, where the links at all levels of the system provide the same aggregate bandwidth, we have (d_i*d_a/4)*npr = (d_a*d_i/2)*R, that is, npr = 2R. Hence, we will call a system with npr = 2R a balanced system. When npr > 2R ((d_i*d_a/4)*npr > (d_a*d_i/2)*R), we will call such a system an over-subscribed system; when npr < 2R ((d_i*d_a/4)*npr < (d_a*d_i/2)*R), we will call such a system an under-subscribed system. When an over-subscribed system is fully loaded, the upper-level links cannot carry all the traffic generated by the servers. While new designs use balanced systems to support high-capacity server-to-server traffic [1], [4], over-subscription is common in practical data center networks for cost-saving purposes [5].

A. Valiant Load Balancing (VLB)

In traditional packet-level VLB, routing consists of two stages. In the first stage, the source node spreads its traffic randomly across intermediate nodes; the intermediate node is randomly selected for each packet and does not depend on the destination. In the second stage, the intermediate node forwards the packet to the destination. Valiant load balancing, which is commonly known as two-stage load balancing, is an oblivious routing scheme that achieves close to optimal performance for arbitrary traffic matrices [6], [8]. The fat-tree topology is well suited for VLB in that VLB can be achieved by indirectly forwarding traffic through a random intermediate switch: packet-level VLB can provide bandwidth guarantees for any traffic matrices that satisfy the edge constraint [8]. As discussed earlier, the main issue is that in a TCP/IP network, VLB can only be applied at the flow level to avoid the out-of-order packet issue that can cause performance problems in TCP. Flow-level VLB can thus have performance problems: when large flows are present, the random placement of flows can potentially lead to persistent congestion on some links while others are under-utilized.
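To make these derivations concrete, the following minimal Python sketch (our illustration, not part of the paper's simulator) computes the derived quantities for a configuration; it hard-codes the paper's assumption that every ToR switch attaches to two aggregation switches.

```python
def derive_topology(d_i, d_a, npr, bw_sw, bw_ms):
    """Derived quantities for a fat-tree with d_i-port intermediate
    switches, d_a-port aggregation switches, npr servers per rack, and
    every ToR switch attached to two aggregation switches."""
    R = bw_sw / bw_ms            # switch-to-switch / server-to-switch bandwidth ratio
    n_int = d_a // 2             # each aggregation switch splits ports up/down
    n_aggr = d_i                 # every intermediate switch reaches every aggregation switch
    n_tor = d_i * d_a // 4       # each ToR switch uses two aggregation uplinks
    n_servers = n_tor * npr
    # Balance: (d_i*d_a/4)*npr*bw_ms vs (d_i*d_a/2)*bw_sw, i.e. npr vs 2R.
    if npr == 2 * R:
        kind = "balanced"
    elif npr > 2 * R:
        kind = "over-subscribed"
    else:
        kind = "under-subscribed"
    return n_int, n_aggr, n_tor, n_servers, kind

# Example: 8-port switches, 4 servers per rack, 2Gbps/1Gbps links.
print(derive_topology(8, 8, 4, 2.0, 1.0))  # (4, 8, 16, 64, 'balanced')
```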
Since a fairly large percentage of flows in data center traffic are large [2], this may be a significant problem.

III. PERFORMANCE OF FLOW-LEVEL VLB

In this section, we report our study of the relative performance of flow-level VLB and packet-level VLB. To study the performance, we developed a data center network simulator that is capable of simulating data center networks of different configurations. The network topologies are specified using the following parameters: the number of ports in intermediate switches (d_i), the number of ports in aggregation switches (d_a), the number of servers per rack (npr), the number of aggregation switches that each ToR switch connects to, the switch-to-switch bandwidth (bw_sw), and the server-to-switch bandwidth (bw_ms). The bandwidth ratio is R = bw_sw / bw_ms. The simulator supports many traffic patterns, including random uniform and random non-uniform traffic. It simulates TCP (including connection management and the TCP flow control and congestion control mechanisms) as the underlying communication mechanism, under the assumption of no transmission errors and infinite buffers in switches. In studying packet-level VLB, we assume that the end-points can buffer all out-of-order packets and that out-of-order arrivals do not cause TCP packet retransmissions.

We use average packet latency as the performance metric to evaluate the schemes: a better load balancing scheme will result in a smaller average packet latency. Each of the (random) experiments is repeated 3 times. We compute the mean packet latency and the 95% confidence interval from the 3 random samples. For all the experiments that we performed, the 95% confidence intervals are a small fraction of the corresponding mean values, which indicates that the mean values are obtained with high confidence. Thus, we report the mean values and use them to derive other metrics such as the performance improvement percentage.
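The operational difference between the two VLB granularities that the simulator compares can be sketched as follows; this is our own illustration (with hypothetical names), not the authors' simulator code. Packet-level VLB draws a fresh random intermediate switch for every packet, while flow-level VLB draws once per flow and pins the rest of the flow to that choice so its packets stay in order.

```python
import random

class VLBSelector:
    """Intermediate-switch selection under the two VLB granularities."""

    def __init__(self, intermediates, seed=0):
        self.intermediates = list(intermediates)
        self.rng = random.Random(seed)
        self.flow_choice = {}  # flow id -> pinned intermediate switch

    def pick_packet_level(self):
        # A fresh, destination-independent random choice per packet.
        return self.rng.choice(self.intermediates)

    def pick_flow_level(self, flow_id):
        # Draw once on the flow's first packet (e.g. the TCP SYN),
        # then reuse the choice for every later packet of the flow.
        if flow_id not in self.flow_choice:
            self.flow_choice[flow_id] = self.rng.choice(self.intermediates)
        return self.flow_choice[flow_id]
```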
In the experiments, we find that flow-level VLB offers performance similar to packet-level VLB in many situations, for example, when the traffic patterns are uniform random traffic, when the system is under-subscribed, and when the message size is not very large. Fig. 2 and Fig. 3 show two representative cases for uniform traffic patterns. In these patterns, each source-destination pair has a uniform probability to communicate (or not to communicate). The message size for each communication in the experiments is 1Mbits. Fig. 2 shows the case for a balanced system (npr = 4, bw_sw = 2Gbps, bw_ms = 1Gbps), while Fig. 3 shows the case for an over-subscribed system (bw_sw = 1Gbps, bw_ms = 1Gbps). As can be seen from the figures, the performance of flow-level VLB and packet-level VLB is very similar in both cases.

Fig. 2. Performance for uniform traffic patterns on a balanced system (npr = 4, bw_sw = 2Gbps, bw_ms = 1Gbps).

Fig. 3. Performance for uniform traffic patterns on an over-subscribed system (bw_sw = 1Gbps, bw_ms = 1Gbps).

While flow-level VLB performs well on uniform traffic, it does not deliver high performance for non-uniform traffic. Fig. 4 shows the improvement percentage of packet-level VLB over flow-level VLB for random clustered traffic on different sized balanced systems. The clustered traffic is generated as follows. First, the servers in the system are randomly partitioned into clusters of a certain size. After that, communications are performed only between servers in the same cluster. This random clustered traffic pattern will be used throughout this paper as a representative non-uniform traffic pattern. In these experiments, the cluster size is 3 and the message size is relatively small. Other parameters are npr = 4, bw_sw = 2Gbps, and bw_ms = 1Gbps. The numbers of ports in intermediate switches and aggregation switches are the same (d_i = d_a) and vary as shown in the figure to allow different sized systems to be investigated; each system supports (d_i*d_a/4)*npr servers, which are partitioned into 3-server clusters. As can be seen in the figure, for the random clustered patterns, packet-level VLB is noticeably better than flow-level VLB, and the performance gap grows as the system size increases.

Fig. 4. Improvement percentage of packet-level VLB over flow-level VLB for clustered traffic on different sized balanced systems (npr = 4, bw_sw = 2Gbps, bw_ms = 1Gbps).

The results in Fig. 4 are for a relatively small message size. Fig. 5 shows the impact of message size. Other parameters used in this experiment are d_i = d_a = 1, npr = 4, bw_sw = 2Gbps, and bw_ms = 1Gbps. As the message size increases, packet-level VLB becomes more effective in comparison to flow-level VLB. Flow-level VLB has performance problems when it routes multiple flows onto one link and causes congestion; the congestion will not be resolved until the messages complete. When the message size is larger, the system remains congested for a longer period of time and the packet delay increases. Such a problem does not occur with packet-level VLB, where each packet is routed randomly. This result indicates that flow-level VLB can have serious problems in practical data center networks, which contain many large flows, each easily carrying more than 1MB of data [2].
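The random clustered pattern used above can be sketched as follows; this is our own reading of the description (assuming every intra-cluster pair is a potential communication), with hypothetical names.

```python
import random

def clustered_traffic(servers, cluster_size, seed=0):
    """Randomly partition the servers into clusters and generate
    source-destination pairs only within each cluster, mirroring the
    random clustered (non-uniform) traffic pattern described above."""
    rng = random.Random(seed)
    servers = list(servers)
    rng.shuffle(servers)
    clusters = [servers[i:i + cluster_size]
                for i in range(0, len(servers), cluster_size)]
    return [(src, dst) for cluster in clusters
            for src in cluster for dst in cluster if src != dst]

# Example: 16 servers in clusters of 4 -> 4 clusters, 12 pairs each.
pairs = clustered_traffic(range(16), 4)
```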
Notice that using average packet delay as the performance metric can somewhat under-estimate the severity of congestion, since packets that are not congested are also counted in the average.

Fig. 5. Impact of message size (d_i = d_a = 1, npr = 4, bw_sw = 2Gbps, bw_ms = 1Gbps).

Fig. 6 shows the impact of the number of servers per rack. Other parameters in the experiments are d_i = d_a = 1, bw_sw = 2Gbps, bw_ms = 1Gbps, and cluster size = 3. With the same R = 2/1 = 2 in the experiments, when npr = 4 the system is balanced; when npr < 4 the system is under-subscribed; when npr > 4 the system is over-subscribed. When the system is under-subscribed, flow-level VLB has the same performance as packet-level VLB. This is because the higher-level links are not congested under either the packet-level or the flow-level VLB scheme. When the system is balanced or over-subscribed, effective load balancing becomes more critical, and packet-level VLB performs better than flow-level VLB. It is interesting to see that flow-level VLB has the worst relative performance when the system is balanced.

Fig. 6. Impact of the number of servers per rack (d_i = d_a = 1, bw_sw = 2Gbps, bw_ms = 1Gbps).

Fig. 7 shows the performance for different bandwidth ratios on balanced systems. In this experiment, npr increases with R to maintain system balance. Other parameters are d_i = d_a = 1, bw_ms = 1Gbps, message size = 1 bits, and cluster size = 3. Flow-level VLB is significantly worse than packet-level VLB when R is small, and the performance gap decreases as R increases. This is because for a small R, a small number of servers can saturate the upper-level links, and the chance of this happening is higher than the chance of many servers saturating an upper-level link when R is large. These results indicate that flow-level VLB can be effective for data center networks with large bandwidth ratios.

Fig. 7. Performance for different bandwidth ratios (R) (d_i = d_a = 1, bw_ms = 1Gbps, msize = 1).

In summary, flow-level VLB works well in many situations, such as uniform random traffic, under-subscribed systems, and relatively small message sizes. It does not provide high performance for non-uniform traffic, especially for large messages, large systems, or small bandwidth ratios. These results motivate us to search for alternative load balancing mechanisms that can achieve high performance in these situations.

IV. PROPOSED LOAD BALANCING SCHEMES

We propose two alternative load balancing schemes to overcome the problems in flow-level VLB. Instead of relying on randomization to deal with traffic volatility, our schemes explicitly take link load information into consideration. Since the hardware and software for data center networks are still evolving, we do not limit our design to current technology constraints, and some of our assumptions may not be supported by current hardware. Our first scheme is called queue-length directed adaptive routing; it achieves load balancing by routing each flow based on the queue lengths of the output ports of switches at the time when the flow starts communicating. The second scheme is called probing based adaptive routing; it probes the network before transmitting a large flow.
A. Queue-length directed adaptive routing

The key objective of any load balancing scheme is to direct traffic away from congested links. The queue-length directed adaptive routing scheme adapts the adaptive routing schemes used in high performance interconnects to data center networks. It works by selecting the output port with the smallest queue length among those that can reach the destination. The first packet of a flow (e.g. the SYN packet of a TCP flow) adaptively constructs the path towards the destination based on the queue lengths of output ports. Once the path is determined, all other packets follow the same path to avoid the out-of-order packet problem. Like flow-level VLB, the load balancing granularity of our scheme is thus at the flow level. This scheme guarantees that, at the time a flow starts, the least loaded links are selected to form its path. The scheme is similar to the traditional adaptive routing schemes used in high performance interconnects, which are proven technology. The main difference between our scheme and the traditional schemes is that in our scheme the granularity of adaptivity is at the flow level, which makes it more suitable for TCP/IP communications.
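A minimal sketch of this selection rule, under our own (hypothetical) representation of switches and queue lengths, might look as follows: at every hop of the first packet, among the output ports that can still reach the destination, take the one with the shortest queue, and pin the resulting port sequence for the rest of the flow.

```python
def pin_path_for_flow(hop_candidates, queue_len):
    """Route a flow's first packet (e.g. the TCP SYN): at each switch
    along the way, among the output ports that can reach the
    destination, pick the one with the smallest queue length. The
    returned port sequence is then reused for every later packet of
    the flow, keeping its packets in order."""
    return [min(ports, key=queue_len) for ports in hop_candidates]

# Example with hypothetical (switch, port) -> queue length data.
qlen = {("s1", 0): 7, ("s1", 1): 2, ("s2", 0): 4, ("s2", 1): 4}
path = pin_path_for_flow([[("s1", 0), ("s1", 1)],
                          [("s2", 0), ("s2", 1)]], qlen.get)
# path == [('s1', 1), ('s2', 0)]
```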
B. Probing based adaptive routing

This scheme assumes that the source node (either the server or the network interface card) can decide the path of a packet and send a probe packet along that path, which can be done with the traditional source routing scheme. In probing based adaptive routing, when the source node needs to send a large flow, it first probes the network by sending a number of probe packets along different paths to the destination. The number of paths to be probed is a parameter of the scheme. The receiving node replies with an acknowledgment packet for the first probe packet received and drops the other probe packets for the flow. The acknowledgment packet carries the path information, and the source then uses the selected path for the communication. Probing the network state before a large flow is routed ensures that large flows are not routed over congested links, and thus achieves load balancing and decreases the chance of network congestion. Note that probing based adaptive routing can be built over the traditional TCP/IP communication mechanism without additional support, as long as the system supports source routing.
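A minimal sketch of the selection step, under the assumption that the probe that arrives first is the one that traversed the least delayed path, might look as follows; the names are ours, and the first-arrival race is stood in for by a delay comparison.

```python
def probe_select(candidate_paths, current_delay):
    """Probing-based path selection for a large flow: a probe is sent
    down each candidate source-routed path, the receiver acknowledges
    only the first probe to arrive, and the flow is pinned to the path
    carried in that acknowledgment. Here the winner of the race is
    modeled as the path with the lowest current one-way delay."""
    return min(candidate_paths, key=current_delay)

# Example with hypothetical per-path delays (e.g. from queuing state).
delays = {("tor1", "aggr1", "int2"): 5.0, ("tor1", "aggr2", "int1"): 3.5}
best = probe_select(list(delays), delays.get)  # ('tor1', 'aggr2', 'int1')
```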
V. PERFORMANCE OF THE PROPOSED SCHEMES

This section reports our performance study of the proposed load balancing schemes. The experimental setting in this section is similar to that in Section III. We first compare the performance of the two proposed schemes. The two schemes have very similar performance in all the experiments that we performed, across different traffic patterns and system configurations. Fig. 8 and Fig. 9 show two representative cases: Fig. 8 has the same system configurations and traffic patterns as Fig. 4, and Fig. 9 has the same system configurations and traffic patterns as Fig. 7. In the probing based scheme, multiple paths are probed to select the path for each flow. As in many other experiments that we carried out, the two proposed schemes have almost the same packet delay. The results are not unexpected, since both schemes use network state information to direct traffic to underloaded links, just with different mechanisms. Fundamentally, both queue-length directed adaptive routing and probing based adaptive routing use the same heuristic, that is, scheduling each flow onto the least congested links, and they therefore achieve very similar performance. We note that queue-length directed adaptive routing requires more system support than probing based adaptive routing. Since the two schemes have similar performance, we will only report the results for the probing based adaptive routing scheme. In the experiments in the rest of this section, probing based adaptive routing probes multiple paths to select the path for each flow.

Fig. 8. Performance of the proposed load balancing schemes for clustered traffic on different sized balanced systems (npr = 4, bw_sw = 2Gbps, bw_ms = 1Gbps).

Fig. 9. Performance of the proposed load balancing schemes for clustered traffic on balanced systems with different bandwidth ratios (d_i = d_a = 1, bw_ms = 1Gbps, msize = 1).

Fig. 10 shows the performance improvement percentage of the probing based adaptive routing scheme over flow-level VLB for clustered traffic on balanced systems. The traffic patterns and system configurations are the same as those for Fig. 4. Across the different network configurations, the probing based adaptive routing scheme consistently improves the packet latency by a similar margin. Fig. 11 shows the performance improvement of probing based adaptive routing over flow-level VLB for clustered traffic on balanced systems with different bandwidth ratios. The traffic patterns and system configurations are the same as those for Fig. 7. As can be seen from the figure, the probing based scheme consistently offers better performance than flow-level VLB across configurations, with larger improvements for smaller values of R.

Fig. 10. Improvement percentage of probing based adaptive routing over flow-level VLB for clustered traffic on different sized balanced systems (npr = 4, bw_sw = 2Gbps, bw_ms = 1Gbps).

Fig. 11. Improvement percentage of probing based adaptive routing over flow-level VLB for clustered traffic on balanced systems with different bandwidth ratios (d_i = d_a = 1, bw_ms = 1Gbps, msize = 1).
Fig. 12 shows the relative performance of flow-level VLB, the probing based scheme, and packet-level VLB for the same configuration. Packet-level VLB significantly out-performs the other two flow-level schemes in all configurations.

Fig. 12. Packet delays for clustered traffic on balanced systems with different bandwidth ratios (d_i = d_a = 1, bw_ms = 1Gbps, msize = 1).

The proposed schemes achieve better performance than flow-level VLB in most situations where flow-level VLB performs noticeably worse than packet-level VLB. This indicates that the proposed schemes can be good complements to flow-level VLB, since they better handle the situations where flow-level VLB is not effective. However, packet-level VLB performs significantly better than the proposed schemes. This is due to the inherent limitation of flow-level load balancing schemes, which require all packets in a flow to follow the same path and thereby limit the load balancing capability of such techniques.

VI. CONCLUSION

We investigate techniques for balancing network traffic in fat-tree based data center networks. We demonstrate that flow-level VLB can be significantly worse than packet-level VLB in balancing loads. We propose two methods, queue-length directed adaptive routing and probing based adaptive routing, that explicitly consider the real-time network condition. The simulation results indicate that both schemes can achieve higher performance than flow-level VLB in the cases where flow-level VLB is not effective. The results also indicate that flow-level load balancing schemes, including our newly proposed schemes, can be significantly worse than packet-level VLB, which raises the question of how effective any flow-level load balancing scheme can be in data center networks with many large flows.

REFERENCES

[1] M. Al-Fares, A. Loukissas, and A. Vahdat, "A Scalable, Commodity Data Center Network Architecture," ACM SIGCOMM, pages 63-74, August 2008.
[2] T. Benson, A. Anand, A. Akella, and M. Zhang, "Understanding Data Center Traffic Characteristics," ACM SIGCOMM Computer Communication Review, 40(1):92-99, Jan. 2010.
[3] W. J. Dally and B. Towles, Principles and Practices of Interconnection Networks, Morgan Kaufmann Publishers, 2004.
[4] A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A. Maltz, P. Patel, and S. Sengupta, "VL2: A Scalable and Flexible Data Center Network," ACM SIGCOMM, pages 51-62, August 2009.
[5] A. Greenberg, J. Hamilton, D. A. Maltz, and P. Patel, "The Cost of a Cloud: Research Problems in Data Center Networks," ACM SIGCOMM Computer Communication Review, 39(1):68-73, Jan. 2009.
[6] M. Kodialam, T. V. Lakshman, and S. Sengupta, "Maximum Throughput Routing of Traffic in the Hose Model," IEEE INFOCOM, 2006.
[7] R. N. Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, "PortLand: A Scalable Fault-Tolerant Layer 2 Data Center Network Fabric," ACM SIGCOMM, pages 39-50, August 2009.
[8] R. Zhang-Shen and N. McKeown, "Designing a Predictable Internet Backbone Network," in Thirteenth International Workshop on Quality of Service (IWQoS), 2005.
