Rethinking the architecture design of data center networks
Front. Comput. Sci. DOI
REVIEW ARTICLE

Rethinking the architecture design of data center networks

Kaishun WU 1,2, Jiang XIAO 2, Lionel M. NI 2

1 National Engineering Research Center of Digital Life, State-Province Joint Laboratory of Digital Home Interactive Applications, School of Physics and Engineering, Sun Yat-sen University, Guangzhou, China
2 Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China

(c) Higher Education Press and Springer-Verlag Berlin Heidelberg 2012

Abstract  In the rising tide of the Internet of things, more and more things in the world are connected to the Internet. Data has recently kept growing at a rate more than four times that predicted by Moore's law. This explosion of data comes from various sources, such as mobile phones, video cameras, and sensor networks, and the data often presents multidimensional characteristics. The huge amount of data poses many challenges to the IT infrastructures that manage, transport, and process it. To address these challenges, state-of-the-art large-scale data centers have begun to provide cloud services, which are increasingly prevalent. However, how to build a good data center remains an open challenge, and the architecture design, which significantly affects overall performance, is of great research interest. This paper surveys advances in data center network design. We first introduce the upcoming trends in the data center industry. We then review some popular design principles for today's data center network architectures. In the third part, we present some up-to-date data center frameworks and make a comprehensive comparison of them. From this comparison we observe that there is no single "optimal" data center: the design should differ according to data placement, replication, processing, and query processing. After that, several existing challenges and limitations are discussed. Based on these observations, we point out some possible future research directions.

Keywords  data center networks, switch-based networks, direct networks, hybrid networks

Received August 23, 2011; accepted January 15,
E-mail: [email protected], [email protected], [email protected]

1 Introduction

Because of the rapid data explosion, many companies are outgrowing their current server space, and more data centers are required. These data-intensive systems may have hundreds of thousands of computers and an overwhelming requirement for aggregate network bandwidth. Unlike traditional hosting facilities, in these systems the computation continues to move into the cloud, and the computing platforms are becoming warehouses full of computers. These new data centers should not be viewed simply as collections of servers, because a great deal of hardware and software must work together to deliver a good Internet service. To support such services, companies such as eBay, Facebook, Microsoft, and Yahoo have invested heavily in data center construction. For example, Google had 10 million servers, and Microsoft also had servers in its data centers in 2009, as shown in Fig. 1 [1]. During the last few years, research on data centers has grown fast. Among these developments, networking is the only component that has not changed dramatically [2]. Though this component does not carry the largest cost, it is considered one of the key levers for reducing cost and improving performance.
Aware of this, researchers have made the architecture design of data centers a hot topic during the last decade. Nowadays, typical network architectures are
[Fig. 1 Microsoft data center in Chicago]

a hierarchy of routers and switches. When the network scales up and the hierarchy becomes deeper, more powerful (and much more expensive) routers and switches are needed. As data centers further develop and expand, the gap between the desired bandwidth and the provisioned bandwidth widens, even though the hardware develops quickly as well. Thus, one of the major challenges in architecture design is how to achieve higher performance while keeping the cost low. This article focuses on this question and presents recent work in the area. In the remainder of this article, we first introduce some basic design principles. Subsequently, we give details of interconnection techniques using commodity switches, such as fat-tree [3], DCell [4], and BCube [5]. We then compare the different data center architectures. Based on what we learned, we highlight some open challenges and limitations of existing work and suggest possible future directions.

2 Design principles/issues

In this section, we present some design criteria and considerations for modern data centers.

Scalability: as the amount of data grows, we need more storage capacity in the data center. One typical way to increase storage is to add more components instead of replacing old ones. As more hardware is integrated into data centers, the scalability of the network is crucial.

Incremental scalability: in practice, instead of adding a huge number of servers at a time, we usually add a small number of storage hosts at a time. We expect such additions to have minimal impact on both the system operator and the system itself [6].

Cabling complexity: in traditional networks (e.g., homes and offices), cabling is simple. In data center environments, however, cabling becomes a critical issue when tens of thousands of nodes are hosted. The massive number of cables introduces many practical problems, including connection effort, maintenance, and cooling.

Bisection bandwidth: bisection bandwidth is the bandwidth between the two halves of a network under the worst-case partition into two equal parts. This metric is widely used in the performance evaluation of data center networks [3-5, 7, 8].

Aggregated throughput: the aggregate throughput measures the sum of the data rates achieved when a network-wide broadcast is conducted. It is also known as the system throughput.

Oversubscription: given a particular communication topology in a data center network, its oversubscription is defined as the ratio of the worst-case achievable aggregate bandwidth among the end hosts to the total bisection bandwidth [3] (see the sketch at the end of this section).

Fault tolerance: hardware failures are common in large-scale data centers, and they make data center networks suffer from poor reliability and low utilization [9]. When hardware failures occur, alternative means are needed to keep the network available.

Energy consumption: the power efficiency of data center networks has become increasingly important. To reduce energy consumption, we can use low-power CPUs and GPUs, install more efficient power supplies, and apply water-cooling mechanisms. Software means, such as virtualization and smart cooling, can also help. Beyond these, the architecture design itself is critical for controlling the energy cost.
Costs: cost greatly affects the design decisions for building large-scale data center networks. We hope to leverage economical off-the-shelf hardware for large-scale network interconnection [2].

Fairness: for applications that farm work out to many workers and finish only when the last worker finishes, fair sharing of the network can greatly improve overall performance in data center networks.

Reliability: high reliability is one of the most essential criteria for designing data center networks. An unreliable network that causes applications and services to fail can waste a great deal of computing resources.

Security: security is also critical for the success of data center network services. Data exchanged between different nodes should be isolated from unintended services to guarantee security.

Latency: the delay incurred in the end systems or in transmission between network nodes is called latency. Low-latency interconnection benefits international data traffic; for example, reducing international transmission latency can reduce colocation costs.

These criteria interact with each other to influence the performance of data center networks, and the interaction includes checks and balances. For example, a data center network will incur high latency when a link fails (because a packet must traverse many extra hops), and a network with high reliability should also be fault tolerant.
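To make the bisection bandwidth and oversubscription metrics concrete, the following minimal Python sketch computes the oversubscription of a simple two-level tree. The topology numbers are hypothetical and serve only to illustrate the ratio from [3].

    # A minimal sizing sketch (hypothetical numbers): oversubscription of a
    # simple two-level tree, following the ratio defined in [3].

    def oversubscription(racks, hosts_per_rack, host_gbps,
                         uplinks_per_rack, uplink_gbps):
        # Bandwidth the hosts could inject if the fabric were non-blocking.
        offered = racks * hosts_per_rack * host_gbps
        # Capacity actually available for traffic crossing the rack boundary.
        core = racks * uplinks_per_rack * uplink_gbps
        return offered / core

    # 16 racks of 40 x 1 Gbps hosts, each ToR switch with two 10 Gbps uplinks:
    print(oversubscription(16, 40, 1, 2, 10))  # 2.0, i.e., 2:1 oversubscribed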
3 Existing DCN architectures

State-of-the-art data center networks can be classified into three main classes according to their network interconnection principles: switch-based networks, direct networks, and hybrid networks. Below we elaborate on and exemplify each of them.

3.1 Switch-based network

A switch-based network, also called an indirect network, typically consists of a multi-level tree of switches (usually two or three levels) that connects the end servers. Switch-based networks are widely adopted in today's terascale data centers and can support communication between tens of thousands of servers. Take a conventional three-level switch-based network as an example. The leaf switches, also known as top-of-rack (ToR) switches, have a set of 1 Gbps Ethernet ports and are responsible for transferring packets within the rack. The layer-2 aggregation switches have 10 Gbps links to interconnect the ToR switches, and when a deeper hierarchy is applied, these aggregation switches are in turn connected by more powerful switches.

In switch-based architectures, the bottleneck is at the top level of the tree. This bandwidth bottleneck is often alleviated by employing more powerful hardware, at the expense of high-end switches. Such solutions may worsen the oversubscription problem and cause scalability issues. To address these issues, the fat-tree architecture [3] has been proposed. Instead of the skinny links used in a traditional tree, a fat tree has fatter links from the leaves towards the root. A typical fat tree can be split into three layers: core, aggregation, and edge (see Fig. 2). Suppose the network is built from k-port switches arranged in k pods (a pod is a small group of interconnected edge and aggregation switches; in the example of Fig. 2, k = 4). Each pod contains k/2 aggregation switches and k/2 edge switches, each edge switch directly connects k/2 servers, and (k/2)^2 core switches support non-blocking operation among the pods. Consequently, the total number of servers supported by a fat tree is k^3/4. Each core switch connects to one aggregation switch in every pod and thus, ultimately, to all the servers. The fat-tree architecture performs as well as a traditional tree built from high-end devices while using only commodity switches, thereby avoiding expensive high-end hardware (the sizing sketch below illustrates these counts).
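As a quick check of the counts above, this small sketch (our own illustration, not code from [3]) derives the component totals of a k-ary fat tree:

    # Sizing a fat tree built from k-port commodity switches (k even).
    def fat_tree_size(k):
        assert k % 2 == 0, "k must be even"
        return {
            "core": (k // 2) ** 2,          # (k/2)^2 core switches
            "aggregation": k * (k // 2),    # k pods x k/2 aggregation switches
            "edge": k * (k // 2),           # k pods x k/2 edge switches
            "servers": k ** 3 // 4,         # k/2 servers per edge switch
        }

    print(fat_tree_size(4))   # Fig. 2 scale: 4 core, 8 + 8 pod switches, 16 servers
    print(fat_tree_size(48))  # commodity 48-port switches: 27,648 servers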
Recently, the proliferation of cloud services has incentivized the construction of enormous data centers. To supply a myriad of distinct services at such scale, server utilization must be improved as well. Towards this end, the agility to assign any server to any service is an essential property of a data center network design: an architecture with higher agility can achieve high utilization and dynamically allocate resources. For instance, Greenberg et al. [9] introduce the virtual layer 2 (VL2) architecture based on the basic fat-tree topology. VL2 presents the attractive agility of making all the servers appear to be connected to one single, large virtual layer-2 Ethernet switch. VL2 deploys valiant load balancing (VLB) over multiple paths to ensure a non-interfering network. To better host online applications running on many servers within a common multi-rooted tree data center, PortLand [10] has been proposed. PortLand adopts a plug-and-play layer-2 design to enhance fault tolerance and scalability. By employing an OpenFlow fabric manager together with the local switches at the edge of the network, PortLand can make the appropriate forwarding decisions (a load-balancing sketch follows).
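To illustrate the VLB idea that VL2 builds on, here is a minimal sketch; the switch names and flow identifier are hypothetical, and real deployments hash on packet headers inside the switches themselves.

    # Valiant load balancing: route each flow via a randomly chosen
    # intermediate switch so load spreads out regardless of traffic pattern.
    # Hashing the flow id keeps all packets of one flow on one path (in order).
    import hashlib

    CORE = ["core-0", "core-1", "core-2", "core-3"]  # hypothetical intermediates

    def vlb_path(src_rack, dst_rack, flow_id):
        digest = hashlib.md5(flow_id.encode()).hexdigest()
        intermediate = CORE[int(digest, 16) % len(CORE)]
        return [src_rack, intermediate, dst_rack]

    print(vlb_path("rack-3", "rack-9", "10.0.3.4:443->10.0.9.7:51012"))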
[Fig. 2 Switch-based architecture]
[Fig. 3 DCell architecture]
[Fig. 4 BCube architecture]
[Fig. 5 Hybrid architecture]

3.2 Direct network

Another option for connecting the servers is a direct network (also termed a router-based network). Direct networks connect servers directly to other servers without any switches, routers, or other network devices; each machine serves both as a server and as a network forwarder. Direct networks are often used to provide better scalability, fault tolerance, and high network capacity. Some practical implementations of direct networks are presented here.

DCell [4] is one of the first direct data center networks. In DCell, servers are connected to several other servers via mini-switches with bidirectional communication links. A high-level DCell is constructed recursively. More specifically, denote a level-k DCell as DCell_k, where k >= 0. Initially, n servers and a mini-switch form a DCell_0, in which all servers are connected to the mini-switch. A DCell_1 is then made of n+1 DCell_0s, and every pair of these DCell_0s is connected by a single link. Therefore, in a DCell_1, each server has two links: one to its mini-switch and one to a server in another DCell_0 (see Fig. 3 for an example). Similarly, we can construct a DCell_2 from multiple DCell_1s, and in general a DCell_k is built from t+1 DCell_{k-1}s, where t is the number of servers in one DCell_{k-1}. High network capacity and good fault tolerance are desirable traits of DCell. Though DCell scales out well, its incremental scalability is poor: once a DCell structure is complete, it is very hard to add a small number of new servers without breaking the original structure. Moreover, imbalanced traffic load also makes DCell perform poorly. To support unevenly distributed traffic loads in data centers, the generalized DCell framework [7] has been proposed, which has a smaller diameter and a more symmetric structure.

As data-intensive services spread all over the world, highly mobile modular data centers (MDCs) are urgently needed. Shipping-container-based MDCs are ideal for reducing hardware administration tasks (e.g., installation, troubleshooting, and maintenance), and they achieve cost effectiveness and environmental robustness through a server-centric approach. The BCube [5] structure, for instance, is specially devised for MDCs consisting of multi-port servers and switches. Like DCell, BCube is constructed recursively: a BCube_0 is simply n servers attached to one n-port switch, and a BCube_k (k >= 1, where k denotes the level) is built from n BCube_{k-1}s plus an additional level of n-port switches, with each server having k+1 ports. It is easy to see that a BCube_k comprises n^{k+1} servers and k+1 levels of switches (the recursive sizing sketch below computes both structures' counts). Figure 4 illustrates the basic procedure for constructing a BCube_k. BCube's wiring strategy guarantees that switches connect only to servers, never to other switches.

Recently, Microsoft Research proposed a project named CamCube [11] that applies a 3D torus topology, which shares a similar idea with BCube. In a 3D torus, each server connects directly to six other servers, bypassing switches and routers entirely. As the communication links between servers are direct, higher bisection bandwidth is expected.
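The recursive constructions above translate directly into server counts; the following sketch (our own illustration) computes them for both structures.

    # Server counts implied by the recursive constructions above.
    def dcell_servers(n, k):
        # DCell_0 holds n servers; DCell_k joins (t + 1) DCell_{k-1}s,
        # where t is the number of servers in one DCell_{k-1}.
        t = n
        for _ in range(k):
            t = t * (t + 1)
        return t

    def bcube_servers(n, k):
        # BCube_k is built from n BCube_{k-1}s, so it has n^(k+1) servers.
        return n ** (k + 1)

    print(dcell_servers(4, 2))  # 420: DCell grows doubly exponentially
    print(bcube_servers(8, 3))  # 4096 servers across 4 levels of switches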
3.3 Hybrid network

A novel approach to interconnecting servers and switches has appeared with the rising tide of optical circuit switches. Compared to packet switching, optical circuit switching is superior in terms of ultra-high bandwidth, low transmission loss, and low power consumption. More importantly, optical switches are becoming commodity off-the-shelf (COTS) components and require shorter reconfiguration times thanks to recent advances in micro-electro-mechanical systems (MEMS). With these improvements, a number of data center networks deploy both optical circuit switching and electrical packet switching to make their connections; we call these hybrid networks, as shown in Fig. 5. For instance, Helios [12] explores a hybrid two-level multi-rooted tree architecture. By simply programming the packet switches and circuit switches, Helios creates an opportunity to provide ultra-high network bandwidth with a reduced number of cables. Another hybrid data center network architecture is c-Through [13], which makes better use of transient high-capacity optical circuits by integrating optical circuit switches with packet-switching servers. Traffic is buffered to accumulate sufficient volume for high-speed transmission over the circuits. A key difference between the two is where this buffering happens: Helios implements its traffic management on the switches, while c-Through uses the hosts to buffer data. Helios is advantageous for keeping traffic control transparent to the end hosts, but it requires modifying every employed switch. In contrast, c-Through buffers data in the hosts, which allows it to amortize the workload over a longer period of time and utilize the optical links more effectively. Helios and c-Through are two typical hybrid schemes that optimize data center networks by taking advantage of both kinds of switches.

4 A sea of architectures: which to choose?

In the previous section we gave insights into the state-of-the-art data center network architectures. These proposals exhibit promising features according to their own measurements and performance evaluations. It is, however, not clear how they perform when compared with one another. We therefore make a comprehensive comparison between them. In this section we construct a typical data center network context and compare the performance of the different proposals using the metrics of Section 2; the alternatives are summarized in Table 1.

The traditional hierarchical tree structure offers ease of wiring but is limited by poor scalability. Tree-based architectures are known to be vulnerable to link failures between switches and routers, so their fault tolerance is poor. Fat-tree solves this problem to some extent by increasing the number of aggregation switches, but the wiring becomes much more complex. Multipath routing is effective in maximizing network capacity, as in the two-level routing tables of fat-tree, the hot-spot routing used by VL2, and the location discovery protocol (LDP) of PortLand. To cope with the tremendous workload volatility in data centers, fat-tree adopts VLB to balance different traffic patterns. In terms of fault tolerance, fat tree provides gracefully degraded performance, greatly outperforming the tree structure: it develops a failure broadcast protocol to handle two classes of link failure, (a) between the lower- and upper-layer switches, and (b) between the upper-layer and core switches (a toy rerouting sketch follows).
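The sketch below is a toy illustration of the failure-broadcast idea, not the actual protocol of [3]: an edge switch simply stops hashing flows onto uplinks that a notification has marked down, which is what produces the graceful degradation.

    # Toy failure-aware multipath forwarding (illustrative only).
    import zlib

    class EdgeSwitch:
        def __init__(self, uplinks):
            self.uplinks = list(uplinks)  # aggregation-layer uplinks
            self.down = set()             # uplinks reported failed

        def on_failure_broadcast(self, uplink):
            self.down.add(uplink)

        def on_recovery_broadcast(self, uplink):
            self.down.discard(uplink)

        def pick_uplink(self, flow_id):
            alive = [u for u in self.uplinks if u not in self.down]
            if not alive:
                raise RuntimeError("all uplinks down")
            # A stable hash keeps each flow on one healthy path.
            return alive[zlib.crc32(flow_id.encode()) % len(alive)]

    sw = EdgeSwitch(["agg-0", "agg-1"])
    sw.on_failure_broadcast("agg-0")
    print(sw.pick_uplink("flow-42"))  # degrades gracefully to agg-1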
Fat tree is also much more cost effective than the tree structure, as it requires no expensive high-end switches or routers. DCell is an alternative proposal that adopts a directly connected, recursively defined topology. In DCell, servers in identical layers are fully connected, which makes it more scalable than fat tree. However, incremental expansion is a strenuous task for DCell due to its significant cabling complexity. In addition, traffic imbalance can be a severe obstacle to choosing DCell as a primary option.

For most commercial companies today, a shipping-container-based modular data center meets the need for high mobility, and BCube is the first representative modular design. It packs sets of servers and switches into a standard 20- or 40-foot shipping container and then connects different containers through external links. Built on ideas from DCell, BCube is designed to support various traffic loads and provide high bisection bandwidth; load balancing is an appealing advantage of BCube over DCell. MDCube [14] scales the BCube structure to a mega level while ensuring high capacity at a reasonable cost. The server-centric MDCube deploys a virtual generalized hypercube at the container level, directly interconnecting multiple BCube blocks with 10 Gbps optical switch links. Each switch functions as a virtual interface and each BCube block is treated as a virtual node; since one node can have multiple interfaces, MDCube can interconnect a huge number of BCube blocks with high network capacity. It also provides load balancing and fault-tolerant routing to further improve performance.

For hybrid structures, electrical switches provide low-latency, immediate configuration, while optical switches excel at ultra-high-speed data transmission, low loss, ultra-high bandwidth, and low power consumption. To combine the best of both worlds, hybrid networks develop traffic demand estimation and traffic demultiplexing mechanisms that dynamically allocate traffic onto the circuit-switched or packet-switched network (a minimal demultiplexing sketch follows).
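Here is a minimal sketch of such demultiplexing, under the assumption that per-rack-pair demand estimates are available; the threshold and names are hypothetical and are not taken from Helios or c-Through.

    # Send the heaviest estimated rack-to-rack aggregates over the few
    # available optical circuits; everything else stays on the packet fabric.
    def split_traffic(demand_mbps, circuits_available, min_circuit_mbps=1000.0):
        ranked = sorted(demand_mbps.items(), key=lambda kv: kv[1], reverse=True)
        optical, packet = {}, {}
        for pair, mbps in ranked:
            if len(optical) < circuits_available and mbps >= min_circuit_mbps:
                optical[pair] = mbps  # worth configuring a circuit for
            else:
                packet[pair] = mbps   # leave on the electrical network
        return optical, packet

    demand = {("r1", "r7"): 8000, ("r2", "r3"): 120, ("r4", "r9"): 2500}
    print(split_traffic(demand, circuits_available=2))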
Table 1  General comparison of state-of-the-art data center architectures

Metric                  | Tree                          | Fat tree                            | DCell                                              | BCube                                              | Hybrid
Scalability             | Poor (scale up)               | Good (scale out)                    | Excellent (scale out)                              | Good (scale out)                                   | Good
Incremental scalability | Good                          | Good                                | Poor                                               | Not necessary                                      | Good
Wiring                  | Easy                          | Easy                                | Very difficult                                     | Difficult                                          | Easy
Multipath routing       | No                            | Switch and router protocol upgrade  | End-host protocol upgrade                          | End-host protocol upgrade                          | Switch and router protocol upgrade
Fault tolerance         | Poor                          | Against switch and router failures  | Against switch, router and end-host port failures  | Against switch, router and end-host port failures  | Against switch, router and end-host port failures
Cost                    | High-end switches and routers | Low-end (cheap but many)            | Customized switches and routers                    | Customized switches and routers                    | Low-end Ethernet and optical switches
Traffic balance         | No                            | Yes                                 | No                                                 | Yes                                                | Yes
Graceful degradation    | Poor                          | Good                                | Excellent                                          | Excellent                                          | Good

In this section, we observe that the existing topologies of data center networks are similar to those of high-performance computing (HPC) systems; the difference lies in the low-layer design methods. For example, latency is a key issue in both data center networks and HPC, but data transfer from memory in HPC is different from data transfer between servers in a data center network. Existing data center architectures are all fixed, so they may not provide adequate support for dynamic traffic patterns and varying traffic loads. It is still an open question which architecture performs best and whether adaptive, dynamic data center networks are feasible.

5 Challenges

Looking across the existing data center network designs, we identify some key limitations and point out open challenges for future research.

First, existing interconnection designs are all symmetric. Symmetric architectures are difficult to extend when we need to add a small number of servers: either the extension is impossible or we lose the original network structure. In other words, these architectures have poor incremental scalability. For example, to expand a data center with a hypercube architecture, the number of servers has to be doubled each time; in practice, most companies cannot afford to add such a large number of servers at once. In BCube, DCell, and fat-tree, when the present configuration is full and only a small number of new servers are added, network performance degrades because of imbalance. Besides these interconnection problems, heterogeneity is also a major issue in network design. When new technologies become accessible in ten years' time, we will face a practical dilemma: either integrate the old and new technologies into a single system, or retire the old ones. Which is the better choice remains an open question. To deal with this problem, is it a good idea to reserve some room for such future upgrades at design time?

Second, we should consider not only the connections within the data center but also the connections to the external world. In a switch-based network such as fat-tree, connecting to the external world is easy. Direct networks, in contrast, focus solely on the internal interconnection and take no account of connections to the external world; clearly, the latter problem is also crucial.
For example, HTTP serves external traffic while MapReduce generates internal traffic. We should not treat them the same or take uniform actions, as different flows may affect network performance differently. We might frame this as a quality of service (QoS) problem and seek an optimal design that better schedules external traffic flows.

Third, as data centers further develop, energy issues become increasingly important. Different applications may present various traffic patterns with unique characteristics. For example, Amazon EC2 is a cloud service that provides infrastructure as a service (IaaS). In EC2, many users and applications run concurrently within a data center, and workloads are driven by user activities that are difficult, if not impossible, to predict. Thus, the traffic pattern constantly changes over time.
In such cases, the network design should not assume any fixed traffic pattern, and it should also attend to the network connections to the outside world (the Internet). As another example, for a data center that runs data-intensive applications such as Hadoop, the network design may be optimized for bisection bandwidth, and how it connects to the outside world matters less. We can also observe that some data is used very frequently at a given time (called hot data). Without careful design, disks and servers may consume a lot of energy transitioning between sleep and wake-up states. Other servers, by contrast, store data for backup purposes only (called cold data); such servers and disks can safely stay asleep to save energy. In practice, optimization can be achieved by appropriately scheduling servers between sleep and wake-up cycles (a minimal sketch appears at the end of this section). With this in mind, data placement is also important for green data centers. Note that there may not be a single optimal design suitable for all applications, so the choice is likely to be application-dependent; it is not yet clear what the implications are for each class of application.

Today, data may come from various sources, such as mobile phones, video cameras, and sensor networks, and it presents multidimensional characteristics. Different users have different requirements for their data, and different workloads produce different traffic patterns. Suppose, then, that we are designing a data center for the traffic data from surveillance cameras or sensors; the optimal architecture is not straightforward. If the trajectory of a taxi is distributed across several servers, how do we place the data so that we can search that taxi's trajectory quickly? Should we replicate the data? When new trajectory data arrives at the data center, how do we migrate the original data? For such data, placement, replication, and migration all become challenges, and all these questions are hard to answer in practical environments. In addition, the application depends on the queries to be used, and each query implies certain communication patterns between nodes.
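As a minimal sketch of the hot/cold scheduling idea above (the access counts and threshold are hypothetical; a real scheduler would also weigh wake-up latency and replica placement):

    # Keep servers holding hot data awake; let cold (backup-only) servers sleep.
    def plan_power_states(access_counts, hot_threshold=100):
        return {server: ("awake" if hits >= hot_threshold else "sleep")
                for server, hits in access_counts.items()}

    counts = {"srv-a": 5400, "srv-b": 12, "backup-1": 0}
    print(plan_power_states(counts))
    # {'srv-a': 'awake', 'srv-b': 'sleep', 'backup-1': 'sleep'}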
6 Conclusion and future directions

This paper discusses architecture design issues in current data center networks. Motivated by better support for data-intensive applications and higher bandwidth, how to optimize the interconnection of data centers has become a fundamental issue. We began by introducing the development landscape of current data center architectures. Then we elaborated on the prevalent frameworks deployed by enterprises and research institutions, compared several well-known architectures, and remarked on their existing limitations. Having reviewed these representative designs, we now list some possible research directions.

Thanks to the maturity of optical techniques, some of the aforementioned hybrid architectures have begun to use both optical and electrical components, e.g., Helios and c-Through [12, 13]. As optical devices become more and more inexpensive, all-optical architectures may become another direction for future data centers. In [6], the authors present an all-optical architecture and compare its cost with fat-tree. Though their work is at an initial stage without extensive experimental evaluation, we believe that all-optical architectures will become a good choice because of their high capacity. However, these purely wired data centers have static characteristics and cannot handle dynamic cases: once such a data center is built, it is hard to change its topology.

There are two possible directions for future data center network design. The first applies to fixed applications: we design the architecture for their specific traffic patterns using the metrics mentioned in Section 2. With the spread of container-based data centers, the architecture is fixed in some cases; there, the data center should enable some dynamic mechanisms to achieve higher performance. Moreover, cabling complexity is a big issue in data centers: cables waste space, are difficult to connect, are hard to maintain, and require adequate cooling. To meet these demands, a hybrid data center that combines wired and wireless networks may be a good choice. Recently, the authors of [15] have proposed leveraging multi-gigabit 60 GHz wireless links in data centers to reduce interference and enhance reliability. In [16], the feasibility of wireless data centers is explored using a 3D beamforming technique, which improves link range and the number of concurrent transmissions. Similarly, by taking advantage of line-of-sight (LOS) paths above top-of-rack servers, steered-beam mmWave links have been applied in wireless data center networks [8]. As multi-gigabit wireless communication is developed and specified by the Wireless Gigabit Alliance (WiGig), wireless data centers (WDC) are likely to arrive in the near future. Instead of using wired connections, wireless technologies will bring many
advantages. For example, it is easy to expand a WDC, and its topology can be changed easily. Maintenance is also much simpler, since wireless nodes can be replaced easily. With wireless connections, we can simply transmit packets from one node to any other node we wish, and building such data centers takes much less time because the servers need not be cabled together. However, compared with wired connections, wireless is less reliable and has lower channel capacity. How to design a good hybrid data center that combines wired and wireless links, and how to demonstrate its performance, will be a future research topic.

Acknowledgements  This research was supported in part by the Pearl River New Star Technology Training Project, Hong Kong RGC Grants (HKUST617710, HKUST617811), the National High Technology Research and Development Program of China (2011AA010500), the NSFC-Guangdong Joint Fund of China (U , U , and U ), and the National Key Technology Research and Development Program of China (2011BAH27B01).

References

1. Yang M, Ni L M. Incremental design of scalable interconnection networks using basic building blocks. IEEE Transactions on Parallel and Distributed Systems, 2000, 11(11)
2. Greenberg A, Hamilton J, Maltz D A, Patel P. The cost of a cloud: research problems in data center networks. ACM SIGCOMM Computer Communication Review, 2009, 39(1)
3. Al-Fares M, Loukissas A, Vahdat A. A scalable, commodity data center network architecture. In: Proceedings of the ACM SIGCOMM 2008 Conference. 2008
4. Guo C, Wu H, Tan K, Shi L, Zhang Y, Lu S. DCell: a scalable and fault-tolerant network structure for data centers. In: Proceedings of the ACM SIGCOMM 2008 Conference. 2008
5. Guo C, Lu G, Li D, Wu H, Zhang X, Shi Y, Tian C, Zhang Y, Lu S. BCube: a high performance, server-centric network architecture for modular data centers. In: Proceedings of the ACM SIGCOMM 2009 Conference. 2009
6. Singla A, Singh A, Ramachandran K, Xu L, Zhang Y. Proteus: a topology malleable data center network. In: Proceedings of the 9th ACM Workshop on Hot Topics in Networks. 2010
7. Kliegl M, Lee J, Li J, Zhang X, Guo C, Rincón D. Generalized DCell structure for load-balanced data center networks. In: Proceedings of the 2010 IEEE INFOCOM Conference on Computer Communications Workshops. 2010
8. Katayama Y, Takano K, Kohda Y, Ohba N, Nakano D. Wireless data center networking with steered-beam mmWave links. In: Proceedings of the 2011 IEEE Wireless Communications and Networking Conference. 2011
9. Greenberg A G, Hamilton J R, Jain N, Kandula S, Kim C, Lahiri P, Maltz D A, Patel P, Sengupta S. VL2: a scalable and flexible data center network. In: Proceedings of the ACM SIGCOMM 2009 Conference. 2009
10. Mysore R N, Pamboris A, Farrington N, Huang N, Miri P, Radhakrishnan S, Subramanya V, Vahdat A. PortLand: a scalable fault-tolerant layer 2 data center network fabric. In: Proceedings of the ACM SIGCOMM 2009 Conference. 2009
11. Costa P, Donnelly A, O'Shea G, Rowstron A. CamCube: a key-based data center.
Technical Report, Microsoft Research
12. Farrington N, Porter G, Radhakrishnan S, Bazzaz H H, Subramanya V, Fainman Y, Papen G, Vahdat A. Helios: a hybrid electrical/optical switch architecture for modular data centers. In: Proceedings of the ACM SIGCOMM 2010 Conference. 2010
13. Wang G, Andersen D G, Kaminsky M, Papagiannaki K, Ng T S E, Kozuch M, Ryan M P. c-Through: part-time optics in data centers. In: Proceedings of the ACM SIGCOMM 2010 Conference. 2010
14. Wu H, Lu G, Li D, Guo C, Zhang Y. MDCube: a high performance network structure for modular data center interconnection. In: Proceedings of the 2009 ACM Conference on Emerging Networking Experiments and Technologies. 2009
15. Halperin D, Kandula S, Padhye J, Bahl P, Wetherall D. Augmenting data center networks with multi-gigabit wireless links. In: Proceedings of the ACM SIGCOMM 2011 Conference. 2011
16. Zhang W, Zhou X, Yang L, Zhang Z, Zhao B Y, Zheng H. 3D beamforming for wireless data centers. In: Proceedings of the 10th ACM Workshop on Hot Topics in Networks. 2011
Kaishun Wu is currently a research assistant professor at the Hong Kong University of Science and Technology (HKUST). He received his PhD degree in Computer Science and Engineering from HKUST and his BEng degree from Sun Yat-sen University. His research interests include wireless communications, mobile computing, wireless sensor networks, and data center networks.

Jiang Xiao is a first-year PhD student at the Hong Kong University of Science and Technology. Her research interests focus on wireless indoor localization systems, wireless sensor networks, and data center networks.

Lionel M. Ni is chair professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology (HKUST). He also serves as special assistant to the president of HKUST, dean of the HKUST Fok Ying Tung Graduate School, and visiting chair professor of the Shanghai Key Lab of Scalable Computing and Systems at Shanghai Jiao Tong University. A fellow of the IEEE, Prof. Ni has chaired over 30 professional conferences and has received six awards for authoring outstanding papers.