Data Center Networks


1 Data Center Networks (Lecture #3) 1/04/2010 Professor H. T. Kung Harvard School of Engineering and Applied Sciences Copyright 2010 by H. T. Kung

2 Main References Three Approaches VL2: A Scalable and Flexible Data Center Network, SIGCOMM 2009 (Lecture #1 12/21/2009) PortLand: A Scalable Fault-Tolerant Layer 2 Data Center Network Fabric, SIGCOMM 2009 (Lecture #2 12/23/2009) BCube: A High Performance, Server-centric Network Architecture for Modular Data Centers, SIGCOMM 2009 (Lecture #3---Today's Lecture) 2

3 Approach 1: Virtual Layer Two Approach Multi-rooted tree Complete Bipartite Layer 3 Interconnection Use a highly redundant multipath layer-3 network as a virtual layer-2 network 3

4 Approach 2: The PortLand Approach Multi-rooted tree Core Aggregation Edge Hosts Pod 0 Pod 1 Pod 2 Pod 3 Switches discover their position in the topology Pseudo MAC (PMAC) addresses are assigned to all end hosts to encode their position in the topology The hierarchical PMAC addresses enable efficient, provably loop-free forwarding with small switch state 4

5 Approach 3: Server-centric Source-routing Not a multi-rooted tree! This is a peer-to-peer approach in which the peer nodes keep state and do the routing Can use commodity switches Graceful performance degradation under faulty conditions Suited to shipping-container based, modular data centers, where physical access by service personnel can be difficult or not allowed due to regulations 5

6 Review of Last Week's Exam (1/3) (1) These are true-false questions. (a) [2] For VL2, the rack and cluster switches in Ref. #1 can actually be IP routers. (True) (b) [2] For PortLand, the rack and cluster switches in Ref. #1 can actually be IP routers. (False) (c) [2] When PortLand uses TCP to avoid packet loss in a data center, a TCP header will need to be added to each packet. (True) (d) [2] In VL2 and PortLand, when a new host is added to the data center, the network will automatically learn the position of the host so it can be reached by other hosts. (True) (e) [2] Multicast support is useful for GFS. (True) (f) [4] In both VL2 and PortLand, multi-rooted tree topologies are used. Is it true that the multi-rooted tree topologies are useful for all of the following purposes: scaling network bandwidth, fault tolerance, and multicast? (True) 6

7 Review of Last Week's Exam (2/3) (2) [6] We noted in class that putting servers to sleep will save power, but may make local disks unavailable. Give three ideas on how to solve/alleviate this problem. Answer: data replication, robotic arms for disk drive insertion/removal, software cache, and putting storage on a switching/network fabric rather than CPU buses. (3) [15] (The first and second correct answers earn 5 and 10 points, respectively) VL2 and PortLand share similar approaches in several aspects in providing large layer-2 networks for data centers. For example, they both use multi-rooted tree topologies. Please describe two other areas where both methods share similar approaches. Please give succinct answers in bullet form. Hints: Think about addressing. Answer: i. Hierarchical addressing VL2: hierarchical IP addresses PortLand: hierarchical Pseudo MAC (PMAC) addresses ii. Separation of host identifier and host location VL2: AA vs. LA PortLand: AMAC vs. PMAC 7

8 Review of Last Week's Exam (3/3) (4) [10 points] VL2 and PortLand share drawbacks in some similar ways. Describe one such area where both methods may potentially have similar performance problems. Please use no more than a total of 30 words in your answers. Hints: think about possible congestion or update issues. Answer: i. Congestion problem for "elephant flows" ii. Update delay and overhead for location addresses LA and PMAC (5) [15 points] When discussing PortLand in class, we showed a three-layer multi-rooted tree based on k-port switches with k = 6 (slide 12 of Lecture #2). We noted that the total amount of bandwidth connecting the top two layers of switches is less than that connecting the bottom two layers of switches. As pointed out by someone in class, we can fix this problem by adding some additional switches in the top layer. How many additional switches do we need? Show the resulting drawing, like the one on slide 12. To save time in drawing, you should just add nodes and links on top of the existing drawing of slide 12. Answer: Add three additional switches in the top layer. For the three switches in the middle layer of each pod, connect each switch to a separate added switch. 8

9 Container-based Datacenter (1/2) Placing the server racks (thousands of servers) into a standard shipping container and integrating heat exchange and power distribution into the container Air handling is similar to in-rack cooling and typically allows higher power densities than regular raised-floor datacenters Microsoft Data Center Near Chicago (9/30/2009) Source: /09/30/microsoft-unveils-its-container-powered-cloud The container-based facility has achieved extremely high energy efficiency ratings compared with typical datacenters today 9

10 Container-based Datacenter (2/2) A shipping-container based, modular data center (MDC) offers a new way in which data centers are built and deployed. In an MDC, up to a few thousand servers are interconnected via switches to form the network infrastructure, say, a typical two- or three-level tree in the current practice. All the servers and switches are then packed into a standard 20- or 40-foot shipping container No longer tied to a fixed location, organizations can place the MDC anywhere they intend and then relocate it as their requirements change In addition to a high degree of mobility, an MDC has other benefits including shorter deployment time, higher system and power density, and lower cooling and manufacturing cost 10

11 BCube: A Network Architecture for Modular Data Centers BCube is a network architecture specifically designed for shipping-container based, modular data centers At the core of the BCube architecture is its server-centric network structure, where servers with multiple network ports connect to multiple layers of commercial off-the-shelf (COTS) mini-switches. Servers act not only as end hosts, but also as relay nodes for each other. BCube supports various bandwidth-intensive applications BCube exhibits graceful performance degradation as the server and/or switch failure rate increases. This property is of special importance for shipping-container data centers, since once the container is sealed and operational, it becomes very difficult to repair or replace its components 11

12 Goals Support bandwidth-intensive traffic patterns among data center servers: One-to-one One-to-several (e.g., distributed file systems) One-to-all (e.g., application data broadcasting) All-to-all (e.g., MapReduce) Beyond using commodity servers, go one step further by using only low-end COTS mini-switches. This option eliminates expensive high-end switches Unlike a traditional data center, an MDC is difficult or even impossible to service once it is deployed. Therefore, BCube needs to achieve graceful performance degradation in the presence of server and switch failures 12

13 Approach Take the server-centric approach, rather than the switch-oriented practice. It places intelligence on MDC servers and works with commodity switches Provide multiple parallel short paths between any pair of servers BCube not only provides high one-to-one bandwidth, but also greatly improves fault tolerance and load balancing BCube accelerates one-to-x traffic by constructing edge-disjoint complete graphs and multiple edge-disjoint server spanning trees. Moreover, due to its low diameter, BCube provides high network capacity for all-to-all traffic such as MapReduce BCube runs a source routing protocol called BSR (BCube Source Routing). BSR places routing intelligence solely onto servers. By taking advantage of the multi-path property of BCube and by actively probing the network, BSR balances traffic and handles failures without link-state distribution (this is a typical p2p probing method). With BSR, the capacity of BCube decreases gracefully as the server and/or switch failure rate increases BCube uses more wires than the tree structure. But wiring is a solvable issue for containers, which are at most 40 feet long (a strange argument!) 13

14 Requirement 1: Support for Bandwidth-intensive Traffic One-to-one, which is the basic traffic model in which one server moves data to another server. For example, this takes place on server pairs that exchange large amounts of data, such as disk backup. Good one-to-one support also results in good several-to-one and all-to-one support One-to-several, in which one server transfers the same copy of data to several receivers. Current distributed systems such as GFS, HDFS, and CloudStore replicate data chunks of a file several times (typically three) at different chunk servers to improve reliability. When a chunk is written into the file system, it needs to be simultaneously replicated to several servers One-to-all, in which a server transfers the same copy of data to all the other servers in the cluster. One-to-all happens in several cases: to upgrade the system image, to distribute application binaries, or to distribute specific application data All-to-all, in which every server transmits data to all the other servers. The representative example of all-to-all traffic is MapReduce. The reduce phase of MapReduce needs to shuffle data among many servers, thus generating an all-to-all traffic pattern 14

15 Requirement 2: Use of Low-end Commodity Switches Current data centers use commodity PC servers, but high-end switches/routers. We want to use low-end non-programmable COTS switches instead of the high-end ones, based on the observation that the per-port price of the low-end switches is much cheaper than that of the high-end ones The COTS switches, however, can speak only the spanning tree protocol, which cannot fully utilize the links in advanced network structures (why?). The switch boxes are generally not as open as the server computers. Re-programming the switches for new routing and packet forwarding algorithms is much harder, if not impossible, compared with programming the servers. This is a challenge we need to address 15

16 Requirement 3: Graceful Performance Degradation Given that we only assume commodity servers and switches in a shipping-container data center, we should assume a failure model of frequent component failures. Moreover, an MDC is prefabricated in a factory, and it is rather difficult, if not impossible, to service an MDC once it is deployed in the field, due to operational and space constraints (a data center in a shipping container is analogous to a system on a chip built with low-power transistors which may fail) Therefore, it is important that we design our network architecture to be fault tolerant and to degrade gracefully in the presence of continuous component failures 16

17 BCube's Recursively Defined Topology BCube 1 (i.e., k = 1): How many paths are there between server 00 and server 21? (see a later slide) Let n be the expansion factor at each level; that is, the total number of servers increases by a factor of n (4X here) with each additional level. Throughout this class, we assume n = 4, unless stated otherwise BCube_k at level k is constructed by connecting n = 4 copies of BCube_{k-1} at level k-1 using n^k n-port switches Each switch connects n servers, each in a separate BCube_{k-1} Each server in BCube_k has k + 1 ports, each connecting to a switch at a separate level 17

18 Constructing Level 2 from Level 1 For BCube_k, we have: k + 1 levels: level-0 through level-k # servers is n^(k+1) # n-port switches at each level is the same, that is, n^k. Thus the total number of switches is (k + 1)n^k For example, with n = 8 and k = 3, BCube_3 connects 8^4 = 4096 servers in four levels by using 8^3 = 512 8-port switches at each level Note that switches only connect to servers and never directly connect to other switches. We can treat the switches as dummy crossbars that connect several neighboring servers and let servers relay traffic for each other 18
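The counting formulas above can be checked with a short sketch (function names are mine, not from the paper):

```python
def bcube_counts(n, k):
    """Return (num_servers, switches_per_level, total_switches) for a
    BCube_k built from n-port switches, per the slide's formulas."""
    servers = n ** (k + 1)        # each server address is a (k+1)-digit base-n number
    per_level = n ** k            # same number of n-port switches at each level
    total_switches = (k + 1) * per_level
    return servers, per_level, total_switches

# The slide's example: n = 8, k = 3
print(bcube_counts(8, 3))  # (4096, 512, 2048)
# The lecture's running example: n = 4, k = 1
print(bcube_counts(4, 1))  # (16, 4, 8)
```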

19 How to Route from Server 00 to Server 21? Level 1: Fix 2nd Digit Level 0: Fix 1st Digit The blue path fixes the 1st digit first and then the 2nd digit, whereas the red path uses the reverse order Note that the blue and red paths are node-disjoint. This is not an accident! Question: Are there other paths from 00 to 21? There is no magic here: the BCube topology is actually the well-known hypercube topology. Routing over BCube can be understood by examining the intuitive routing we can easily see on a hypercube 19
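The blue and red paths of the slide can be reproduced by correcting digits in the two possible orders (a sketch; the helper name is mine):

```python
def digit_fix_path(src, dst, order):
    """Path from src to dst (equal-length digit strings), correcting one
    digit at a time in the given order of positions (0 = leftmost)."""
    path = [src]
    cur = list(src)
    for pos in order:
        if cur[pos] != dst[pos]:
            cur[pos] = dst[pos]
            path.append("".join(cur))
    return path

blue = digit_fix_path("00", "21", order=[0, 1])  # fix 1st digit, then 2nd
red  = digit_fix_path("00", "21", order=[1, 0])  # the reverse order
print(blue)  # ['00', '20', '21']
print(red)   # ['00', '01', '21']
# The intermediate servers (20 vs. 01) differ, so the paths are node-disjoint.
```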

20 Hypercube (a) Binary 1-cube (2 nodes), built of two binary 0-cubes, labeled 0 and 1 (b) Binary 2-cube (4 nodes), built of two binary 1-cubes, labeled 0 and 1 (c) Binary 3-cube (8 nodes), built of two binary 2-cubes, labeled 0 and 1 (d) Binary 4-cube (16 nodes), built of two binary 3-cubes, labeled 0 and 1 Source: Slides from Introduction to Parallel Processing: Algorithms and Architectures by Behrooz Parhami 20

21 The 64-Node Hypercube Only sample wraparound links are shown to avoid clutter Isomorphic to the 4 x 4 x 4 3D torus (each has 64 x 6/2 = 192 links) Source: Slides from Introduction to Parallel Processing: Algorithms and Architectures by Behrooz Parhami 21

22 Neighbors of a Node in a Hypercube ID of node x: x_{q-1} x_{q-2} ... x_2 x_1 x_0 dimension-0 neighbor N_0(x): x_{q-1} x_{q-2} ... x_2 x_1 x_0' dimension-1 neighbor N_1(x): x_{q-1} x_{q-2} ... x_2 x_1' x_0 ... dimension-(q-1) neighbor N_{q-1}(x): x_{q-1}' x_{q-2} ... x_2 x_1 x_0 These are the q neighbors of node x (a prime marks the complemented bit) Nodes whose labels differ in k bits (at Hamming distance k) are connected by a shortest path of length k The hypercube is both node- and edge-symmetric Strengths: symmetry, logarithmic diameter, and linear bisection width Weakness: poor scalability due to many long interconnection wires Source: Slides from Introduction to Parallel Processing: Algorithms and Architectures by Behrooz Parhami 22
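The neighbor definition above amounts to flipping one bit per dimension, and shortest-path length equals Hamming distance; a small sketch:

```python
def neighbors(x, q):
    """The q neighbors of node x in a binary q-cube: flip one bit per dimension."""
    return [x ^ (1 << i) for i in range(q)]

def hamming(a, b):
    """Hamming distance between two node labels = shortest-path length."""
    return bin(a ^ b).count("1")

# Node 5 = 101 in a 3-cube: flipping dimensions 0, 1, 2 gives 100, 111, 001
print([format(v, "03b") for v in neighbors(0b101, 3)])  # ['100', '111', '001']
print(hamming(0b101, 0b010))  # 3: labels differ in all three bits
```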

23 BCube Uses Switches to Implement Hypercube Links (Figure: a 16-node hypercube shown next to a 16-node BCube, in which groups of hypercube links are implemented by mini-switches Sw0, Sw1, Sw2, Sw3) 23

24 Hypercube Routing Gives BCube Routing (Figure: the same 16-node hypercube and 16-node BCube as on the previous slide) Thus BCubeRouting is the same as the routing algorithm for the hypercube 24

25 Single-path Routing in BCube In BCubeRouting, A = a_k a_{k-1} ... a_0 is the source server and B = b_k b_{k-1} ... b_0 is the destination server. We systematically build a series of intermediate servers by correcting one digit of the previous server. Hence the path length is at most k + 1 Note that each intermediate switch in the path is uniquely determined by its two adjacent servers, and hence the switches are omitted from the path 25
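The digit-correcting procedure can be sketched directly (a minimal version, correcting from the highest digit down; the function name is mine):

```python
def bcube_routing(A, B):
    """Sketch of BCubeRouting: build the server path from source A to
    destination B (digit tuples, a_k first) by correcting one digit of the
    previous server per hop. The intermediate switches are uniquely
    determined by adjacent server pairs and are omitted, as on the slide."""
    path = [A]
    cur = list(A)
    for i in range(len(A)):
        if cur[i] != B[i]:
            cur[i] = B[i]
            path.append(tuple(cur))
    return path

# n = 4, k = 1: at most k + 1 = 2 hops between any two servers
p = bcube_routing((0, 0), (2, 1))
print(p)  # [(0, 0), (2, 0), (2, 1)]
```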

26 Multi-paths for One-to-one Traffic Two parallel paths between a source server and a destination server exist if they are node-disjoint, i.e., the intermediate servers and switches on one path do not appear on the other Theorem. There are k + 1 parallel paths between any two servers in a BCube_k BCube should also support several-to-one and all-to-one traffic patterns well. We can fully utilize the multiple links of the destination server to accelerate these x-to-one traffic patterns 26
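For the easy case where the two servers differ in every digit, the k + 1 parallel paths of the theorem can be obtained by starting the digit correction at a different position for each path and wrapping around (a sketch; the paper's BuildPathSet handles the general case with a small detour when some digits already match):

```python
from itertools import combinations

def rotated_paths(A, B):
    """Sketch: one path per starting digit position, correcting digits in
    rotating order. Assumes A and B (digit tuples) differ in every digit."""
    m = len(A)
    paths = []
    for start in range(m):
        cur, path = list(A), [A]
        for step in range(m):
            i = (start + step) % m
            cur[i] = B[i]
            path.append(tuple(cur))
        paths.append(path)
    return paths

paths = rotated_paths((0, 0), (2, 1))   # k + 1 = 2 paths in a BCube_1
inner = [set(p[1:-1]) for p in paths]   # intermediate servers only
print(paths)
print(all(a.isdisjoint(b) for a, b in combinations(inner, 2)))  # True
```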

27 Speedup for One-to-several Traffic Edge-disjoint complete graphs with k + 2 servers can be efficiently constructed in a BCube k. These complete graphs can speed up data replications in distributed file systems like GFS 27

28 BCube Source Routing (BSR) In BSR, the source server decides which path a packet flow should traverse by probing the network, and encodes the path in the packet header Source routing has the following advantages: The source can control the routing path without coordination from the intermediate servers (this is suited for data center management, why?) Intermediate servers are not involved in routing and just forward packets based on the packet header. This simplifies their functionality By reactively probing the network, we can avoid link-state broadcasting, which suffers from scalability concerns when thousands of servers are in operation When a new flow arrives, the source sends probe packets over multiple parallel paths. The intermediate servers process the probe packets to fill in the needed information, e.g., the minimum available bandwidth of their input/output links. The destination returns a probe response to the source. When the source receives the responses, it uses a metric to select the best path, e.g., the one with maximum available bandwidth 28
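The probe accumulation and path selection described above reduce to a min along each path followed by a max across paths; a sketch with hypothetical bandwidth values (the function names are mine):

```python
def probe_bottleneck(link_bandwidths):
    """What a BSR probe accumulates along one path: each server (and finally
    the destination) lowers the probe's available-bandwidth field to the
    minimum seen, so the response carries the path's bottleneck bandwidth."""
    avail = float("inf")
    for bw in link_bandwidths:
        avail = min(avail, bw)
    return avail

def select_path(paths):
    """Source-side selection: pick the path with maximum available bandwidth."""
    return max(paths, key=probe_bottleneck)

# Two hypothetical parallel paths, per-link available bandwidth in Mb/s
p1, p2 = [900, 400, 800], [700, 650, 720]
print(probe_bottleneck(p1))      # 400
print(select_path([p1, p2]) is p2)  # True: bottleneck 650 > 400
```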

29 The PathSelection Procedure A source uses BuildPathSet to obtain k + 1 parallel paths and then probes these paths. If one path is found to be unavailable, the source uses the Breadth First Search (BFS) algorithm to find another parallel path. For n = 8 and k = 3, the execution time of BFS is less than 1 millisecond An intermediate server updates the available bandwidth field of the probe packet if its available bandwidth is smaller than the existing value A destination server updates the available bandwidth field of the probe packet if the available bandwidth of the incoming link is smaller than the value carried in the probe packet. It then sends the value back to the source in a probe response message 29

30 Path Adaptation During the lifetime of a flow, its path may break due to various failures, and the network condition may change significantly as well. The source periodically (say, every 10 seconds) performs path selection to adapt to network failures and dynamic network conditions When an intermediate server finds that the next hop of a packet is not available, it sends a path failure message back to the source. As long as there are paths available, the source does not probe the network immediately when the message is received. Instead, it switches the flow to one of the available paths obtained from the previous probing. When the probing timer expires, the source performs another round of path selection and tries its best to maintain k + 1 parallel paths When multiple flows between two servers arrive simultaneously, they may select the same path. To make things worse, after their path selection timers expire, they will probe the network and switch to another path simultaneously. This results in path oscillation. We mitigate this symptom by injecting randomness into the timeout value of the path selection timers 30
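The randomized timeout can be sketched as a base period plus jitter (the base of 10 s comes from the slide; the jitter fraction is an assumption of mine):

```python
import random

def next_probe_delay(base=10.0, jitter=0.3, rng=random):
    """Sketch of the randomized path-selection timer: a base period plus
    random jitter, so flows that chose the same path do not all re-probe
    and switch paths at the same instant (avoiding path oscillation)."""
    return base * (1.0 + rng.uniform(-jitter, jitter))

# Delays are spread over [7, 13] seconds instead of firing at exactly 10 s
print(7.0 <= next_probe_delay() <= 13.0)  # True
```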

31 Packaging and Wiring We show how packaging and wiring can be addressed for a container with 2048 servers and 1280 8-port switches (a partial BCube with n = 8 and k = 3). The interior size of a 40-foot container is 12m x 2.35m x 2.38m In the container, we deploy 32 racks in two columns, with each column having 16 racks. Each rack accommodates 44 rack units (or 1.96m high) We use 32 rack units to host 64 servers, as the current practice can pack two servers into one unit, and 10 rack units to host 40 8-port switches. The 8-port switches are small enough that we can easily put 4 into one rack unit. Altogether, we use 42 rack units and have 2 unused units 31

32 Packaging and Wiring (Cont.) As for wiring, Gigabit Ethernet copper wires can be 100 meters long, which is much longer than the perimeter of a 40-foot container, and there is enough space to accommodate these wires. We use the 64 servers within a rack to form a BCube1 and 16 8-port switches within the rack to interconnect them The wires of the BCube1 are inside the rack and do not go out. The inter-rack wires are level-2 and level-3 wires, and we place them on top of the racks We divide the 32 racks into four super-racks. A super-rack forms a BCube2, and there are two super-racks in each column. We evenly distribute the level-2 and level-3 switches into all the racks, so that there are 8 level-2 and 16 level-3 switches within every rack. The level-2 wires are within a super-rack and level-3 wires are between super-racks Our calculation shows that the maximum number of level-2 and level-3 wires along a rack column is 768 (256 and 512 for level-2 and level-3, respectively). The diameter of an Ethernet wire is 0.54cm. The maximum space needed is approximately 176 cm^2 < (20 cm)^2. Since the available height from the top of the rack to the ceiling is 42cm, there is enough space for all the wires 32
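The 176 cm^2 figure follows from the wire count and wire diameter given above:

```python
import math

# 768 level-2/level-3 wires per rack column, each 0.54 cm in diameter
wires = 768
diameter_cm = 0.54
area = wires * math.pi * (diameter_cm / 2) ** 2  # total cross-section area
print(round(area, 1))      # 175.9 cm^2, i.e. under a 20 cm x 20 cm cross-section
print(area < 20 * 20)      # True
```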

33 Graceful Degradation The aggregate bottleneck throughput (ABT) is the throughput of the bottleneck flow times the total number of flows in the all-to-all traffic model. ABT reflects the all-to-all network capacity (Figure: ABT degrades gracefully as the server failure rate (%) and the switch failure rate (%) increase) 33

34 Implementation Architecture The BCube architecture includes a BCube protocol stack. The BCube stack sits between the TCP/IP protocol driver and the Ethernet NDIS driver. The BCube driver is located at layer 2.5: to the TCP/IP driver, it is an NDIS driver; to the real Ethernet driver, it is a protocol driver If we directly used 32-bit addresses, we would need many bytes to store the complete path. For example, we would need 32 bytes when the maximum path length is 8. We leverage the fact that neighboring servers in BCube differ in only one digit in their address arrays to reduce the space needed for an intermediate server from four bytes to only one byte 34
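The one-byte-per-hop idea can be sketched as follows. Since neighboring servers differ in exactly one digit, a hop is fully described by which digit changes (the level) and its new value; the nibble packing here is my illustration, not necessarily the paper's exact wire format:

```python
def encode_hop(level, digit):
    """Pack one hop into one byte: high nibble = level of the changed digit,
    low nibble = its new value (assumes n <= 16 and k < 16)."""
    assert 0 <= level < 16 and 0 <= digit < 16
    return (level << 4) | digit

def decode_hop(byte):
    return byte >> 4, byte & 0x0F

def apply_hop(addr, byte):
    """Advance a server address (digit tuple, level-k digit first) by one hop."""
    level, digit = decode_hop(byte)
    a = list(addr)
    a[len(a) - 1 - level] = digit
    return tuple(a)

# An 8-hop path costs 8 bytes instead of 8 x 4 = 32 bytes of full addresses
print(apply_hop((0, 0), encode_hop(1, 2)))  # (2, 0): the level-1 digit became 2
```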

35 Implementation Architecture (Cont.) Components: BSR Protocol for Routing Neighbor Maintenance Protocol (maintains a neighbor status table) Packet sending/receiving part (interacts with the TCP/IP stack) Packet Forwarding Engine (relays packets for other servers) Header: Between the Ethernet Header and IP Header Contains typical fields Similar to DCell: 1-1 mapping between IP and BCube addresses Different from DCell: every BCube packet stores the complete path and a next hop index (NHI) Using the 1-digit address difference between neighbors, the path is stored efficiently 35

36 Packet Forwarding Engine Neighbor Status Table: Maintained by the Neighbor Maintenance Protocol Consists of neighbor MACs, connecting output ports, and a status flag indicating availability The table is almost static (MACs change when a neighboring NIC is replaced; the status flag changes when the neighbor's status changes) Forwarding (only one table lookup per packet): Get the packet and check the NHA (next hop array) for the status and MAC of the next hop Check the Neighbor Status Table to see whether the next hop is alive Compute the checksum Forward the packet to the identified output port Because of PCI interface limitations (160 Mb/s), a software implementation is used 36

37 Testbed 16 servers 8 8-port Gigabit Ethernet mini-switches BCube1 with 4 BCube0's No disk I/O No Ethernet flow control 37

38 CPU Overhead for Packet Forwarding 38

39 Bandwidth-Intensive Application Support MTU: 9KB Tests: 1-1, 1-M, 1-All, All-All Topology: 39

40 Bandwidth-Intensive Application Support 40

41 Bandwidth-Intensive Application Support 41

42 Performance Comparisons 42

43 Cost, Power, and Wiring Comparison 43

44 Conclusion By installing a small number of network ports at each server and using COTS mini-switches as crossbars, and putting routing intelligence at the server side, BCube forms a server-centric architecture We have shown that BCube significantly accelerates one-to-x traffic patterns and provides high network capacity for all-to-all traffic The BSR routing protocol further enables graceful performance degradation Future work will study how to scale the current server-centric design from the single container to multiple containers 44


Hyper Node Torus: A New Interconnection Network for High Speed Packet Processors 2011 International Symposium on Computer Networks and Distributed Systems (CNDS), February 23-24, 2011 Hyper Node Torus: A New Interconnection Network for High Speed Packet Processors Atefeh Khosravi,

More information

Xiaoqiao Meng, Vasileios Pappas, Li Zhang IBM T.J. Watson Research Center Presented by: Payman Khani

Xiaoqiao Meng, Vasileios Pappas, Li Zhang IBM T.J. Watson Research Center Presented by: Payman Khani Improving the Scalability of Data Center Networks with Traffic-aware Virtual Machine Placement Xiaoqiao Meng, Vasileios Pappas, Li Zhang IBM T.J. Watson Research Center Presented by: Payman Khani Overview:

More information

Distributed Computing over Communication Networks: Topology. (with an excursion to P2P)

Distributed Computing over Communication Networks: Topology. (with an excursion to P2P) Distributed Computing over Communication Networks: Topology (with an excursion to P2P) Some administrative comments... There will be a Skript for this part of the lecture. (Same as slides, except for today...

More information

基 於 SDN 與 可 程 式 化 硬 體 架 構 之 雲 端 網 路 系 統 交 換 器

基 於 SDN 與 可 程 式 化 硬 體 架 構 之 雲 端 網 路 系 統 交 換 器 基 於 SDN 與 可 程 式 化 硬 體 架 構 之 雲 端 網 路 系 統 交 換 器 楊 竹 星 教 授 國 立 成 功 大 學 電 機 工 程 學 系 Outline Introduction OpenFlow NetFPGA OpenFlow Switch on NetFPGA Development Cases Conclusion 2 Introduction With the proposal

More information

COMP 422, Lecture 3: Physical Organization & Communication Costs in Parallel Machines (Sections 2.4 & 2.5 of textbook)

COMP 422, Lecture 3: Physical Organization & Communication Costs in Parallel Machines (Sections 2.4 & 2.5 of textbook) COMP 422, Lecture 3: Physical Organization & Communication Costs in Parallel Machines (Sections 2.4 & 2.5 of textbook) Vivek Sarkar Department of Computer Science Rice University [email protected] COMP

More information

Chapter 3. Enterprise Campus Network Design

Chapter 3. Enterprise Campus Network Design Chapter 3 Enterprise Campus Network Design 1 Overview The network foundation hosting these technologies for an emerging enterprise should be efficient, highly available, scalable, and manageable. This

More information

Radhika Niranjan Mysore, Andreas Pamboris, Nathan Farrington, Nelson Huang, Pardis Miri, Sivasankar Radhakrishnan, Vikram Subramanya and Amin Vahdat

Radhika Niranjan Mysore, Andreas Pamboris, Nathan Farrington, Nelson Huang, Pardis Miri, Sivasankar Radhakrishnan, Vikram Subramanya and Amin Vahdat Radhika Niranjan Mysore, Andreas Pamboris, Nathan Farrington, Nelson Huang, Pardis Miri, Sivasankar Radhakrishnan, Vikram Subramanya and Amin Vahdat 1 PortLand In A Nutshell PortLand is a single logical

More information

Lecture 18: Interconnection Networks. CMU 15-418: Parallel Computer Architecture and Programming (Spring 2012)

Lecture 18: Interconnection Networks. CMU 15-418: Parallel Computer Architecture and Programming (Spring 2012) Lecture 18: Interconnection Networks CMU 15-418: Parallel Computer Architecture and Programming (Spring 2012) Announcements Project deadlines: - Mon, April 2: project proposal: 1-2 page writeup - Fri,

More information

Datagram-based network layer: forwarding; routing. Additional function of VCbased network layer: call setup.

Datagram-based network layer: forwarding; routing. Additional function of VCbased network layer: call setup. CEN 007C Computer Networks Fundamentals Instructor: Prof. A. Helmy Homework : Network Layer Assigned: Nov. 28 th, 2011. Due Date: Dec 8 th, 2011 (to the TA) 1. ( points) What are the 2 most important network-layer

More information

Interconnection Network Design

Interconnection Network Design Interconnection Network Design Vida Vukašinović 1 Introduction Parallel computer networks are interesting topic, but they are also difficult to understand in an overall sense. The topological structure

More information

Topological Properties

Topological Properties Advanced Computer Architecture Topological Properties Routing Distance: Number of links on route Node degree: Number of channels per node Network diameter: Longest minimum routing distance between any

More information

OVERLAYING VIRTUALIZED LAYER 2 NETWORKS OVER LAYER 3 NETWORKS

OVERLAYING VIRTUALIZED LAYER 2 NETWORKS OVER LAYER 3 NETWORKS OVERLAYING VIRTUALIZED LAYER 2 NETWORKS OVER LAYER 3 NETWORKS Matt Eclavea ([email protected]) Senior Solutions Architect, Brocade Communications Inc. Jim Allen ([email protected]) Senior Architect, Limelight

More information

OpenFlow based Load Balancing for Fat-Tree Networks with Multipath Support

OpenFlow based Load Balancing for Fat-Tree Networks with Multipath Support OpenFlow based Load Balancing for Fat-Tree Networks with Multipath Support Yu Li and Deng Pan Florida International University Miami, FL Abstract Data center networks are designed for satisfying the data

More information

CS 457 Lecture 19 Global Internet - BGP. Fall 2011

CS 457 Lecture 19 Global Internet - BGP. Fall 2011 CS 457 Lecture 19 Global Internet - BGP Fall 2011 Decision Process Calculate degree of preference for each route in Adj-RIB-In as follows (apply following steps until one route is left): select route with

More information

SecondNet: A Data Center Network Virtualization Architecture with Bandwidth Guarantees

SecondNet: A Data Center Network Virtualization Architecture with Bandwidth Guarantees SecondNet: A Data Center Network Virtualization Architecture with Bandwidth Guarantees Chuanxiong Guo, Guohan Lu, Helen J. Wang, Shuang Yang, Chao Kong, Peng Sun, Wenfei Wu, Yongguang Zhang MSR Asia, MSR

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

PortLand:! A Scalable Fault-Tolerant Layer 2 Data Center Network Fabric

PortLand:! A Scalable Fault-Tolerant Layer 2 Data Center Network Fabric PortLand:! A Scalable Fault-Tolerant Layer 2 Data Center Network Fabric Radhika Niranjan Mysore, Andreas Pamboris, Nathan Farrington, Nelson Huang, Pardis Miri, Sivasankar Radhakrishnan, Vikram Subramanya,

More information

Exam 1 Review Questions

Exam 1 Review Questions CSE 473 Introduction to Computer Networks Exam 1 Review Questions Jon Turner 10/2013 1. A user in St. Louis, connected to the internet via a 20 Mb/s (b=bits) connection retrieves a 250 KB (B=bytes) web

More information

SiteCelerate white paper

SiteCelerate white paper SiteCelerate white paper Arahe Solutions SITECELERATE OVERVIEW As enterprises increases their investment in Web applications, Portal and websites and as usage of these applications increase, performance

More information

TRILL for Data Center Networks

TRILL for Data Center Networks 24.05.13 TRILL for Data Center Networks www.huawei.com enterprise.huawei.com Davis Wu Deputy Director of Switzerland Enterprise Group E-mail: [email protected] Tel: 0041-798658759 Agenda 1 TRILL Overview

More information

Network Virtualization for Large-Scale Data Centers

Network Virtualization for Large-Scale Data Centers Network Virtualization for Large-Scale Data Centers Tatsuhiro Ando Osamu Shimokuni Katsuhito Asano The growing use of cloud technology by large enterprises to support their business continuity planning

More information

TRILL Large Layer 2 Network Solution

TRILL Large Layer 2 Network Solution TRILL Large Layer 2 Network Solution Contents 1 Network Architecture Requirements of Data Centers in the Cloud Computing Era... 3 2 TRILL Characteristics... 5 3 Huawei TRILL-based Large Layer 2 Network

More information

Answers to Sample Questions on Network Layer

Answers to Sample Questions on Network Layer Answers to Sample Questions on Network Layer ) IP Packets on a certain network can carry a maximum of only 500 bytes in the data portion. An application using TCP/IP on a node on this network generates

More information

Components: Interconnect Page 1 of 18

Components: Interconnect Page 1 of 18 Components: Interconnect Page 1 of 18 PE to PE interconnect: The most expensive supercomputer component Possible implementations: FULL INTERCONNECTION: The ideal Usually not attainable Each PE has a direct

More information

TCP Offload Engines. As network interconnect speeds advance to Gigabit. Introduction to

TCP Offload Engines. As network interconnect speeds advance to Gigabit. Introduction to Introduction to TCP Offload Engines By implementing a TCP Offload Engine (TOE) in high-speed computing environments, administrators can help relieve network bottlenecks and improve application performance.

More information

Intel Advanced Network Services Software Increases Network Reliability, Resilience and Bandwidth

Intel Advanced Network Services Software Increases Network Reliability, Resilience and Bandwidth White Paper Network Connectivity Intel Advanced Network Services Software Increases Network Reliability, Resilience and Bandwidth Adapter teaming is a long-proven method for increasing network reliability,

More information

Architecture of distributed network processors: specifics of application in information security systems

Architecture of distributed network processors: specifics of application in information security systems Architecture of distributed network processors: specifics of application in information security systems V.Zaborovsky, Politechnical University, Sait-Petersburg, Russia [email protected] 1. Introduction Modern

More information

Data Center Network Topologies: VL2 (Virtual Layer 2)

Data Center Network Topologies: VL2 (Virtual Layer 2) Data Center Network Topologies: VL2 (Virtual Layer 2) Hakim Weatherspoon Assistant Professor, Dept of Computer cience C 5413: High Performance ystems and Networking eptember 26, 2014 lides used and adapted

More information

Auspex Support for Cisco Fast EtherChannel TM

Auspex Support for Cisco Fast EtherChannel TM Auspex Support for Cisco Fast EtherChannel TM Technical Report 21 Version 1.0 March 1998 Document 300-TC049, V1.0, 980310 Auspex Systems, Inc. 2300 Central Expressway Santa Clara, California 95050-2516

More information

Computer Networks COSC 6377

Computer Networks COSC 6377 Computer Networks COSC 6377 Lecture 25 Fall 2011 November 30, 2011 1 Announcements Grades will be sent to each student for verificagon P2 deadline extended 2 Large- scale computagon Search Engine Tasks

More information

SummitStack in the Data Center

SummitStack in the Data Center SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution Extreme Networks offers a highly virtualized, centrally manageable

More information

Introduction to IP v6

Introduction to IP v6 IP v 1-3: defined and replaced Introduction to IP v6 IP v4 - current version; 20 years old IP v5 - streams protocol IP v6 - replacement for IP v4 During developments it was called IPng - Next Generation

More information

Internet Firewall CSIS 4222. Packet Filtering. Internet Firewall. Examples. Spring 2011 CSIS 4222. net15 1. Routers can implement packet filtering

Internet Firewall CSIS 4222. Packet Filtering. Internet Firewall. Examples. Spring 2011 CSIS 4222. net15 1. Routers can implement packet filtering Internet Firewall CSIS 4222 A combination of hardware and software that isolates an organization s internal network from the Internet at large Ch 27: Internet Routing Ch 30: Packet filtering & firewalls

More information

White Paper Abstract Disclaimer

White Paper Abstract Disclaimer White Paper Synopsis of the Data Streaming Logical Specification (Phase I) Based on: RapidIO Specification Part X: Data Streaming Logical Specification Rev. 1.2, 08/2004 Abstract The Data Streaming specification

More information

OpenFlow Based Load Balancing

OpenFlow Based Load Balancing OpenFlow Based Load Balancing Hardeep Uppal and Dane Brandon University of Washington CSE561: Networking Project Report Abstract: In today s high-traffic internet, it is often desirable to have multiple

More information

CS514: Intermediate Course in Computer Systems

CS514: Intermediate Course in Computer Systems : Intermediate Course in Computer Systems Lecture 7: Sept. 19, 2003 Load Balancing Options Sources Lots of graphics and product description courtesy F5 website (www.f5.com) I believe F5 is market leader

More information

Lecture 7: Data Center Networks"

Lecture 7: Data Center Networks Lecture 7: Data Center Networks" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Nick Feamster Lecture 7 Overview" Project discussion Data Centers overview Fat Tree paper discussion CSE

More information

CCNA R&S: Introduction to Networks. Chapter 5: Ethernet

CCNA R&S: Introduction to Networks. Chapter 5: Ethernet CCNA R&S: Introduction to Networks Chapter 5: Ethernet 5.0.1.1 Introduction The OSI physical layer provides the means to transport the bits that make up a data link layer frame across the network media.

More information

Layer 3 Network + Dedicated Internet Connectivity

Layer 3 Network + Dedicated Internet Connectivity Layer 3 Network + Dedicated Internet Connectivity Client: One of the IT Departments in a Northern State Customer's requirement: The customer wanted to establish CAN connectivity (Campus Area Network) for

More information

Top-Down Network Design

Top-Down Network Design Top-Down Network Design Chapter Five Designing a Network Topology Copyright 2010 Cisco Press & Priscilla Oppenheimer Topology A map of an internetwork that indicates network segments, interconnection points,

More information

How To Learn Cisco Cisco Ios And Cisco Vlan

How To Learn Cisco Cisco Ios And Cisco Vlan Interconnecting Cisco Networking Devices: Accelerated Course CCNAX v2.0; 5 Days, Instructor-led Course Description Interconnecting Cisco Networking Devices: Accelerated (CCNAX) v2.0 is a 60-hour instructor-led

More information

Load Balancing Mechanisms in Data Center Networks

Load Balancing Mechanisms in Data Center Networks Load Balancing Mechanisms in Data Center Networks Santosh Mahapatra Xin Yuan Department of Computer Science, Florida State University, Tallahassee, FL 33 {mahapatr,xyuan}@cs.fsu.edu Abstract We consider

More information

Interconnection Networks

Interconnection Networks CMPT765/408 08-1 Interconnection Networks Qianping Gu 1 Interconnection Networks The note is mainly based on Chapters 1, 2, and 4 of Interconnection Networks, An Engineering Approach by J. Duato, S. Yalamanchili,

More information

Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013

Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013 Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall Semester 2013 1 Walmart s Data Center 2 Amadeus Data Center 3 Google s Data Center 4 Data Center

More information

Objectives. The Role of Redundancy in a Switched Network. Layer 2 Loops. Broadcast Storms. More problems with Layer 2 loops

Objectives. The Role of Redundancy in a Switched Network. Layer 2 Loops. Broadcast Storms. More problems with Layer 2 loops ITE I Chapter 6 2006 Cisco Systems, Inc. All rights reserved. Cisco Public 1 Objectives Implement Spanning Tree Protocols LAN Switching and Wireless Chapter 5 Explain the role of redundancy in a converged

More information

QoS Switching. Two Related Areas to Cover (1) Switched IP Forwarding (2) 802.1Q (Virtual LANs) and 802.1p (GARP/Priorities)

QoS Switching. Two Related Areas to Cover (1) Switched IP Forwarding (2) 802.1Q (Virtual LANs) and 802.1p (GARP/Priorities) QoS Switching H. T. Kung Division of Engineering and Applied Sciences Harvard University November 4, 1998 1of40 Two Related Areas to Cover (1) Switched IP Forwarding (2) 802.1Q (Virtual LANs) and 802.1p

More information

How To Build A Low Cost Data Center Network With Two Ports And A Backup Port

How To Build A Low Cost Data Center Network With Two Ports And A Backup Port 1 FiConn: Using Backup Port for Server Interconnection in Data Centers Dan Li, Chuanxiong Guo, Haitao Wu, Kun Tan, Yongguang Zhang, Songwu Lu Microsoft Research, Asia, University of California, Los Angeles

More information

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center Expert Reference Series of White Papers Planning for the Redeployment of Technical Personnel in the Modern Data Center [email protected] www.globalknowledge.net Planning for the Redeployment of

More information

STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT

STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT 1. TIMING ACCURACY The accurate multi-point measurements require accurate synchronization of clocks of the measurement devices. If for example time stamps

More information

ADVANCED NETWORK CONFIGURATION GUIDE

ADVANCED NETWORK CONFIGURATION GUIDE White Paper ADVANCED NETWORK CONFIGURATION GUIDE CONTENTS Introduction 1 Terminology 1 VLAN configuration 2 NIC Bonding configuration 3 Jumbo frame configuration 4 Other I/O high availability options 4

More information

Operating Systems. Cloud Computing and Data Centers

Operating Systems. Cloud Computing and Data Centers Operating ystems Fall 2014 Cloud Computing and Data Centers Myungjin Lee [email protected] 2 Google data center locations 3 A closer look 4 Inside data center 5 A datacenter has 50-250 containers A

More information

IP SAN Best Practices

IP SAN Best Practices IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.

More information

Route Discovery Protocols

Route Discovery Protocols Route Discovery Protocols Columbus, OH 43210 [email protected] http://www.cse.ohio-state.edu/~jain/ 1 Overview Building Routing Tables Routing Information Protocol Version 1 (RIP V1) RIP V2 OSPF

More information

SummitStack in the Data Center

SummitStack in the Data Center SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution that Extreme Networks offers a highly virtualized, centrally manageable

More information

Lecture 24: WSC, Datacenters. Topics: network-on-chip wrap-up, warehouse-scale computing and datacenters (Sections 6.1-6.7)

Lecture 24: WSC, Datacenters. Topics: network-on-chip wrap-up, warehouse-scale computing and datacenters (Sections 6.1-6.7) Lecture 24: WSC, Datacenters Topics: network-on-chip wrap-up, warehouse-scale computing and datacenters (Sections 6.1-6.7) 1 Topology Examples Grid Torus Hypercube Criteria 64 nodes Performance Bisection

More information

Network Simulation Traffic, Paths and Impairment

Network Simulation Traffic, Paths and Impairment Network Simulation Traffic, Paths and Impairment Summary Network simulation software and hardware appliances can emulate networks and network hardware. Wide Area Network (WAN) emulation, by simulating

More information

DATACENTER NETWORKS AND RELEVANT STANDARDS

DATACENTER NETWORKS AND RELEVANT STANDARDS CHAPTER 4 DATACENTER NETWORKS AND RELEVANT STANDARDS Daniel S. Marcon, Rodrigo R. Oliveira, Luciano P. Gaspary and Marinho P. Barcellos Institute of Informatics, Federal University of Rio Grande do Sul,

More information

IP Multicasting. Applications with multiple receivers

IP Multicasting. Applications with multiple receivers IP Multicasting Relates to Lab 10. It covers IP multicasting, including multicast addressing, IGMP, and multicast routing. 1 Applications with multiple receivers Many applications transmit the same data

More information

Virtual PortChannels: Building Networks without Spanning Tree Protocol

Virtual PortChannels: Building Networks without Spanning Tree Protocol . White Paper Virtual PortChannels: Building Networks without Spanning Tree Protocol What You Will Learn This document provides an in-depth look at Cisco's virtual PortChannel (vpc) technology, as developed

More information