Scaling 10Gb/s Clustering at Wire-Speed
InfiniBand offers cost-effective wire-speed scaling with deterministic performance

Mellanox Technologies Inc., Santa Clara, CA
Introduction

Data center and high performance computing clusters that cannot compromise on scalable, deterministic performance need the ability to construct large node count, non-blocking switch configurations. To deliver service level agreements (SLAs) for applications based on 10Gb/s or higher networking speeds, it is necessary to utilize Clos networks, more commonly known as Fat Tree or Constant Bisectional Bandwidth (CBB) networks. A Clos network is a switch topology in which integrated non-blocking switch elements (crossbars) with a relatively low number of ports are used to build a non-blocking switch topology supporting a much larger number of endpoints.

There are two important aspects of constructing CBB networks:
1. Topology
2. Link level forwarding

Topology defines how the network is physically connected together. From a purely physical perspective, point to point networks are virtually identical, and any topology can be constructed using similar crossbar switch elements. Forwarding deals with the way individual packets are actually routed through the physical topology, and thus effectively defines a logical topology that exists on top of the physical one. While all point to point technologies can implement the same physical topologies, this is not the case for the logical topologies created by different forwarding algorithms. In fact, unlike InfiniBand, bridged Ethernet imposes severe constraints on the types of logical topologies that can be implemented and typically cannot make efficient use of the underlying physical topology. Overcoming this limitation of layer 2 Ethernet switches requires layer 3 (and higher) IP switches, which severely impact price and the deterministic performance metrics (both bandwidth and latency) critical in the data center.

Topology: Clos, CBB, and Fat Tree Networks

The concept of a CBB or Fat Tree topology is based on the seminal 1953 work on non-blocking switch networks by Charles Clos [1], which describes how a non-blocking switch network can be constructed from a minimal number of basic switch building blocks. To be strictly non-blocking, several key requirements must be met. The basic requirement is that the crossbar switches which make up the network must be configured appropriately to offer non-blocking forwarding for the particular traffic pattern being switched. This requirement is easily met in the public switched telephone network, for which Clos networks were originally designed, since voice conversations are relatively long lived compared to the time required to set up the crossbar switch forwarding tables. It is not met by packet based networks, since every packet is routed independently through the network of switches. Nonetheless, for a very large set of typical networking applications, there exist traffic patterns which closely resemble long lived circuit switched connections. Because of this, Clos networks have been successfully used in packet based networks and, to first order, provide non-blocking switch configurations. Leiserson [2] was the first to recognize this and extended the concept of a Clos network from telecommunications to high performance computing communications based on fat tree networks.

[1] C. Clos, "A Study of Non-blocking Switching Networks," Bell System Technical Journal, Vol. 32, 1953, pp. 406-424.
[2] Charles E. Leiserson, "Fat-Trees: Universal Networks for Hardware-Efficient Supercomputing," IEEE Transactions on Computers, Vol. C-34, October 1985, pp. 892-901.
Figure 1 shows both a simple binary tree and a fat tree topology. In the simple binary tree, the number of links, and thus the aggregate bandwidth, is halved at each stage of the network. This can create serious congestion hot spots toward the root as all traffic funnels into a single link. The fat tree topology remedies this by maintaining identical bandwidth at each level of the network. Another way to look at the fat tree is to imagine a line that bisects the network into an equal number of endpoints: in a fat tree topology, any such bisection line cuts an equal number of links, hence the name Constant Bisectional Bandwidth. Note that the diagram shows simple switch elements with only a few ports; the CBB topology is readily extended to higher port count crossbar switch building blocks.

Figure 1: Binary and Fat Tree topologies

In general, Clos, CBB, and Fat Tree describe identical switch topologies; in high performance compute clusters the CBB nomenclature is most common and will be used from here on.

So what is the point of building a CBB network? Given a basic switch building block with N ports and the requirement to build a switch network connecting M end nodes (where M > N), a CBB network defines an efficient topology, using multiple levels of basic switch elements, that provides a non-blocking switch configuration supporting M endpoints. In such multi-level switch architectures, some of the ports of each basic switch element are connected to end nodes, while other ports are connected internally to additional levels of basic switch elements which do not connect to endpoints. As previously described, the key attribute of the CBB topology is that at any given level of the switch topology, the amount of bandwidth connected to the downstream end nodes is identical to the amount of bandwidth connected to the upstream interconnection path. Note that the links are bidirectional, so the notion of upstream and downstream describes the direction of the interconnect topology toward or away from the shortest path to an end node, rather than the actual data flow.
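To make the contrast in Figure 1 concrete, the short sketch below (helper names are ours, not from the paper) counts the aggregate links available at each level boundary of a binary tree versus a CBB fat tree built over the same leaves: the binary tree's count halves toward the root, while the fat tree's stays constant.

```python
# Sketch: per-level link counts for a binary tree vs. a fat tree.
# Illustrative only; assumes 'leaves' is a power of two.

def binary_tree_uplinks(leaves: int) -> list[int]:
    """Links crossing each level boundary, leaf side first.
    Each pair of nodes funnels into a single parent link."""
    counts = []
    nodes = leaves
    while nodes > 1:
        nodes //= 2           # half as many parents...
        counts.append(nodes)  # ...so half as many uplinks per level
    return counts

def fat_tree_uplinks(leaves: int) -> list[int]:
    """A CBB fat tree keeps the aggregate link count constant at
    every level, so bisection bandwidth never shrinks."""
    levels = (leaves - 1).bit_length()
    return [leaves] * levels

print(binary_tree_uplinks(16))  # [8, 4, 2, 1]  -> all traffic funnels to the root
print(fat_tree_uplinks(16))     # [16, 16, 16, 16] -> constant bisectional bandwidth
```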
Two Level CBB Network Example

Figure 2: Example of a simple two level CBB switch topology (each Level 1 switch has P/2 ports facing end nodes and P/2 ports facing Level 2)

Consider the two level switch topology shown in Figure 2, constructed from basic switch building blocks having P ports. Suppose there are N1 Level 1 switches, each of which has P/2 ports connected downstream to end nodes and an identical number connected upstream to the second level switches. The total number of endpoints connected is then N1 * P/2. Suppose, for example, each basic switch building block has eight ports and a two level network is constructed with four ports connected to end points and four to Level 2 switches, as shown. There are four Level 1 switches, so P = 8 and N1 = 4; hence, using an 8-port building block, this topology builds a 16-port non-blocking switch. More endpoints can be connected by increasing the number of Level 1 switches, and more still by increasing the number of levels in the switch topology. The key to making the configurations non-blocking is to always preserve identical upstream and downstream bandwidth between any two levels of the multi-level switch topology.

Closer examination of the example shows that in a two level CBB network, the total number of basic switch element ports used is three times the number of end nodes connected. This is so because for each end node port there is another Level 1 switch port which connects to yet a third port on a second level switch element. Thus the 16 end nodes supported in the example use six separate eight-port switches, requiring a total of 48 ports.
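The sizing arithmetic above generalizes directly. As a rough illustration (the function and parameter names are ours, not Mellanox's), the sketch below computes the endpoint, switch, and port counts for a two level CBB network of P-port crossbars and reproduces the 16-endpoint example.

```python
# Sketch: sizing a two-level CBB (fat tree) network from P-port crossbars.
# Illustrative only; assumes P is even and half of each Level 1 switch's
# ports face end nodes while the other half face Level 2.

def two_level_cbb(p: int, n1: int) -> dict:
    endpoints = n1 * p // 2          # P/2 end-node ports per Level 1 switch
    uplinks   = n1 * p // 2          # matching upstream bandwidth
    level2    = uplinks // p         # Level 2 switches absorb all uplinks
    switches  = n1 + level2
    ports     = switches * p         # 3 switch ports per endpoint overall
    return dict(endpoints=endpoints, level2=level2,
                switches=switches, ports=ports)

# The example from the text: eight-port building blocks, four Level 1 switches.
print(two_level_cbb(p=8, n1=4))
# {'endpoints': 16, 'level2': 2, 'switches': 6, 'ports': 48}
```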
Link Level Forwarding, Loops, and Logical Topologies

While point to point networks are nearly identical at the topology (physical) level, there are profound differences at the logical level. Two key issues related to link level forwarding are:
1. Spanning: the ability of every node to communicate with every other node
2. Loop breaking

Many different algorithms are available to address these issues, and these algorithms have a profound impact on network performance. Of the two, providing an algorithm that spans is fairly straightforward; loop breaking is the more critical issue. Loop breaking is required because the physical network topology is human administered, so it cannot be guaranteed that the network is physically connected free of loops. Network loops are problematic because a packet forwarded from the output port of a crossbar switch may ultimately arrive at an input port of the same crossbar switch. That crossbar element will, of course, forward the packet out the same output port originally used to transmit it, thereby creating a transmission loop. Under these circumstances, and without any other mechanism to resolve the situation, the packet will flow indefinitely in a loop, consuming bandwidth and ultimately creating a fully congested network unable to forward any other traffic. In short, loops are a potentially catastrophic aspect of switched networks that must be algorithmically addressed by any useful forwarding algorithm.

Ethernet Forwarding and the Spanning Tree Algorithm

Ethernet adopted a simple but elegant algorithm, known as the spanning tree algorithm, to deal with loops. Every layer 2 Ethernet switch is required to run this algorithm during initialization, and periodically thereafter to accommodate changes to the physical network. Briefly, the algorithm works as follows. Every layer 2 switch starts by broadcasting a 2-tuple consisting of (i) its own unique NodeID and (ii) a cost of 0. Each switch then continues to broadcast a 2-tuple consisting of (i) the best (lowest) NodeID received via any port and (ii) the best cost (in number of hops) required to reach that NodeID. If a switch receives a better {NodeID, cost}, it increments the cost by 1 and starts sending the new {NodeID, cost} 2-tuple. Ultimately, the switch with the best NodeID is recognized throughout the network as the root node. Every switch then determines its own root port, the port providing the lowest cost to the root node; ties in the cost metric are broken arbitrarily by port number so that a single root port is chosen. In addition, every switch determines the ports for which it has been identified as the best forwarding bridge. All other ports are logically turned off (no packets are forwarded out these ports). More or less, this is it: elegant and simple at breaking loops, but unfortunately not ideal for CBB networks, as will be shown.
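As a rough illustration of the election just described, the sketch below (a simplification; real 802.1D bridges exchange BPDUs with timers and additional state) iterates the {NodeID, cost} exchange over a six-switch fabric like the one analyzed next, until every switch agrees on the root and its cost to reach it.

```python
# Sketch: the {NodeID, cost} exchange at the heart of spanning tree.
# Simplified: synchronous rounds, no timers or topology-change handling.

def elect_root(links: dict[int, list[int]]) -> dict[int, tuple[int, int]]:
    """links maps switch ID -> neighbor switch IDs.
    Returns each switch's converged (root ID, hop cost to root)."""
    best = {sw: (sw, 0) for sw in links}  # every switch claims root at cost 0
    changed = True
    while changed:
        changed = False
        for sw, neighbors in links.items():
            for nb in neighbors:
                root, cost = best[nb]
                # adopt a lower root ID, or a shorter path to the same root
                if (root, cost + 1) < best[sw]:
                    best[sw] = (root, cost + 1)
                    changed = True
    return best

# Six switches wired as the two-level fat tree of Figure 3
# (1 and 2 at Level 2; parallel links between switches omitted).
fabric = {1: [3, 4, 5, 6], 2: [3, 4, 5, 6],
          3: [1, 2], 4: [1, 2], 5: [1, 2], 6: [1, 2]}
print(elect_root(fabric))  # every switch converges on root 1
```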
Problems with Spanning Tree for Implementing CBB Clusters

This description simplifies the spanning tree algorithm, which also includes enhancements such as switches periodically scanning the network and discarding stale information to accommodate network changes and device failures. Nonetheless, it captures the essence of the algorithm and is adequate for understanding its limitations for constructing CBB networks for clustering.

As an example, apply the spanning tree algorithm to the network shown in Figure 3, and for simplicity assume the unique IDs of the six switches range from 1 to 6. Clearly, as the spanning tree algorithm progresses, the switch on the upper left with the ID of 1 will emerge as the root node.

Figure 3: Fat tree topology with switches labeled with arbitrary unique IDs 1-6

Now, since the shortest path to the root node is directly to switch #1, the spanning tree algorithm will disable all Level 1 switch ports that connect to the second level switch labeled #2. Furthermore, one of the two ports on each Level 1 switch connecting to switch #1 will be disabled, leaving only one active port. The physical CBB network thus reduces to the logical network shown in Figure 4, with disabled ports and switches shown with red dotted lines.

Figure 4: Logical network after the spanning tree algorithm disables ports to eliminate loops (disabled elements shown with dotted lines)

In essence, switch #2 is logically removed from the network, and only half of the links between the Level 1 switches and switch #1 remain active. The spanning tree algorithm behaves this way because its goal is to provide a loop free logical network that spans to every node (i.e., every node in the network is reachable by every other node). Only connectivity is considered; preserving balanced bandwidth is not. Consequently, switch #2 is logically removed, as it provides no spanning connectivity not already provided by switch #1. Similarly, the redundant inter-level links, though they would allow balanced bandwidth connections, are discarded as superfluous. Without balanced bandwidth, the Ethernet network is subject to numerous congestion points, leading to dropped packets and throttled injection rates at transmitting nodes, and hence to less than wire speed and non-deterministic performance. In other words, the spanning tree algorithm does not allow layer 2 Ethernet switches to be used to build CBB cluster networks.

Scaling and Cost Challenges with Other Alternatives

Link aggregation schemes (such as IEEE 802.3ad), which bundle links or ports into single logical ports, are one option that can be deployed. However, load balancing and aggregation across the links in the bundle are based on fixed algorithms not designed to address dynamically changing traffic patterns or congestion points (see the sketch after this section). As the number of switch tiers in the network infrastructure increases, programming the link aggregation algorithms to meet the needs of the network becomes an IT manager's headache. As such, link aggregation schemes solve localized bandwidth restrictions but cannot scale to provide wire-speed routing across the data center network.

Use of higher level (layer 3 and above) Ethernet switches with different routing algorithms is another option, but these, too, are not designed with CBB clustering networks in mind. Furthermore, IP routing requires sophisticated functionality such as longest prefix matching, subnetting, and supernetting. This functionality requires advanced processing, which in turn requires store-and-forward methods and thus precludes the use of cut-through switches. The requirement for store-and-forward and advanced processing introduces significant switch latency, makes performance non-deterministic, and greatly increases the cost of these switches.
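The fixed-algorithm limitation of link aggregation is easy to see in miniature. In the hypothetical sketch below, a flow is mapped to a LAG member link by hashing header fields, as typical implementations do in hardware; the mapping never consults link load, so colliding heavy flows congest one member link while others sit idle.

```python
# Sketch: static hash-based member-link selection, as commonly used by
# LAG/802.3ad implementations. Hypothetical flows; real hardware hashes
# MAC/IP/port fields, but the fixed-mapping behavior is the same.
from zlib import crc32

def pick_link(src: str, dst: str, links: int) -> int:
    """Statically maps a flow to one member link; never consults load."""
    return crc32(f"{src}-{dst}".encode()) % links

flows = [("10.0.0.1", "10.0.0.9"), ("10.0.0.2", "10.0.0.9"),
         ("10.0.0.3", "10.0.0.9"), ("10.0.0.4", "10.0.0.9")]
for src, dst in flows:
    print(src, "->", dst, "on link", pick_link(src, dst, links=4))
# Hash collisions place multiple heavy flows on one link, and the fixed
# mapping cannot react to the resulting congestion.
```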
InfiniBand Forwarding Can Scale Deterministically

By contrast, InfiniBand does not dictate the use of the spanning tree algorithm and, in fact, does not mandate any specific algorithm at all. Instead, InfiniBand defines a set of forwarding and loop breaking mechanisms on top of which any destination routed algorithm may be implemented. The choice of algorithm is a function of the Subnet Manager (SM), which is responsible for discovering the physical topology and setting up the logical topology by configuring the forwarding tables in each switch. A single master SM is required to manage an InfiniBand subnet and may reside on any node (typically a switch or host server). The SM is a software process, so configuration policies are completely flexible and can implement appropriate high performance forwarding algorithms. In the case of a CBB network, a weighted, shortest mean path algorithm is typically implemented so that traffic is evenly distributed across the available connectivity and loops are prevented. Using these more sophisticated algorithms, InfiniBand addresses loop breaking without sacrificing the ability to realize logical topologies that take full advantage of the physical topology.

In fact, even more advanced load balancing and dynamic algorithms are possible which take advantage of InfiniBand's multi-path, virtual lane, and automatic path migration capabilities. Multi-pathing extends InfiniBand's destination routing so that traffic flows to a given endpoint can actually traverse different paths through the network. This allows load balancing software to monitor the network, detect hot spots, and choose different routes through the network to avoid congestion. Similarly, the automatic path migration capabilities that are useful for implementing failover can also be used by sophisticated routing algorithms. Furthermore, with InfiniBand the SM has mechanisms other than port disabling to break potential loops. For example, InfiniBand's service level to virtual lane mapping allows erroneously received packets to be dropped without completely disabling a port. These more powerful mechanisms allow InfiniBand fabrics to take full advantage of the physical topology and distribute traffic across links to make the best use of all available bandwidth.

The user is shielded from all this sophistication, as the SM automatically and transparently discovers and initializes the fabric. For users who do want direct control over configuring the logical topology, the SM provides a software interface through which user programmable algorithms can be implemented. The InfiniBand architecture thus provides the best combination of automatic provisioning and the flexibility to implement optional user defined algorithms.

At an even lower level, the InfiniBand architecture is a lossless fabric and includes mechanisms that avoid dropped packets and guarantee in-order packet delivery. InfiniBand defines link level, credit based flow control, so that packet data is not transmitted unless there is guaranteed buffering at the receiving end of the link. This means that, unlike Ethernet, the InfiniBand link level fabric does not drop packets due to buffer overruns, and thus provides guaranteed in-order delivery and deterministic performance.
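For a feel of what the SM does when it programs a CBB fabric, the sketch below (illustrative only, not OpenSM's actual code; all names are ours) fills a Level 1 switch's destination forwarding table, sending local destinations straight down and spreading remote destinations round-robin across all uplinks, so that the logical topology uses every link the physical topology provides rather than collapsing onto one root port.

```python
# Sketch: how a subnet-manager-style algorithm might fill a Level 1 switch's
# forwarding table in a CBB fabric. Illustrative only.

def build_forwarding_table(local_ports: dict[int, int],
                           remote_lids: list[int],
                           uplink_ports: list[int]) -> dict[int, int]:
    """Maps destination LID -> output port.
    local_ports:  LID -> downstream port for directly attached end nodes.
    remote_lids:  endpoints reached through the upper level.
    uplink_ports: ports toward Level 2 switches."""
    table = dict(local_ports)  # local destinations go straight down
    for i, lid in enumerate(sorted(remote_lids)):
        # round-robin remote destinations across all uplinks, balancing
        # traffic over the full upstream bandwidth of the CBB topology
        table[lid] = uplink_ports[i % len(uplink_ports)]
    return table

# Four local nodes on ports 1-4, twelve remote LIDs, four uplinks on ports 5-8.
lft = build_forwarding_table(local_ports={1: 1, 2: 2, 3: 3, 4: 4},
                             remote_lids=list(range(5, 17)),
                             uplink_ports=[5, 6, 7, 8])
print(lft)  # remote LIDs alternate across ports 5-8; no uplink is left idle
```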
Summary

Constant Bisectional Bandwidth (CBB) or Fat Tree networks have emerged as a key ingredient for delivering non-blocking, scalable bandwidth in high performance computing and other large scale data center clusters. In addition to wiring the network into a physical CBB topology, system architects also need to pay close attention to the underlying networking technology. The spanning tree algorithm required by Ethernet layer 2 switches is not able to exploit physical CBB fat tree topologies. Moving to expensive layer 3 switches solves the spanning tree related scaling problem, but adds the overhead of additional layer 3 algorithms and store-and-forward switching.
InfiniBand, on the other hand, combines automatic configuration with flexibility in the forwarding algorithm, allowing it to take full advantage of the underlying CBB network. As such, InfiniBand can scale deterministically, maintaining full wire speed across the cluster, and through the use of simple cut-through switching helps keep costs to a minimum.
