Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches
February, 2009
Legal

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Intel products are not intended for use in medical, life saving, life sustaining, critical control or safety systems, or in nuclear facility applications. Intel may make changes to specifications and product descriptions at any time, without notice.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

The Controller may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. *Other names and brands may be claimed as the property of others.

Copyright Intel Corporation. All Rights Reserved.
Table of Contents

- Overview
- Lossless Operation using PFC
- Flow Optimization Using ETS
  - Traffic Classes
  - Strict Priority
  - Deficit Round Robin (DRR)
  - Traffic Shaping
  - Example Implementation
- Efficient Fabric Scaling
  - FM4000 Multi-path Support
  - Cut-Through Architecture
  - Power Efficiency
- Bandwidth Optimization using QCN
  - NIC Proxy Mechanism
- Fabric Management with DCBX
- Conclusion
Overview

As the size and density of datacenters increase, the cost, area, power and support burden of multiple interconnect fabric technologies can no longer be tolerated. Because of this, the IEEE is developing several new standards that will enable Ethernet to serve as a single unified fabric for data, storage and HPC traffic. The industry has coined these initiatives Converged Enhanced Ethernet (CEE) or Datacenter Bridging (DCB). This paper discusses several advanced Intel Ethernet Switch Family features available in the FM4000 that were developed to support CEE-DCB applications. Figure 1 shows an overview of the features used for congestion management in the FM4000; how these features can be used for CEE-DCB is described in more detail throughout the rest of this paper.

Figure 1. FM4000 Congestion Management Features

Lossless Operation using PFC

Storage protocols require deterministic, bounded latency to ensure that time-out values are not exceeded. Traditional Ethernet switches drop packets during periods of high congestion, requiring retransmission at a higher layer.
This added latency cannot be tolerated in storage applications. Standard IEEE pause frames pause all traffic during periods of congestion, so with large enough buffers traffic will not be dropped, but unacceptable latencies may still be seen. Because of this, the IEEE is standardizing Priority Flow Control (PFC), which differentiates traffic classes, providing bounded latency for storage traffic.

The FM4000 contains a Frame Forwarding Unit that can be used for deep header inspection, including identification of traffic types such as storage or HPC. It can also assign these traffic types to one of 8 internal traffic classes, and each traffic class can in turn be assigned to one of 2 shared memory partitions. When a partition fills past a watermark, pause frames are generated for all traffic classes assigned to that memory partition. For example, storage traffic can be assigned to one memory partition and data traffic to another; in this case, the switch will not pause storage traffic if data traffic causes congestion in the switch.

The FM4000 device supports PFC. This capability is compliant with the latest version of the draft IEEE specification, IEEE P802.1Qbb/D0.2. It was also designed in compliance with the initial Cisco specification made public through the IEEE in May, 2007 and later updated in proposals from Cisco. In addition, interoperability has been successfully tested at Intel's lab with several industry-leading NIC vendors.

The FM4000 may be configured at ingress and at egress to support PFC. At the switch ingress, a PFC pause frame can be generated from the FM4000 device to the upstream link partner based on per-port memory partition watermark settings. When a watermark is exceeded, a PFC pause frame is generated in accordance with the specification. At the switch egress, a PFC pause frame can be received by the FM4000 device and interpreted to pause a particular traffic class. When this happens, the egress scheduler will not schedule frames of that priority to that egress port, but may schedule frames from other priority queues to it. All PFC features in the FM4000 device perform at line rate with no impact on latency or overall packet forwarding performance.
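To make the on-the-wire behavior concrete, the following minimal sketch builds a PFC pause frame with the layout described in the 802.1Qbb draft. The FM4000 generates these frames in hardware; the Python below is purely illustrative, and the example source MAC address is hypothetical.

```python
import struct

PFC_OPCODE = 0x0101                          # priority-based flow control (802.1Qbb)
MAC_CONTROL_ETHERTYPE = 0x8808               # MAC control frame
PAUSE_DEST = bytes.fromhex("0180c2000001")   # reserved multicast address for pause

def build_pfc_frame(src_mac: bytes, quanta: dict) -> bytes:
    """Build a PFC pause frame for the given priorities.

    quanta maps priority (0-7) -> pause time in 512-bit-time quanta
    (0xFFFF is the maximum; 0 un-pauses the priority).
    """
    enable_vector = 0
    times = [0] * 8
    for prio, q in quanta.items():
        enable_vector |= 1 << prio           # bit n enables the time field for priority n
        times[prio] = q
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
    frame = PAUSE_DEST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    return frame.ljust(60, b"\x00")          # pad to minimum Ethernet size (before FCS)

# Pause priority 3 (e.g., a storage class) for the maximum time:
frame = build_pfc_frame(bytes.fromhex("001b21aabbcc"), {3: 0xFFFF})
```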
Flow Optimization Using ETS

In any switch fabric, traffic from multiple ingress ports can compete for the limited bandwidth of a single egress port. For example, bursts of data traffic could reduce the bandwidth available to storage traffic, causing congestion and increased latency for the storage traffic. To solve this problem, the IEEE is developing Enhanced Transmission Selection (ETS), which provides advanced traffic scheduling, including features such as guaranteed minimum bandwidth for certain traffic classes like storage or HPC.

The FM4000 device supports ETS and is compliant with the most current version of the IEEE specification, IEEE P802.1Qaz/D0.1. The ETS support in the FM4000 enables the definition of priority groups at each egress port of the switch. Priority queues are combined into groups, and a deficit round robin scheduling algorithm can be applied to the groups. The queues in a particular group can have different priorities, but the switch is able to apply similar congestion control to the multiple queues within a group. The switch is also able to establish some priority groups with PFC and some without; it is important that both capabilities are configurable. The IEEE allows the details of the scheduling algorithm to be vendor specific, provided that the required behavior on the wire can be met.

In addition to the capability defined by the IEEE, the FM4000 device supports numerous enhancements in this area. The FM4000 can implement a hierarchical scheduling algorithm between the priority groups and the priority queues, which enables strict priority between the priority queues within a priority group. The FM4000 also supports shaping between the priority queues, and it supports 8 priority queues per egress port, which is contemplated by the IEEE but not required. The following sections describe some of the mechanisms and features available in the FM4000's multi-level egress scheduler.

Traffic Classes

The FM4000 contains a Frame Forwarding Unit that can be used for deep header inspection, including identification of traffic types such as storage or HPC. It can also assign these traffic types to one of 8 internal traffic classes, which are used for ingress flow control and egress scheduling.

Strict Priority

Scheduler groups can be defined to provide either strict priority or deficit round robin. For strict priority, higher-numbered traffic classes are always scheduled before lower-numbered traffic classes unless the higher class has been bandwidth limited or is being flow controlled by a downstream device. Consecutive traffic class numbers can also be assigned to the same strict priority group, such that any traffic class in the group with a frame queued is always scheduled before lower-numbered traffic classes, unless all traffic classes in the group have been bandwidth limited or are being flow controlled by a downstream device. A sketch of this selection rule follows.
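The sketch below expresses the strict priority rule as pseudocode in Python. The per-class queue depth and eligibility state are hypothetical illustrations; the actual arbitration is performed in silicon per egress port.

```python
def pick_strict_priority(queue_depth: list, eligible: list):
    """Strict priority among 8 traffic classes: the highest-numbered class
    with a frame queued wins, unless it is bandwidth limited or paused
    (i.e., not eligible), in which case the next lower class is considered."""
    for tc in range(7, -1, -1):          # higher class number = higher priority
        if queue_depth[tc] > 0 and eligible[tc]:
            return tc
    return None                          # nothing eligible to schedule
```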
Deficit Round Robin (DRR)

Deficit round robin gives a minimum bandwidth guarantee to a traffic class, which can also be used to guarantee a maximum latency. Consecutive traffic class numbers can also be grouped so that a group of traffic classes is given a minimum bandwidth guarantee; for example, bandwidth can be allocated among the traffic classes within a priority group. A strict priority group cannot be placed between DRR scheduler groups. A traffic class within a DRR group will be scheduled at its minimum bandwidth rate provided that it has traffic to schedule, that no eligible higher priority traffic is consuming the bandwidth, and that the traffic class is not being flow controlled.

Traffic Shaping

Traffic shaping creates an upper bound on the bandwidth for a traffic class in order to reduce jitter. If DRR is used, the maximum shaping bandwidth is expected to be set higher than the minimum DRR bandwidth. Consecutive traffic class numbers can be placed in the same shaping group so that the aggregate bandwidth of the group does not exceed a maximum value. Traffic shaping in combination with DRR provides an excellent means to optimize traffic flows through the fabric.
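The following minimal sketch shows the classic DRR algorithm that this kind of scheduler group implements. The queue structures and quantum values are illustrative, not the FM4000's internal representation; the quantum ratio between classes sets the minimum bandwidth ratio.

```python
from collections import deque

class DrrScheduler:
    """Deficit round robin over a group of traffic classes. Each class's
    quantum (bytes earned per round) sets its guaranteed share."""

    def __init__(self, quantum_bytes: dict):
        self.quantum = dict(quantum_bytes)
        self.deficit = {tc: 0 for tc in self.quantum}
        self.queues = {tc: deque() for tc in self.quantum}
        self.active = deque()                     # classes with frames queued

    def enqueue(self, tc, frame):
        if not self.queues[tc]:
            self.active.append(tc)
        self.queues[tc].append(frame)

    def dequeue_burst(self):
        while self.active:
            tc = self.active.popleft()
            self.deficit[tc] += self.quantum[tc]  # earn this round's quantum
            q, sent = self.queues[tc], []
            while q and len(q[0]) <= self.deficit[tc]:
                self.deficit[tc] -= len(q[0])     # pay for the frame in bytes
                sent.append(q.popleft())
            if q:
                self.active.append(tc)            # still backlogged: next round
            else:
                self.deficit[tc] = 0              # drained: deficit is not banked
            if sent:
                return tc, sent
        return None

# Class 3 receives roughly three times class 1's share of the link:
sched = DrrScheduler({3: 6000, 1: 2000})
sched.enqueue(3, b"\x00" * 1500)
sched.enqueue(1, b"\x00" * 1500)
```

Shaping can be layered on top of this by additionally gating each class with a token bucket that fills at the class's maximum rate and skipping classes whose bucket is empty, which bounds the class's bandwidth from above just as the quantum bounds it from below.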
Example Implementation

As the discussion above shows, a wide variety of scheduling configurations can be implemented in each FM4000 egress port. Figure 2 is an example of how an egress scheduler could be used.

Figure 2. FM4000 Multi-level Scheduler Example

Efficient Fabric Scaling

Scalability is important in data center switches, but it cannot come at the expense of lower performance due to factors such as blocking. The FM4000 architecture provides several features to alleviate blocking in multi-stage datacenter fabrics. The main feature is a large number of 10G ports per switch element. This allows the efficient creation of a multi-tiered switch architecture, commonly called a Fat Tree, which provides constant bandwidth between switching layers and forms a non-blocking, scalable switch, as shown in Figure 3.
Figure 3. Multi-tiered Switch Architecture

This configuration is another example of the importance of low-latency switch elements. With 300 ns FM4000 switch elements, a packet crosses at most three switching stages between any two CPU blades in any two racks, so the maximum latency is less than 1 µs; with an alternative switch element, the latency can be well over 10 µs for the same configuration. Adding 10 µs of latency to virtually any application, even a simple enterprise automation application, will meaningfully impact performance and efficiency. A 2-tier fat tree, built from top-of-rack switches and blade switches, is shown in Figure 4.
Figure 4. Example Multi-tiered System using TOR Switches

The number of network ports presented by a Fat Tree architecture scales exponentially with the number of tiers in the switch. The Intel Ethernet Switch Family switches were designed with a set of features optimized for the effective implementation of this architecture: a 2-tier FM4000-based system provides up to 288 10G ports in a non-blocking configuration, and a 3-tier system up to 3,456 non-blocking 10G ports (greater density can be achieved by introducing over-subscription in the system). By using highly integrated FM4000 switches in this architecture, and by using standard Ethernet switching throughout the fabric, scalability is achieved that is more cost-, space- and power-effective than the standard Big Iron approach. The scaling arithmetic is sketched below. The Intel Ethernet Switch Family provides additional features to avoid blocking in multi-tiered fabrics, such as an output-queued shared memory architecture, separate memory partitions for different traffic types, efficient load distribution across second-tier switch devices, and congestion feedback mechanisms such as class-based pause frames and QCN.
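These port counts follow from a simple recurrence: edge switches devote half of their ports to hosts and half to uplinks, and each additional tier multiplies capacity by half the element's port count. A minimal sketch, assuming 24-port 10G switch elements (consistent with the 3,456-port figure above):

```python
def fat_tree_ports(k: int, tiers: int) -> int:
    """Non-blocking host-facing ports of a fat tree built from k-port
    switch elements: each tier above the first multiplies capacity by k/2."""
    return k * (k // 2) ** (tiers - 1)

assert fat_tree_ports(24, 2) == 288      # two tiers
assert fat_tree_ports(24, 3) == 3456     # three tiers
```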
FM4000 Multi-path Support

The FM4000 devices support multi-pathing. This capability allows one to define multiple output ports from the switch and enables the switches to hash between those output ports to load-balance traffic across multiple second-level switches. Unlike link aggregation, multi-pathing does not require both ends of the links to terminate on the same set of switches. Multi-pathing can be used to build Clos fabrics in Ethernet, similar to the Clos fabrics used in Infiniband applications; in conjunction with the DCE features and shared memory packet storage, the performance of multi-pathing can be considerably higher than in Infiniband. The FM4000 supports layer 2 multi-pathing with an ISL tag and layer 3 multi-pathing using ECMP. When the ISL tag is used, the multi-pathing configuration can be modified on the fly to accommodate changes in link topology for resiliency. These changes are implemented in a low-level mapping table, sparing the switch from doing a spanning tree reconfiguration.
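To illustrate the load-balancing step, the sketch below hashes a flow's 5-tuple to choose one of the equal-cost uplinks. The FM4000's actual hash function and field selection are device specific and configurable, so this is only a conceptual example.

```python
import struct
import zlib

def select_uplink(src_ip: int, dst_ip: int, proto: int,
                  src_port: int, dst_port: int, n_uplinks: int) -> int:
    """Hash a flow's 5-tuple to pick an equal-cost uplink. All packets of
    a flow take the same path, preserving ordering, while different flows
    spread across the second-tier switches."""
    key = struct.pack("!IIBHH", src_ip, dst_ip, proto, src_port, dst_port)
    return zlib.crc32(key) % n_uplinks
```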
Cut-Through Architecture

In order to achieve low latency independent of packet size, the Intel Ethernet Switch Family switches employ a cut-through architecture. Traditional Ethernet switches use store-and-forward mechanisms, where the latency can be many microseconds per stage depending on the packet size, which is considered unacceptable for high performance datacenter applications. Intel Ethernet Switch Family latency compared to other Ethernet switch products is shown in Figure 5 below.

Figure 5. Single Stage Switch Latency vs. Packet Size

Intel Ethernet Switch Family switches have been designed for the datacenter using cut-through operation that can achieve less than 1 µs of latency through three fabric stages. In addition, this latency can be achieved with all L2/L3/L4 features enabled. This makes Ethernet a compelling unified fabric solution for the datacenter.

The FM4000 devices also support parallel multicast, a much-needed feature in many datacenter applications. The switch can saturate all of its output ports simultaneously with layer 2 or layer 3 multicast. In an uncongested switch, there is less than 70 ns of min-max delay between the first and last multicast packet in a 23-way multicast tree. This capability is unique to Intel switches and is highly desirable in market trading and other transaction applications that require low-latency synchronization between nodes. It is also very useful in HPC environments for MPI collective operations.

Power Efficiency

In large multi-stage switch configurations, the power per switch has a significant impact on overall system power. The FM4000 incorporates patented low-power Intel technology, which cannot be matched by standard Ethernet devices. In addition, unused interfaces can be disabled to consume no power, and the core power scales directly with the level of activity.

Bandwidth Optimization using QCN

In multi-stage fabrics, congestion hot spots can occur and cause congestion spreading. To combat this, a new IEEE work group has been formed to develop the Quantized Congestion Notification (QCN) standard. The FM4000 device was built with several features to support the QCN standard, and the FM4000 reference design, Monaco, implements QCN features compatible with P802.1Qau/D1.1. In the Monaco reference design, a small, low-cost, low-power FPGA (Altera Cyclone III) sits between the switch and the CPU. This FPGA monitors the FM4000's queues and modifies the configuration of the FM4000 to make it compatible with the new QCN algorithm. The vast majority of the processing necessary for QCN is in the FM4000 device itself, so the performance of the algorithm is not determined by the FPGA.
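The core of QCN is the congestion-point feedback value computed from queue occupancy. The sketch below follows the formulation in the 802.1Qau draft; the sample values and the weight are illustrative defaults, not Monaco configuration.

```python
def qcn_feedback(q_len: int, q_old: int, q_eq: int, w: float = 2.0) -> int:
    """Congestion-point feedback per the 802.1Qau draft: combines how far
    the queue is above its equilibrium set-point (q_off) with how fast it
    is growing (q_delta). Negative values are quantized into a congestion
    notification message sent back toward the traffic source."""
    q_off = q_len - q_eq          # queue excess over the operating point
    q_delta = q_len - q_old       # queue growth since the last sample
    fb = -(q_off + int(w * q_delta))
    return min(0, fb)             # only congestion (fb < 0) is signalled

# Sampled queue at 150 KB, previous sample 120 KB, set-point 100 KB:
fb = qcn_feedback(150_000, 120_000, 100_000)   # negative, so a CNM is sent
```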
NIC Proxy Mechanism

Because NICs may not implement the QCN standard for some time, the FM4000 has been developed with a NIC proxy feature, which implements the logic that would normally go into a compliant NIC device. This allows the use of QCN before there is widespread adoption of the algorithm by adapter vendors. Support for the configuration of the QCN algorithm is implemented in the Intel Ethernet Switch Family API.

NIC Proxy allows the entire QCN algorithm to be implemented within the switch by providing the QCN rate limiters in the switch device itself. The switch traps the QCN frames from a distant congested egress port and then adjusts its token bucket rate limiters until the congestion point is satisfied with the aggregate flow rates. The remaining step is to convert the output of the rate limiters into a format the NICs can interpret, which is done through a PFC pause pacing function: the rate limiter causes the switch ingress port to send occasional PFC pause messages until the upstream NIC settles at the required rate. Because of this, the NIC only needs to support PFC.

Fabric Management with DCBX

Management of a datacenter bridging fabric requires the exchange of parameters between switches. The IEEE is developing the Datacenter Bridging Exchange protocol (DCBX), also described in IEEE P802.1Qaz/D0.1, to support this, and the FM4000 supports the DCBX protocol. Support for DCBX in the silicon is provided through the support of LLDP and the underlying support for the capabilities that DCBX negotiates. Intel currently supports the necessary features for DCBX packet handling in the Intel Ethernet Switch Family API, and also has DCBX software in development for other specialized environments. As a result of this work, the FM4000 has features for the discovery of DCB peer capability, detection of DCB misconfiguration, and DCB peer configuration.

Conclusion

Emerging datacenters will require converged fabrics in order to minimize cost, size, power and support. Because of this, the IEEE has several new initiatives, known collectively as Converged Enhanced Ethernet (CEE) or Datacenter Bridging (DCB). These initiatives include PFC, ETS, QCN and DCBX. The FM4000 switch products support all of these features while also maintaining industry-leading latency in large scalable fabric configurations.