Simplifying the Data Center Architecture: Reducing Network Tiers with MRJ21 Based Technology

Abstract: The adoption of newer technologies such as blade servers and server virtualization is changing the way data center networks are built. In many cases these technologies, despite delivering impressive efficiencies on the server side, result in more complex network designs. This white paper explores the impact of the growing number of network tiers in the data center that arises from the adoption of blade server technology, and outlines one approach to simplify and optimize the network architecture while still realizing the benefits of blade servers.

Overview

Server virtualization and blade servers are driving consolidation in today's data centers. With server virtualization technology, multiple server instances can be consolidated onto a single physical server in the form of Virtual Machines (VMs). The VMs on a single server communicate with each other through a virtual switch (vSwitch), a piece of software running in the virtualization layer on the server (also called the hypervisor) that functions as a software Layer 2 switch.

Blade server technology packs a tremendous amount of computational power into a very compact form factor. A blade server enclosure can hold 8, 16 or 32 blade servers in a single chassis. Each blade server can have multiple Ethernet ports, for example one or two dedicated ports for LAN traffic, a port for management, and a port for supporting virtual machine mobility. Each of these ports connects into the backplane of the blade chassis enclosure and is brought to the front panel using either a blade switch or a pass-through module.

The pass-through module simply passes every server connection out to the front panel for connectivity to the external network, which can create a cabling challenge. For example, with 4 Ethernet ports per blade server and up to 16 servers in the blade chassis, up to 64 Ethernet cables can be required for each blade chassis. With two or more blade chassis per rack, the number of cables quickly becomes unmanageable. The blade switch module, on the other hand, locally switches traffic between servers within a blade chassis and provides a smaller set of uplink ports for connectivity to the external network, significantly easing the cabling challenge. As a result, the blade switch is gaining popularity as an efficient cabling solution for blade servers. Multiple blade switches can be inserted into a single blade chassis for redundancy.

Both the virtual switch and the blade switch are technologies that have been driven from the server side of the data center. However, their impact on the network is significant and often overlooked. While larger data centers have traditionally used a 3-tier network, i.e. Core, Aggregation and Access tiers (the Access tier is typically a Top-of-Rack or TOR switch), the addition of the blade switch tier and the virtual switch tier leads to a 5-tier network. See Figure 1. The addition of these switching tiers results in several issues:

1. Each tier typically increases end-to-end latency. In an environment where applications in industries such as (but not limited to) finance, video and content delivery, and HPC demand increasingly lower latencies, adding tiers to the data center network can adversely impact application performance.

2. Each tier typically adds oversubscription to the network. Oversubscription ratios of 2:1 or 3:1 are common at each tier but can be higher depending on the type of switches used; this applies to network switches as well as blade switches. While increasing virtualization drives ever greater throughput demands right down to the edge of the data center (more VMs on a server pushing more traffic out of the server), the oversubscription accumulated across a growing number of network tiers can lead to sub-optimal performance due to choke points or bottlenecks in the network.

3. Each tier adds management overhead and troubleshooting complexity, since each switch has to be configured, monitored, maintained and kept up to date with the latest software.

4. Finally, each tier adds to the overall cost of the network.

Clearly, a different approach to the data center network architecture is needed in order to take advantage of the benefits of blade server technology and server virtualization while addressing the issues described above. One such approach is outlined below.

Figure 1. Typical 5-tier data center design to connect 576 blade servers: Core (Tier 1), Aggregation (Tier 2), Top-of-Rack (Tier 3), Blade Switches (Tier 4), Virtual Switch (Tier 5); 12 racks (42U, 19" racks) with 3 blade enclosures per rack and 14-16 blade servers each.
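To make the cabling and oversubscription arithmetic above concrete, the short Python sketch below works through the example numbers cited in this section (16 blades per enclosure, 4 Ethernet ports per blade, 2:1 oversubscription per tier). These are the paper's illustrative figures, not requirements of any particular product.

```python
# Illustrative arithmetic only: the per-blade port count, blades per enclosure
# and per-tier oversubscription ratios are the example values used in this
# paper, not measured or required values.

BLADES_PER_ENCLOSURE = 16    # enclosures of 8, 16 or 32 blades are cited above
PORTS_PER_BLADE = 4          # e.g. two LAN ports, management, VM mobility
ENCLOSURES_PER_RACK = 3      # as in the 576-server example of Figure 1

# Pass-through module: every server port becomes a cable leaving the enclosure.
cables_per_enclosure = BLADES_PER_ENCLOSURE * PORTS_PER_BLADE        # 64
cables_per_rack = cables_per_enclosure * ENCLOSURES_PER_RACK         # 192

# Oversubscription compounds multiplicatively across oversubscribed tiers.
def end_to_end_oversubscription(per_tier_ratios):
    total = 1.0
    for ratio in per_tier_ratios:
        total *= ratio
    return total

# Example: blade switch, top-of-rack and aggregation tiers each at 2:1.
worst_case = end_to_end_oversubscription([2, 2, 2])                  # 8.0

print(f"Cables per enclosure with a pass-through module: {cables_per_enclosure}")
print(f"Cables per rack: {cables_per_rack}")
print(f"Compounded oversubscription across three 2:1 tiers: {worst_case:.0f}:1")
```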

Reducing Network Tiers with the Direct-Attach Architecture

The direct-attach architecture is based on the premise of connecting blade servers (or, more generally, any servers) directly into a very high density aggregation switch, bypassing both the blade switch and the TOR or access switch. There are two main components to this solution:

1. Very high density network aggregation switch modules, such as the BlackDiamond 8900-G96T-c module, which provides 96 Ethernet ports on a single I/O module. Since access and aggregation tiers are typically added to the network to increase fan-out, these high density modules in a chassis form factor reduce the need for having both an access and an aggregation tier in the network. The high density of the BlackDiamond 8900 series modules is supported by a high capacity switch fabric with an overall switching capacity of just under 4 Tbps.

2. Cabling technology such as the MRJ21 cable from Tyco, which consolidates six Ethernet cables and connectors into one. The MRJ21 cable comes in different variants. One variant (the octopus cable) has 6 RJ-45 connectors on one end and an MRJ21 connector on the other; another has MRJ21 connectors at both ends. See Figure 2. By aggregating six Ethernet cables into one, the MRJ21 cable provides significant cabling simplification.

As mentioned earlier, blade servers can be connected to the external network through a pass-through module, but the pass-through module can lead to significant cabling challenges. By using MRJ21 cables, six ports on the pass-through module can be connected to the external network via a single cable, a 6:1 cable reduction that significantly lowers cabling complexity. In effect, the pass-through module in conjunction with MRJ21 cables becomes a viable alternative to the blade switch and can, in fact, replace it, eliminating one tier of switching from the network.

MRJ21 technology also allows very high density network switches to be built. By using MRJ21 connectors instead of RJ-45 connectors, very high fan-out can be achieved on network switches. For example, the Extreme Networks BlackDiamond 8900-G96T-c module for the BlackDiamond 8810 chassis uses 16 MRJ21 connectors on a single I/O module to achieve a fan-out of 96 Ethernet ports. Up to eight of these modules can be installed in the chassis, providing connectivity for up to 768 GbE ports within a single chassis. See Figure 3.

Figure 2. MRJ21 Cables
Figure 3. BlackDiamond 8900-series Module with MRJ21 Connectors
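The port-density and cable-reduction figures above follow directly from the 6:1 consolidation that the MRJ21 connector provides. A small sketch of that arithmetic, using the module and chassis counts cited in this section, is shown below.

```python
import math

# Port density and cable consolidation with MRJ21 (figures from this section).
PORTS_PER_MRJ21 = 6          # one MRJ21 cable carries six Ethernet links
MRJ21_PER_MODULE = 16        # connectors on a BlackDiamond 8900-G96T-c module
MODULES_PER_CHASSIS = 8      # I/O modules in a fully populated chassis

ports_per_module = PORTS_PER_MRJ21 * MRJ21_PER_MODULE            # 96 GbE ports
ports_per_chassis = ports_per_module * MODULES_PER_CHASSIS       # 768 GbE ports

# Cabling a 16-blade, 4-port-per-blade enclosure through a pass-through module.
server_ports_per_enclosure = 16 * 4                               # 64 ports
mrj21_cables_per_enclosure = math.ceil(server_ports_per_enclosure / PORTS_PER_MRJ21)

print(f"GbE ports per 8900-G96T-c module: {ports_per_module}")
print(f"GbE ports per fully populated chassis: {ports_per_chassis}")
print(f"MRJ21 cables per enclosure instead of {server_ports_per_enclosure} "
      f"individual cables: {mrj21_cables_per_enclosure}")
```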

By using MRJ21 cables to connect ports from the pass-through module of the blade chassis directly into these high density network switch modules, the TOR or access switch layer can also be eliminated, since the high density modules in the chassis provide the fan-out needed for such high density deployments. In effect, the servers become directly attached to the aggregation switch tier, eliminating both the blade switch tier and the TOR or access switch tier. See Figure 4.

The advantages of this architecture are many:

1. Overall network latency is improved by eliminating two active switching tiers from the network.
2. Oversubscription within the network is significantly reduced, since both the TOR or access switch tier and the blade switch tier added oversubscription.
3. Power consumption in the network is reduced by eliminating two switching tiers.
4. Management complexity is reduced.
5. The overall solution cost is reduced.

In effect, deploying the direct-attach architecture enables the data center to take advantage of newer server technologies while reducing inefficiencies in the network.

Figure 4. Extreme Networks direct-attach architecture: a 3-tier data center design to connect 576 blade servers, with Core (Tier 1), BlackDiamond 8810 End-of-Row Chassis (Tier 2) and Virtual Switch (Tier 3), plus passive patch panels; 12 racks (42U, 19" racks) with 3 blade enclosures per rack and 14-16 blade servers each.
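As a rough sizing exercise for the 576-server design shown in Figure 4, the sketch below estimates how many directly attached GbE ports, and hence fully populated BlackDiamond 8810 chassis, that example would need at the end of row. The 4-ports-per-server figure is the illustrative value used earlier in this paper, and the result is a lower bound that does not account for uplinks or redundant connections.

```python
import math

# Rough end-of-row sizing for the 576 blade-server example (Figure 4).
# Assumes the illustrative 4 GbE ports per blade server used earlier;
# ignores uplink ports and redundancy, so treat the result as a lower bound.
BLADE_SERVERS = 576
PORTS_PER_SERVER = 4
PORTS_PER_CHASSIS = 768      # eight 96-port 8900-G96T-c modules per chassis

total_server_ports = BLADE_SERVERS * PORTS_PER_SERVER                  # 2304
chassis_required = math.ceil(total_server_ports / PORTS_PER_CHASSIS)   # 3

print(f"Directly attached GbE ports required: {total_server_ports}")
print(f"Fully populated BlackDiamond 8810 chassis required: {chassis_required}")
```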

Design Flexibility

The direct-attach design is very flexible in how it can be deployed.

Scenario 1: MRJ21 cabling is run from the BlackDiamond 8900-G96T-c modules to an MRJ21 patch panel at the top of each server rack. The patch panel breaks out the connections from the MRJ21 connectors to RJ-45 connectors, which are then wired down the rack to the appropriate servers. The MRJ21 patch panel is a passive network element: it requires no management and imposes no oversubscription or measurable additional latency.

Scenario 2: MRJ21 cabling is run directly from the server or pass-through module, via the octopus cable, to the high density aggregation Ethernet switch module such as the BlackDiamond 8900-G96T-c. In this deployment, no patch panel or breakouts are used.

Summary

Adoption of newer server technologies such as blade servers and virtualization in the data center is driving complex and inefficient network architectures. By taking a holistic view of the network, newer architectures can be implemented that both simplify the network and make it more efficient. The direct-attach architecture, utilizing high density modules in the aggregation switch along with unique cabling solutions, eliminates multiple switching tiers within the network. This leads to reduced end-to-end latency, reduced oversubscription, better power efficiency, and reduced cost. The Extreme Networks BlackDiamond 8900-G96T-c modules, in conjunction with MRJ21 cable technology from Tyco, offer a comprehensive solution for implementing the direct-attach architecture in data centers.

www.extremenetworks.com

Corporate and North America: Extreme Networks, Inc., 3585 Monroe Street, Santa Clara, CA 95051 USA, Phone +1 408 579 2800
Europe, Middle East, Africa and South America: Phone +31 30 800 5100
Asia Pacific: Phone +852 2517 1123
Japan: Phone +81 3 5842 4011

2010 Extreme Networks, Inc. All rights reserved. Extreme Networks, the Extreme Networks logo and BlackDiamond are either registered trademarks or trademarks of Extreme Networks, Inc. in the United States and/or other countries. All other trademarks are the trademarks of their respective owners. Specifications are subject to change without notice. 1619_03 02/10