Simplify Your Data Center Network to Improve Performance and Decrease Costs
- Thomasine Ramsey
- 8 years ago
Summary

Traditional data center networks are struggling to keep up with new computing requirements. Network architects should rethink their designs and adopt simpler topologies and new control protocols to achieve better performance and operational agility, and to save as much as 50% on capital expenditures.

Overview

Key Challenges

- Data center (DC) networks must support distributed, highly virtualized and dynamic workloads, which create larger intra-data-center (east-west) traffic flows.
- Data center network transit time must be predictable and independent of workload location.
- In a flat budget environment, the per-port cost savings from organic technology improvements (Moore's Law) are not sufficient to finance increased DC network capacity requirements.

Recommendations

- Implement simple one-tier or two-tier physical data center network topologies to save up to 50% on capital investment.
- Don't embrace a preconceived architecture; use the criteria recommended here to guide network design and to select control protocols and vendors.
- Develop the network topology to support software-defined data center (SDDC) solutions, which can reduce costs by 50% compared with existing traditional data center architectures.

Introduction

Data center network requirements have changed substantially in the past 10 years for many enterprises, as applications have evolved from simple client/server designs to service-oriented, distributed architectures. In these environments, a three-tier hierarchical network (core, aggregation, access) is no longer adequate to support new traffic patterns and dynamic workloads. This traditional design should be revisited when refreshing the infrastructure, and multiple alternatives exist.

After years of relative stability, data center networks are evolving from an integrated model, where devices packaged with hardware and software supported a predefined set of network functions, to a more flexible environment, where multiple options are available at each layer and software can augment capabilities over time. For example, in the past, when selecting a switch, there was no choice of software to operate it. Today, the same hardware can often run different software and be deployed in completely different architectures, so it is more important than ever to evaluate the options.

To begin, network architects should conduct a high-level topology assessment ("What should my network look like?"; "How many layers do I need?"), then consider control plane options (the protocols that run the network). These two factors are closely related, so they must be considered iteratively until they converge on a solution that best matches technical requirements and budget constraints. This document provides insight to support this process. The goal is not to develop a detailed technical design, but rather to enable a more thoughtful interaction with vendors. Clients realize they need to change and are trying to orient themselves across the many options they see in the market. These are transitional times for data center networking: blindly embracing an architecture without analysis can lead to suboptimal decisions, limit future options and increase costs.
Analysis

Implement Simple One-Tier or Two-Tier Physical Data Center Network Topologies to Save Up to 50% on Capital Investment

With simple client/server applications, a hierarchical three-tier network design was a good solution, because it efficiently aggregated north-south traffic flows coming into and out of the data center, with Layer 3 providing scalability through separation of broadcast domains. Today, many applications are based on the service-oriented architecture (SOA) and are spread across multiple logical components, distributed over a large number of servers. Emerging big data applications have similar characteristics. In these environments, east-west (server-to-server) traffic flows predominate over north-south flows. In a traditional three-tier network design, these server-to-server flows might have to cross all tiers, up to the Layer 3 core, introducing latency and creating a performance bottleneck. In environments adopting cloud computing, seamless virtual machine (VM) mobility can also be an issue, since Layer 2 domains (virtual LANs [VLANs]) do not extend across the Layer 3 core.
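The latency penalty of crossing all tiers can be made concrete with a small sketch. The hop counts below follow from the topologies themselves; the per-switch latency figure is a generic assumption for illustration, not a vendor measurement.

```python
# Illustrative switch-hop counts for an east-west (server-to-server) flow.
# The latency figure is a generic assumption, not a vendor measurement.

PER_SWITCH_LATENCY_US = 2.0  # assumed per-switch forwarding latency

# Worst case in a three-tier design, the flow crosses
# access -> aggregation -> core -> aggregation -> access: 5 switches.
# In a two-tier spine-and-leaf design: leaf -> spine -> leaf: 3 switches.
# In a one-tier design, a single (redundant) switch pair: 1 switch.
designs = {"three-tier": 5, "spine-and-leaf": 3, "one-tier": 1}

for name, hops in designs.items():
    print(f"{name}: {hops} switch hops, "
          f"~{hops * PER_SWITCH_LATENCY_US:.0f} us of switching latency")
```

Whatever the actual per-switch latency of a given product, fewer tiers means proportionally fewer devices in the worst-case path, which is the structural advantage the simpler topologies exploit.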
Simplifying Physical Network Topology

Based on these new requirements, and considering the progress made by switching technology, we recommend simplifying the topology when refreshing the DC network. Reducing the number of tiers from three to two, or even one, reduces the number of switches, which translates not only into lower capital expenditure, but also into better performance, lower latency and simplified operations. Some vendors position fabric extenders in this context.

One-Tier Network Topology

Using one big switch (actually two, for redundancy) to connect all servers is the simplest connectivity option. Today, high-density 40 Gigabit Ethernet (GbE) switches can support more than a thousand 10GbE ports with Quad Small Form-factor Pluggable (QSFP) splitter cables. For small to medium environments (indicatively from 200 to 2,000 ports) where growth and expandability are not the main concern, a one-tier topology is a cost-effective, low-latency option that is easy to implement and manage. This architecture is sometimes referred to as "middle of row," because the switch pair is located in the middle of the racks to optimize cabling. As the number of servers increases, running all cables to a central point becomes less practical (or impossible, given distance limits). This is why the Top of Rack (ToR) architecture, with smaller ToR switches installed in each rack, became popular. To scale the one-tier approach while keeping a simple network topology, some vendors (such as Dell and Alcatel-Lucent Enterprise) propose a single layer of fully meshed switches, which could be installed in each rack for small environments. For small and midsize networks, the one-tier design has technical advantages and eliminates the cost and complexity of additional layers, with their associated switches and cabling.
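The "more than a thousand 10GbE ports" claim rests on simple splitter arithmetic: a QSFP splitter cable turns one 40GbE port into four 10GbE ports. The sketch below works through that math; the 288-port chassis is a hypothetical size chosen for illustration, not a specific product.

```python
# One-tier port math: each 40GbE QSFP port can be split into four
# 10GbE ports with a splitter cable. The switch size is hypothetical.
SPLIT_RATIO = 4  # 1 x 40GbE -> 4 x 10GbE

def server_ports_10g(qsfp_40g_ports: int) -> int:
    """10GbE server-facing ports obtainable from a 40GbE-only switch."""
    return qsfp_40g_ports * SPLIT_RATIO

# A hypothetical high-density chassis with 288 x 40GbE ports yields
# well over a thousand 10GbE ports:
ports_per_switch = server_ports_10g(288)
print(ports_per_switch)  # 1152

# With a redundant pair of such switches, each dual-attached server
# consumes one port on each switch, so the pair supports:
print(ports_per_switch, "dual-attached servers")
```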
For example, we have seen configurations with 100 physical servers (equivalent to 1,000+ virtual servers), dual-attached to the LAN and dual-attached to NAS, at less than $200 per 10GbE port, which is more than 50% less than some more traditional, multiple-tier designs.

Two-Tier Network Topology ("Spine and Leaf")

Going beyond one tier and adopting a modular design becomes a necessity for larger environments, indicatively over a thousand physical servers, or where great expandability is a requirement. Providing non-blocking connectivity across a large number of devices is the same challenge faced by the engineers of the first telephone exchange systems. This explains why data center networking vendors have rediscovered the Clos network concepts studied
back in the 1950s and are now proposing the spine-and-leaf physical topology. Figure 1 illustrates a simple spine-and-leaf topology. It is also called a "folded three-stage Clos network," because it can be depicted as a three-stage network (for the telephony-exchange-minded) or as a two-tier network. The topology is the same, and it can efficiently support server-to-server (east-west) traffic flows. In a data center network design, the leaf often corresponds to the ToR switch, providing connectivity to servers, while the spine corresponds to the core switch. Servers can be dual-attached to provide full path redundancy. Table 1 summarizes the benefits provided by this network design.

Table 1. Benefits of Spine-and-Leaf Design

- Availability: Connectivity remains available, although with reduced performance, when any link or switch stops working. Complete redundancy is achieved at the system level; fault tolerance at the device level is not necessary.
- Efficient use of switching capacity: Traffic is load-balanced across multiple links and switches (all paths have equal cost), so the available capacity can be fully utilized.
- Horizontal scalability: The design can scale horizontally, using the same switch models. Adding more spine switches increases core capacity, while adding more leaf switches increases the number of server ports. The limit is the number of ports available for leaf-spine connections on both, which depends on the switch models used.
- Deterministic and consistent latency: Every leaf is two hops away from every other leaf, so the switching fabric provides predictable latency across any server pair, resulting in consistent application performance.
- Simplicity: The design is based on a few standardized building blocks. This facilitates provisioning, automation and maintenance.

There are three main factors to consider when evaluating the topology (how many switches, and how they are interconnected):

1. Leaf switch features: This switch is used as the ToR and is typically a fixed form factor (FFF) device, because it has a relatively small number of 10GbE ports (fewer than 100) and a few 40GbE uplinks, and its cost per port strongly influences the overall cost. The number of 10GbE ports sets the upper limit on the number of servers that can be attached to each leaf. The number of 40GbE ports sets a limit on the number of spines, since the leaf must connect to all of them. Each vendor has different ToR switch models. Some have 10GbE ports dedicated to server connectivity (typically 48 or 96) and 40GbE ports (typically from four to eight) for spine connectivity. Other models have only 40GbE ports, so any port can be used for spine connectivity, but they require QSFP splitters for attaching servers at 10GbE, which adds to cabling cost and complexity.

2. Oversubscription: This is the ratio between total access port capacity and the spine connectivity bandwidth available at the leaf. For example, 96 x 10GbE access ports with 8 x 40GbE uplinks gives 960:320, or 3:1 oversubscription. Sizing the level of bandwidth contention is a key design decision. A value of one means that all servers connected to the leaf can send/receive traffic at wire speed to the spines simultaneously; in other words, a non-blocking design.
This would be an over-engineered solution in most cases; a 3:1 ratio is adequate for common application scenarios. Server attachment options can also influence oversubscription.

3. Spine switch features: The size of this switch (i.e., the number of 40GbE ports) sets a limit on the number of leaves that can be connected. The simplest network configuration can have only two spine switches. Vendors normally propose chassis-based models in this scenario because of their expandability, but a configuration with multiple FFF switches as spines can achieve a similar level of scalability at lower cost. For example, eight FFF spines with 32 x 40GbE ports each have capacity equivalent to two chassis switches with 128 x 40GbE ports each. For the actual implementation of spine switches, network architects should consider the current trend toward FFF, as well as the specific requirements of a spine-and-leaf design.
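The three factors above reduce to simple arithmetic. The sketch below checks a candidate design for oversubscription, total server ports and port exhaustion, using the illustrative port counts discussed in this research rather than any specific product:

```python
# Spine-and-leaf sizing sketch, using the illustrative port counts
# discussed in this research (not a specific vendor product).

def size_fabric(leaves, leaf_10g, leaf_40g, spines, spine_40g):
    """Return (server_ports, oversubscription, fits) for a candidate design."""
    # Every leaf needs an uplink to every spine, and every spine
    # needs a port for every leaf.
    fits = leaf_40g >= spines and spine_40g >= leaves
    access_gbps = leaf_10g * 10               # access capacity per leaf
    uplink_gbps = min(leaf_40g, spines) * 40  # uplink capacity in use
    return leaves * leaf_10g, access_gbps / uplink_gbps, fits

# 32 leaves (96 x 10GbE + 8 x 40GbE) and 8 spines (32 x 40GbE each):
ports, oversub, fits = size_fabric(32, 96, 8, 8, 32)
print(ports, oversub, fits)  # 3072 server ports at 3:1, design fits
```

Running the same function with other leaf and spine models quickly shows where a given combination of switches hits its expansion ceiling.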
Combined, these three factors determine the maximum number of server ports that can be connected using a spine-and-leaf design with given switch models. For example, a design based on 32 leaf switches with (96 x 10GbE) + (8 x 40GbE) ports each, in combination with eight spine switches with 32 x 40GbE ports each, provides 32 x 96 = 3,072 server attachment ports with 3:1 oversubscription, a configuration that would fulfill the needs of the majority of enterprises (1,500 dual-attached physical servers, or 15,000 VMs at an average 1:10 virtualization ratio). The street price for this type of configuration, based on FFF switches, is around $250 to $300 per 10GbE port. At some point, the design limit of the chosen switch models will be reached, and no further expansion will be possible. However, considering useful equipment lifetimes and the increasing use of virtualization and cloud, which reduce server footprint, a typical enterprise should have enough visibility to evaluate and consider this approach.

After defining the network topology at a high level (i.e., number of tiers, kinds of switches and main interconnections), you must make an important decision about the control plane; in other words, the Layer 2 (L2) and Layer 3 (L3) protocols that will govern the network. Many viable alternatives are possible. Every vendor will make its preferred recommendations, although most can support multiple options, each with advantages and disadvantages. Network architects should weigh the options based on how they assess and prioritize the following requirements:

- Scale: How many physical servers must be supported?
- Expected growth: Is that number stable, growing (or shrinking) predictably, or hard to forecast?
- Technical factors: What are the design constraints (for example, L2 connectivity needs, path recovery times, etc.)?
- Budget: Is cost containment a high priority?
- Vendor independence: Regardless of cost implications, is this a necessity?
- Staff: What human resources and skills are available for design and operations?

Network architects should then weigh the following technical considerations before finalizing the network design.

Layer 2 or Layer 3 Design (Bridged or Routed Network)

In an L2 design, VLANs extend across multiple leaf switches, while in an L3 design, VLANs are confined to the leaf, and IP routing takes place for every packet that goes from one leaf to another. The L2 design is simpler, but the L3 design is more scalable. Both have further options.

Layer 2 Design

Spanning Tree Protocol (STP) is a legacy L2 protocol, designed to prevent loops by blocking links when Ethernet networks had a treelike topology. In a spine-and-leaf design, multiple paths (i.e., loops) are created on purpose, to be used in parallel, so STP is no longer used to manage the
overall network's logical topology. STP is still used for specific purposes (for example, for dual-server attachment in Multi-Chassis Link Aggregation Group [MC-LAG] solutions). A number of alternatives are available to manage multiple paths at L2, including Transparent Interconnection of Lots of Links (TRILL), Shortest Path Bridging (SPB) and various MC-LAG implementations, often based on proprietary variations of these protocols. Most vendors also implement virtual chassis functionality, so that a number of physical switches (for example, all spines) can be managed like a single virtual switch. Although L2 can be simpler to implement, it is less scalable and less mature in terms of standards (such as TRILL and SPB), so it is not the preferred solution if multivendor interoperability is a requirement, or for very large implementations, because of the inherent limitations of L2 broadcast domains.

Layer 3 Design

An IP design for a spine-and-leaf physical topology requires a dynamic IP routing protocol that supports load balancing with equal-cost multipath (ECMP). Open Shortest Path First (OSPF) is the common choice; Intermediate System to Intermediate System (IS-IS) is also possible. The choice recommended by most vendors is Border Gateway Protocol (BGP), which can be implemented as internal BGP (iBGP) or external BGP (eBGP), with the latter being the preferred choice because it natively supports load balancing with ECMP. Traditionally, BGP was associated with Internet core routers and with complex WAN designs, so it is not widely known by DC networking staff, which limits its adoption. However, BGP is highly scalable and well-proven for multivendor interoperability. This design is the best choice for very large (5,000 servers or more) multivendor environments, such as cloud service providers. The limitation of an L3 design is that standard VLANs cannot extend beyond the leaf (ToR) switch.
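The ECMP load balancing that both designs rely on typically works by hashing a packet's flow identifiers to pick one of the equal-cost uplinks, so all packets of one flow stay in order on a single path while different flows spread across the spines. A minimal conceptual sketch (the hash function, spine count and addresses are generic assumptions, not how any particular switch implements it):

```python
# Conceptual ECMP path selection: hash a flow's 5-tuple and pick one
# of the equal-cost uplinks. Real switches do this in hardware with
# their own hash functions; this sketch only illustrates the idea.
import zlib

SPINES = 4  # assumed number of equal-cost spine uplinks from a leaf

def pick_spine(src_ip, dst_ip, proto, src_port, dst_port):
    """Deterministically map a flow's 5-tuple to one uplink index."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % SPINES

# Packets of one flow always hash to the same spine, so they stay
# in order on a single path:
flow = ("10.0.1.5", "10.0.2.9", "tcp", 49152, 443)
assert pick_spine(*flow) == pick_spine(*flow)

# Many distinct flows spread across the available spines:
used = {pick_spine("10.0.1.5", "10.0.2.9", "tcp", p, 443)
        for p in range(49152, 49252)}
print(sorted(used))
```

Because the mapping is per-flow rather than per-packet, a single elephant flow cannot be split across uplinks, which is why oversubscription sizing still matters even with ECMP in place.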
Virtualized environments that need L2 connectivity across servers to support VM mobility would require an overlay solution to circumvent this limitation.

Programmable Ethernet Fabrics

To simplify network deployment and operations, most vendors have introduced solutions that remove the need to manage each switch individually and make it possible to see the network as a whole. Auto-discovery mechanisms reduce the configuration task, and the overall network can be configured manually, through a graphical user interface (GUI), or programmatically, through a northbound API. A policy-driven framework on top can further abstract application requirements from individual device configuration. The combination of these additional functionalities, often packaged as a proprietary solution, transforms a network made of individual components into what vendors' marketing literature refers to as a "fabric." Each vendor has one or more solutions in this space that have
evolved historically. Some are marketed as fabrics, and some as software-defined networking (SDN)-like solutions. Clients should identify the proprietary elements of the different vendor solutions and evaluate whether the benefits, in terms of simplifying recurrent operational tasks, outweigh the lock-in risks. These solutions can encompass both L2 and L3 to provide all necessary connectivity options, and might include distributed L3 functions to deliver optimized switching and routing. Some fabrics rely on Virtual Extensible LAN (VXLAN) overlays to deliver L2 connectivity over an L3 backbone.

Overlays

Creating an abstraction layer on top of the physical network can solve the problem of providing L2 connectivity across L3. Tunnels based on VXLAN or Network Virtualization using Generic Routing Encapsulation (NVGRE) can transport L2 frames over IP, extending virtual LANs wherever needed. The VXLAN tunnel endpoints (VTEPs) can be implemented in a software virtual switch (vSwitch) running on servers, or in leaf switches at the edge of the network. The first option is common in software-based overlays; the second is used for connecting bare-metal servers, but also in solutions that combine a BGP control plane with Ethernet Virtual Private Network (EVPN) to transparently extend L2 domains across an IP network. Overlays can also be used to isolate parts of the network, to provide additional security or to support multiple tenants, even if they have overlapping IP addresses. Overlays are not a complete network solution, though; they require an underlay network with enough capacity and reliability to operate. Overlays do not necessarily need a spine-and-leaf network topology; however, the combination of the two is a good match, pairing the flexibility of the overlay with the robustness of the underlay.

SDN

SDN is an architectural model for networks.
A spine-and-leaf infrastructure can support a genuine SDN deployment, with the control plane running in a central controller, decoupled from the switches. All the considerations above remain conceptually relevant, and multiple L2/L3 designs are possible, since SDN is an architectural model and not a standardized solution. Customers who have already embraced a device-based SDN model built on OpenFlow, or a VMware software-defined data center design based on VMware NSX with integration from Arista Networks and Palo Alto Networks, can implement it on a spine-and-leaf network topology.

Conclusions

Data center networks are rapidly evolving, and a large number of options are available, although at different levels of maturity. There are multiple factors that network architects and managers
must consider. The spine-and-leaf physical topology is valuable regardless of the selected control plane. Adopting a proprietary solution for the control plane can bring short-term benefits at the price of limiting future options, but viable alternatives exist. Architectural decisions of this magnitude do not occur every year, and many organizations might not possess the necessary skill sets. In that case, consider obtaining design validation from a third party that does not benefit from the sale of the solution.
Data Center Interconnects Tony Sue HP Storage SA David LeDrew - HPN Gartner Data Center Networking Magic Quadrant 2014 HP continues to lead the established networking vendors with respect to SDN with its
More informationVirtualizing the SAN with Software Defined Storage Networks
Software Defined Storage Networks Virtualizing the SAN with Software Defined Storage Networks Introduction Data Center architects continue to face many challenges as they respond to increasing demands
More informationNETWORKING FOR DATA CENTER CONVERGENCE, VIRTUALIZATION & CLOUD. Debbie Montano, Chief Architect dmontano@juniper.net
NETWORKING FOR DATA CENTER CONVERGENCE, VIRTUALIZATION & CLOUD Debbie Montano, Chief Architect dmontano@juniper.net DISCLAIMER This statement of direction sets forth Juniper Networks current intention
More informationNetwork Virtualization for the Enterprise Data Center. Guido Appenzeller Open Networking Summit October 2011
Network Virtualization for the Enterprise Data Center Guido Appenzeller Open Networking Summit October 2011 THE ENTERPRISE DATA CENTER! Major Trends change Enterprise Data Center Networking Trends in the
More informationRedefine Virtualized and Cloud Data Center Economics with Active Fabric. A Dell Point of View
Redefine Virtualized and Cloud Data Center Economics with Active Fabric THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT
More informationTRILL Large Layer 2 Network Solution
TRILL Large Layer 2 Network Solution Contents 1 Network Architecture Requirements of Data Centers in the Cloud Computing Era... 3 2 TRILL Characteristics... 5 3 Huawei TRILL-based Large Layer 2 Network
More informationSDN and Data Center Networks
SDN and Data Center Networks 10/9/2013 1 The Rise of SDN The Current Internet and Ethernet Network Technology is based on Autonomous Principle to form a Robust and Fault Tolerant Global Network (Distributed)
More informationSimplifying Virtual Infrastructures: Ethernet Fabrics & IP Storage
Simplifying Virtual Infrastructures: Ethernet Fabrics & IP Storage David Schmeichel Global Solutions Architect May 2 nd, 2013 Legal Disclaimer All or some of the products detailed in this presentation
More informationSimplifying the Data Center Network to Reduce Complexity and Improve Performance
SOLUTION BRIEF Juniper Networks 3-2-1 Data Center Network Simplifying the Data Center Network to Reduce Complexity and Improve Performance Challenge Escalating traffic levels, increasing numbers of applications,
More informationJUNIPER. One network for all demands MICHAEL FRITZ CEE PARTNER MANAGER. 1 Copyright 2010 Juniper Networks, Inc. www.juniper.net
JUNIPER One network for all demands MICHAEL FRITZ CEE PARTNER MANAGER 1 Copyright 2010 Juniper Networks, Inc. www.juniper.net 2-3-7: JUNIPER S BUSINESS STRATEGY 2 Customer Segments 3 Businesses Service
More informationCloud Fabric. Huawei Cloud Fabric-Cloud Connect Data Center Solution HUAWEI TECHNOLOGIES CO.,LTD.
Cloud Fabric Huawei Cloud Fabric-Cloud Connect Data Center Solution HUAWEI TECHNOLOGIES CO.,LTD. Huawei Cloud Fabric - Cloud Connect Data Center Solution Enable Data Center Networks to Be More Agile for
More informationData Center Fabrics What Really Matters. Ivan Pepelnjak (ip@ioshints.info) NIL Data Communications
Data Center Fabrics What Really Matters Ivan Pepelnjak (ip@ioshints.info) NIL Data Communications Who is Ivan Pepelnjak (@ioshints) Networking engineer since 1985 Technical director, later Chief Technology
More informationI D C M A R K E T S P O T L I G H T
I D C M A R K E T S P O T L I G H T E t h e r n e t F a brics: The Foundation of D a t a c e n t e r Netw o r k Au t o m a t i o n a n d B u s i n e s s Ag i l i t y January 2014 Adapted from Worldwide
More informationBrocade VCS Fabrics: The Foundation for Software-Defined Networks
WHITE PAPER DATA CENTER Brocade VCS Fabrics: The Foundation for Software-Defined Networks Software-Defined Networking (SDN) offers significant new opportunities to centralize management and implement network
More informationIntroduction to Software Defined Networking (SDN) and how it will change the inside of your DataCentre
Introduction to Software Defined Networking (SDN) and how it will change the inside of your DataCentre Wilfried van Haeren CTO Edgeworx Solutions Inc. www.edgeworx.solutions Topics Intro Edgeworx Past-Present-Future
More informationFabrics that Fit Matching the Network to Today s Data Center Traffic Conditions
Sponsored by Fabrics that Fit Matching the Network to Today s Data Center Traffic Conditions In This Paper Traditional network infrastructures are often costly and hard to administer Today s workloads
More informationBrocade SDN 2015 NFV
Brocade 2015 SDN NFV BROCADE IP Ethernet SDN! SDN illustration 2015 BROCADE COMMUNICATIONS SYSTEMS, INC. INTERNAL USE ONLY 2015 BROCADE COMMUNICATIONS SYSTEMS, INC. INTERNAL USE ONLY Brocade ICX (campus)
More informationBUILDING A NEXT-GENERATION DATA CENTER
BUILDING A NEXT-GENERATION DATA CENTER Data center networking has changed significantly during the last few years with the introduction of 10 Gigabit Ethernet (10GE), unified fabrics, highspeed non-blocking
More informationBroadcom Smart-NV Technology for Cloud-Scale Network Virtualization. Sujal Das Product Marketing Director Network Switching
Broadcom Smart-NV Technology for Cloud-Scale Network Virtualization Sujal Das Product Marketing Director Network Switching April 2012 Introduction Private and public cloud applications, usage models, and
More informationSummitStack in the Data Center
SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution Extreme Networks offers a highly virtualized, centrally manageable
More informationData Center Convergence. Ahmad Zamer, Brocade
Ahmad Zamer, Brocade SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations
More informationSolving Scale and Mobility in the Data Center A New Simplified Approach
Solving Scale and Mobility in the Data Center A New Simplified Approach Table of Contents Best Practice Data Center Design... 2 Traffic Flows, multi-tenancy and provisioning... 3 Edge device auto-attachment.4
More informationOptimizing Data Center Networks for Cloud Computing
PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,
More informationThe Road to SDN: Software-Based Networking and Security from Brocade
WHITE PAPER www.brocade.com SOFTWARE NETWORKING The Road to SDN: Software-Based Networking and Security from Brocade Software-Defined Networking (SDN) presents a new approach to rapidly introducing network
More informationVIABILITY OF DEPLOYING ebgp AS IGP IN DATACENTER NETWORKS. Chavan, Prathamesh Dhandapaani, Jagadeesh Kavuri, Mahesh Babu Mohankumar, Aravind
VIABILITY OF DEPLOYING ebgp AS IGP IN DATACENTER NETWORKS Chavan, Prathamesh Dhandapaani, Jagadeesh Kavuri, Mahesh Babu Mohankumar, Aravind Faculty Advisors: Jose Santos & Mark Dehus A capstone paper submitted
More informationHow do software-defined networks enhance the value of converged infrastructures?
Frequently Asked Questions: How do software-defined networks enhance the value of converged infrastructures? Converged infrastructure is about giving your organization lower costs and greater agility by
More informationSoftware Defined Network (SDN)
Georg Ochs, Smart Cloud Orchestrator (gochs@de.ibm.com) Software Defined Network (SDN) University of Stuttgart Cloud Course Fall 2013 Agenda Introduction SDN Components Openstack and SDN Example Scenario
More informationWhy Software Defined Networking (SDN)? Boyan Sotirov
Why Software Defined Networking (SDN)? Boyan Sotirov Agenda Current State of Networking Why What How When 2 Conventional Networking Many complex functions embedded into the infrastructure OSPF, BGP, Multicast,
More informationWhite Paper. Network Simplification with Juniper Networks Virtual Chassis Technology
Network Simplification with Juniper Networks Technology 1 Network Simplification with Juniper Networks Technology Table of Contents Executive Summary... 3 Introduction... 3 Data Center Network Challenges...
More informationThe Future of Cloud Networking. Idris T. Vasi
The Future of Cloud Networking Idris T. Vasi Cloud Computing and Cloud Networking What is Cloud Computing? An emerging computing paradigm where data and services reside in massively scalable data centers
More informationI D C M A R K E T S P O T L I G H T
I D C M A R K E T S P O T L I G H T The New IP: Building the Foundation of Datacenter Network Automation March 2015 Adapted from Worldwide Enterprise Communications and Datacenter Network Infrastructure
More informationData Center Network Virtualisation Standards. Matthew Bocci, Director of Technology & Standards, IP Division IETF NVO3 Co-chair
Data Center Network Virtualisation Standards Matthew Bocci, Director of Technology & Standards, IP Division IETF NVO3 Co-chair May 2013 AGENDA 1. Why standardise? 2. Problem Statement and Architecture
More informationSOFTWARE-DEFINED NETWORKING AND OPENFLOW
SOFTWARE-DEFINED NETWORKING AND OPENFLOW Eric Choi < echoi@brocade.com> Senior Manager, Service Provider Business Unit, APJ 2012 Brocade Communications Systems, Inc. EPF 7 2012/09/17 Software-Defined Networking
More informationCENTER I S Y O U R D ATA
I S Y O U R D ATA CENTER R E A DY F O R S D N? C R I T I C A L D ATA C E N T E R C O N S I D E R AT I O N S FOR SOFT WARE-DEFINED NET WORKING Data center operators are being challenged to be more agile
More informationConfiguring Oracle SDN Virtual Network Services on Netra Modular System ORACLE WHITE PAPER SEPTEMBER 2015
Configuring Oracle SDN Virtual Network Services on Netra Modular System ORACLE WHITE PAPER SEPTEMBER 2015 Introduction 1 Netra Modular System 2 Oracle SDN Virtual Network Services 3 Configuration Details
More informationCloud Networking: Framework and VPN Applicability. draft-bitar-datacenter-vpn-applicability-01.txt
Cloud Networking: Framework and Applicability Nabil Bitar (Verizon) Florin Balus, Marc Lasserre, and Wim Henderickx (Alcatel-Lucent) Ali Sajassi and Luyuan Fang (Cisco) Yuichi Ikejiri (NTT Communications)
More informationALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center
ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center A NEW NETWORK PARADIGM What do the following trends have in common? Virtualization Real-time applications
More informationCONNECTING PHYSICAL AND VIRTUAL WORLDS WITH VMWARE NSX AND JUNIPER PLATFORMS
White Paper CONNECTING PHYSICAL AND VIRTUAL WORLDS WITH WARE NSX AND JUNIPER PLATFORMS A Joint Juniper Networks-ware White Paper Copyright 2014, Juniper Networks, Inc. 1 Connecting Physical and Virtual
More informationCisco Virtual Topology System: Data Center Automation for Next-Generation Cloud Architectures
White Paper Cisco Virtual Topology System: Data Center Automation for Next-Generation Cloud Architectures 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.
More informationThe Road to Cloud Computing How to Evolve Your Data Center LAN to Support Virtualization and Cloud
The Road to Cloud Computing How to Evolve Your Data Center LAN to Support Virtualization and Cloud Introduction Cloud computing is one of the most important topics in IT. The reason for that importance
More informationNetworking in the Era of Virtualization
SOLUTIONS WHITEPAPER Networking in the Era of Virtualization Compute virtualization has changed IT s expectations regarding the efficiency, cost, and provisioning speeds of new applications and services.
More informationARISTA WHITE PAPER Solving the Virtualization Conundrum
ARISTA WHITE PAPER Solving the Virtualization Conundrum Introduction: Collapsing hierarchical, multi-tiered networks of the past into more compact, resilient, feature rich, two-tiered, leaf-spine or Spline
More informationData Centre White Paper Summary. Application Fluency In The Data Centre A strategic choice for the data centre network
Data Centre White Paper Summary.. Application Fluency In The Data Centre A strategic choice for the data centre network Modernizing the Network: An Application-Fluent Approach With virtualization it s
More informationENABLING THE PRIVATE CLOUD - THE NEW DATA CENTER NETWORK. David Yen EVP and GM, Fabric and Switching Technologies Juniper Networks
ENABLING THE PRIVATE CLOUD - THE NEW DATA CENTER NETWORK David Yen EVP and GM, Fabric and Switching Technologies Juniper Networks Services delivered over the Network Dynamically shared resource pools Application
More informationGartner delivers the technology-related insight necessary for our clients to make the right decisions, every day.
Gartner delivers the technology-related insight necessary for our clients to make the right decisions, every day. 2008 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered
More informationThe Value of Open vswitch, Fabric Connect and Fabric Attach in Enterprise Data Centers
The Value of Open vswitch, Fabric Connect and Fabric Attach in Enterprise Data Centers Table of Contents Enter Avaya Fabric Connect. 2 A typical data center architecture with Avaya SDN Fx... 3 A new way:
More informationAdvanced Computer Networks. Datacenter Network Fabric
Advanced Computer Networks 263 3501 00 Datacenter Network Fabric Patrick Stuedi Spring Semester 2014 Oriana Riva, Department of Computer Science ETH Zürich 1 Outline Last week Today Supercomputer networking
More informationSoftware Defined Networks Virtualized networks & SDN
Software Defined Networks Virtualized networks & SDN Tony Smith Solution Architect HPN 2 What is Software Defined Networking Switch/Router MANAGEMENTPLANE Responsible for managing the device (CLI) CONTROLPLANE
More informationWalmart s Data Center. Amadeus Data Center. Google s Data Center. Data Center Evolution 1.0. Data Center Evolution 2.0
Walmart s Data Center Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall emester 2013 1 2 Amadeus Data Center Google s Data Center 3 4 Data Center
More informationCLOUD NETWORKING FOR ENTERPRISE CAMPUS APPLICATION NOTE
CLOUD NETWORKING FOR ENTERPRISE CAMPUS APPLICATION NOTE EXECUTIVE SUMMARY This application note proposes Virtual Extensible LAN (VXLAN) as a solution technology to deliver departmental segmentation, business
More informationVMware Virtual SAN 6.2 Network Design Guide
VMware Virtual SAN 6.2 Network Design Guide TECHNICAL WHITE PAPER APRIL 2016 Contents Intended Audience... 2 Overview... 2 Virtual SAN Network... 2 Physical network infrastructure... 3 Data center network...
More information