Data Center Interconnects. Tony Sue, HP Storage SA; David LeDrew, HPN




Gartner Data Center Networking Magic Quadrant 2014
HP continues to lead the established networking vendors with respect to SDN, with its SDN portfolio under the Virtual Application Networks banner. The vendor remains a strong No. 2 player in the market, and recent launches of new data center platforms (such as the FlexFabric 12900 Switch) should allow it to increase its presence in many enterprise accounts. HP should be considered for the shortlist for all data center networking requirements, especially for organizations looking to simplify network operations within a converged infrastructure or to take advantage of SDN as part of their data center network evolution. New data center products (such as the FlexFabric 12900 and 5930 Switch Series and the 6125XLG Ethernet Blade Switch) provide HP customers with architectural choice and the ability to simply upgrade software and configurations to take advantage of new architectural approaches, such as fabric technologies and SDN.

Drivers for Layer 2 networks
- Application and server virtualization: virtualization of servers and applications is highly dependent on Layer 2 technology.
- Network virtualization: virtualization demands increased control of traffic flows, efficient use of bandwidth, and fewer network elements for virtual backbones.
- Geographic redundancy: virtual Layer 2 backbones eliminate the need for Layer 3 tunneling to extend both Layer 2 heartbeat traffic and private IP address space between separate data center locations.
- Performance: the need for bandwidth keeps growing. Layer 2 virtualized backbones can reach far higher theoretical throughputs than their predecessors, which relied on either STP or Layer 3 to manage traffic flow.
- 3-2-1 and flat Layer 2 network designs: the majority of traffic in contemporary data centers is horizontal (server to server), which is more efficient in a Layer 2 environment.
- VM mobility: if a VM is moved from one data center to another across a Layer 3 boundary, its IP address and default gateway have to be changed manually. In a Layer 2 solution the move requires no IP address changes, resulting in a simpler VM management environment.

Data center interconnection (DCI)
There are several methods for interconnecting data centers:
- Ethernet LAN extension: extends Ethernet natively over dark fiber or DWDM. Mostly applies to point-to-point deployments, where the sites are connected via dedicated dark fiber links or protected DWDM optical circuits. Requires that the user has optical fiber or transmission resources, i.e. an enterprise customer who owns or manages their own dark fiber or DWDM service, or acquires such a service from a provider.
- Layer 2 interconnection using VPLS/MPLS: uses MPLS technologies to provide Layer 2 connectivity services over a Layer 3 network. VPLS can simulate an Ethernet switch among multiple data centers across the MPLS network, making Layer 2 forwarding decisions based on MAC address or on the combination of MAC address and VLAN ID.
- Ethernet Virtual Interconnect (EVI): an IP-based option that is extremely useful in simplifying data center interconnect. It runs over existing network transport such as the Internet, a private IP network, or an MPLS infrastructure, so it can be deployed without changes to the existing infrastructure. EVI allows Layer 2 connectivity across the network without Layer 3 dependencies, connects up to eight globally located data centers, and supports over four thousand VLANs.
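The VPLS forwarding behavior described above (MAC learning keyed on MAC address plus VLAN ID, with flooding for unknown destinations) can be sketched in a few lines; the class and names here are purely illustrative, not part of any real VPLS implementation:

```python
class L2ForwardingTable:
    """Toy model of a VPLS-style FIB keyed by (MAC address, VLAN ID)."""

    def __init__(self):
        self.table = {}  # (mac, vlan_id) -> egress port or pseudowire

    def learn(self, mac, vlan_id, port):
        # Source-MAC learning: remember where this host was last seen.
        self.table[(mac, vlan_id)] = port

    def lookup(self, mac, vlan_id):
        # Unknown destinations are flooded to all ports in the instance.
        return self.table.get((mac, vlan_id), "flood")

fib = L2ForwardingTable()
fib.learn("00:00:5e:00:53:01", 100, "pw-to-dc2")
print(fib.lookup("00:00:5e:00:53:01", 100))  # pw-to-dc2
print(fib.lookup("00:00:5e:00:53:01", 200))  # flood (different VLAN)
```

Note that the same MAC address in a different VLAN misses the table, which is exactly why the MAC + VLAN ID key matters for multi-tenant interconnects.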

Overlay Networking
[Diagram: overlay network connecting Endpoints A, B, C, and D through a virtual switch and hardware switches, with overlay edge devices between them]

Underlay
[Diagram: the same endpoints, virtual switch, and hardware switches shown as the underlay network, without the overlay edge devices]

VXLAN
[Diagram: VXLAN encapsulation and decapsulation. Hypervisor A (Host A, acting as VTEP) encapsulates 802.3 frames from its VMs into VXLAN packets (outer header + VXLAN header + original frame) on VNIs 1000 and 1001. The packets cross the underlay network to Hypervisor B (Host B, acting as VTEP), which decapsulates them. A physical server attaches through a switch acting as VTEP, which performs encapsulation/decapsulation on its behalf]

Overlay Networking Technologies
Let's have a look at VXLAN:
- Architecture
- Encapsulation
- Addressing (locating endpoints)
- The control plane
[Diagram: Endpoint A and Endpoint B connected by an overlay network running on top of the underlay network]

Overlay Networking Technologies: VXLAN architecture
Tunnel endpoints (VTEPs, Virtual Tunnel End Points) are responsible for encapsulation and de-encapsulation of packets. They can exist:
- In the hypervisor
- On the underlay network, typically on the ToR access switch
- In a mixed environment combining both
[Diagram: Endpoints A and B attached to VTEPs hosted in hypervisors and on underlay switches]

Overlay Networking Technologies: VXLAN encapsulation
A VXLAN packet wraps the original 802.3 frame in three outer headers plus the 8-byte VXLAN header:

Outer MAC header (18 bytes with 802.1Q tag):
- Destination MAC (6 bytes), Source MAC (6 bytes)
- Type 0x8100 (2 bytes), CoS (3 bits, IEEE 802.1p), VLAN ID (12 bits, 0-4094)
- Type 0x0800 (2 bytes)

Outer IP header (20 bytes):
- Version (4 bits, 0x4), IHL (4 bits; inner header length, 20 bytes), ToS/DSCP (1 byte), Length (2 bytes)
- Identification (2 bytes), Flags (3 bits), Fragment Offset (13 bits)
- TTL (1 byte), Protocol (1 byte; 17 = UDP), Checksum (2 bytes)
- Source IP (4 bytes), Destination IP (4 bytes)

Outer UDP header (8 bytes):
- Source port, Destination port (4789 = VXLAN), Length, Checksum (2 bytes each)

VXLAN header (8 bytes):
- Flags (1 byte, 00001000: the I bit means a VNI is present)
- Reserved (3 bytes)
- VXLAN Network Identifier, VNI (3 bytes; 24 bits means 16 million identifiers)
- Reserved (1 byte)

Resulting packet: outer MAC | outer IP | UDP | VXLAN | original 802.3 frame | FCS
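As a rough illustration of the layout above, the 8-byte VXLAN header (I flag plus 24-bit VNI) can be packed with Python's struct module. This is a sketch of the VXLAN header bytes only, not a full encapsulation:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags byte with the I bit set,
    3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    assert 0 <= vni < 2**24  # 24 bits -> ~16 million segment IDs
    flags = 0x08  # 0b00001000: the I flag signals a valid VNI is present
    # Pack as two big-endian 32-bit words:
    #   word 1 = flags << 24 | 24 reserved bits
    #   word 2 = vni << 8    | 8 reserved bits
    return struct.pack("!II", flags << 24, vni << 8)

hdr = vxlan_header(1000)
print(hdr.hex())  # 080000000003e800
```

The 24-bit VNI sits in the upper three bytes of the second word, which is why 16,777,216 (2^24) segment IDs are available versus 4,094 VLANs.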

Introduction: Traditional L2VPN
Difficulty: it is impossible to completely integrate different access methods and different operator networks.
[Diagram: customer sites using IP/IPsec, FR/ATM, broadband, and Ethernet access, attached to separate MPLS or IP, ATM, and SONET operator cores]

Introduction: MPLS L2VPN
L2VPN solution: MPLS L2VPN technology is applied in the core network. The various access solutions can be completely integrated, whether the operator network is an MPLS or an IP network.
[Diagram: the same IP/IPsec, FR/ATM, broadband, and Ethernet access methods all carried over a single MPLS or IP core]

Introduction: L2VPN
[Diagram: two PE routers, PE1 and PE2, connected by a tunnel carrying pseudowires between customer sites]
L2VPN technology transparently transmits Layer 2 user data across a core network. As viewed by users, the core network is a Layer 2 switching network through which Layer 2 connections can be established between sites. Simply put, from the users' point of view, the devices at the two sites are directly connected to each other.

HP 5500 Switch Series (5500 HI 48G, 5500 HI 24G)
Delivering scalability, agility, a full enterprise feature set, and low TCO:
- Enables business continuity with redundant management modules, hitless failover, and nonstop switching
- Delivers outstanding resiliency, security, and multiservice support at the edge layer of large campus and branch networks
- MPLS/VPLS available on the 5500 HI Series for Metro Ethernet deployments; no software license required
- Stacking of up to nine switches in a ring (up to 70 km distance)
- Fully featured L3 functionality, including RIP, OSPF, IS-IS, BGP, and PIM
- 2 or 4 fixed SFP+ 10G uplinks; up to 1440 W of PoE+
- Covered under our lifetime Warranty 2.0

Ethernet Virtual Interconnect
EVI encapsulates traffic and transports it across a GRE tunnel; it is standard MACoGRE (MAC over GRE). Each EVI network:
- Has a unique network ID
- Extends a unique list of VLANs
- Has a separate control plane
- Has a separate forwarding plane
- Uses EVI IS-IS to propagate MAC information
[Diagram: four data centers, DC-01 through DC-04, interconnected by an EVI cloud]
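The MAC-over-GRE encapsulation EVI relies on can be illustrated by packing a minimal 4-byte GRE header in front of an Ethernet frame. This sketch omits the outer IP header and any EVI-specific fields, and the function name is hypothetical:

```python
import struct

def gre_encapsulate(ethernet_frame: bytes) -> bytes:
    """Wrap an Ethernet frame in a minimal 4-byte GRE header.

    0x6558 is the GRE protocol type for Transparent Ethernet Bridging
    (i.e. the inner payload is a full Ethernet frame). The outer IP
    header that an EVI edge device would add is omitted here.
    """
    flags_version = 0x0000  # no checksum/key/sequence present, version 0
    proto = 0x6558          # inner payload: bridged Ethernet frame
    return struct.pack("!HH", flags_version, proto) + ethernet_frame

frame = bytes(64)  # placeholder 64-byte Ethernet frame
packet = gre_encapsulate(frame)
print(len(packet))  # 68
```

Because the GRE header carries the inner protocol type, the receiving edge device knows to hand the payload back to its Layer 2 forwarding plane rather than route it.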

DCI: Ethernet Virtual Interconnect (EVI)
With EVI, enterprises are able to:
- Accelerate the delivery of workload mobility with remote vMotion
- Increase application performance with multipathing and load balancing
- Scale to up to 16 geographically dispersed data centers without changing the underlying network
- Simplify the L2 connection by encapsulating traffic over GRE, and automatically isolate Spanning Tree Protocol per site
- Achieve optimum degrees of high availability and disaster recovery for valuable data
- Use a simple set of L2 routing extensions that provide data interconnectivity in minutes rather than the months required by legacy approaches like VPLS

What is Clos?
Clos networks are named after Charles Clos, who in 1953 published a paper in the Bell System Technical Journal entitled "A Study of Non-blocking Switching Networks."
- Crossbar fabrics fell out of favor due to head-of-line (HOL) blocking caused by input queue limitations
- Limitations of technologies such as spanning tree made fat-tree (3-tier) architectures inefficient; as network speeds and switch memory increased, so did the availability of ECMP technologies (TRILL, SPB, etc.)
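Clos's 1953 result can be stated concretely: a three-stage network with n inputs per ingress switch is strictly non-blocking when the number of middle-stage switches m satisfies m >= 2n - 1. A one-line check (illustrative only):

```python
def clos_is_strictly_nonblocking(n: int, m: int) -> bool:
    """Clos (1953): a 3-stage network with n inputs per ingress switch
    and m middle-stage switches is strictly non-blocking iff m >= 2n - 1."""
    return m >= 2 * n - 1

print(clos_is_strictly_nonblocking(4, 7))  # True  (m = 2*4 - 1)
print(clos_is_strictly_nonblocking(4, 6))  # False (one middle switch short)
```

The same intuition carries over to leaf-spine fabrics: adding spine (middle-stage) switches is what removes blocking, rather than building a faster crossbar.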

TRILL Data Center
[Diagram: WAN edge, TRILL fabric core, and access layer, with the L3/L2 boundary at the WAN edge]

Example of an HP Standards-Based DC Fabric
[Diagram: a 2-stage Clos design. L3 routers sit above spine switches; spine switches connect to leaf switches across a Layer 2 TRILL fabric with 10/40G interconnects and link aggregation groups (LAGs); servers below the leaf switches run application VMs plus VMs that provide L3 routing, firewall, and load balancer capabilities]
TRILL is typically deployed in the data center using a Clos network design with spine and leaf switches. Leaf switches connect to spine switches, while servers connect to leaf switches. TRILL expansion is simple: add spine or leaf switches to increase capacity or performance.
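When sizing such a leaf-spine fabric, a common back-of-the-envelope figure is the leaf oversubscription ratio, i.e. total server-facing bandwidth divided by total spine-facing uplink bandwidth. The sketch below uses illustrative port counts, not a specific HP configuration:

```python
def oversubscription_ratio(server_ports: int, server_speed_gbps: float,
                           uplink_ports: int, uplink_speed_gbps: float) -> float:
    """Downlink-to-uplink bandwidth ratio for one leaf switch.
    A ratio of 1.0 (1:1) is non-blocking; 3:1 is a common compromise."""
    downlink = server_ports * server_speed_gbps
    uplink = uplink_ports * uplink_speed_gbps
    return downlink / uplink

# Illustrative leaf: 48 x 10GbE server ports, 4 x 40GbE uplinks to the spine
print(oversubscription_ratio(48, 10, 4, 40))  # 3.0
```

Adding spine switches (and hence leaf uplinks) drives the ratio toward 1:1, which is the Clos-style scaling path the slide describes.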

HP FlexFabric Portfolio
- Core: 12500E / 12500 / 12910
- Aggregation: 11900, 7900
- WAN: HSR6800 / 8800, VSR
- Access: BladeSystems Virtual Connect, 61xx blade switches, 5700, 58X0, 59X0
- Management: IMC, Service Orchestration
- Security: TP Core Controller, vController, S5100N IPS, Security Subscription Services

FlexFabric 12900 Switch Series (FlexFabric 12916, FlexFabric 12910)
Next-generation data center core switches:
- Data-center-optimized, fully distributed, non-blocking architecture
- Highest density 10/40/100 GbE in the FlexFabric portfolio: 36 Tbps switching capacity, up to 256 wire-speed 40GbE ports, up to 768 wire-speed 1/10GbE ports
- Data center Comware v7 innovations: modular enterprise networking OS built for resiliency
- Advanced features for L2/L3 and IPv4/IPv6 scalability
- OpenFlow 1.3 support for SDN environments
- HP innovations and open-standards power with IRF, TRILL, SPB, DCB, FCoE, and more

HP HSR6800 Router Series (HSR6802, HSR6804, HSR6808)
- High-performance distributed processing: up to 420 Mpps forwarding, 2 Tbps switch capacity
- Comprehensive routing, switching, and security
- High-density WAN (10 GbE; 40/100 GbE ready), IRF
- Ideal for small to large data center and campus WAN edge, aggregation, and core deployments

HP 8800 Router Series (HP 8805, HP 8808, HP 8812)
- Distributed architecture with 4 independent planes, three-engine forwarding, and non-blocking crossbar technology for high-performance routing; scalability up to 864 Mpps
- Comprehensive routing, switching, and security
- High-density WAN/LAN options: Ethernet, 10GbE, OC-3 to OC-192, serial, RPR, POS, CPOS
- Operates at the WAN edge, aggregation, and core layers
- High reliability, fully meeting carrier-level reliability requirements

HP VSR1000 Series: Virtual Services Router (VSR)
- Licensable Comware 7 software product that runs in a VM on a hypervisor installed on a COTS x86 server
- Provides the same functions and experience as a physical router
- Distributed in ISO, OVA, and IPE formats
- Licensed by the number of virtual CPUs (1, 4, or 8)
- Minimum VM resources: 1 vCPU, 1 GB RAM, 8 GB disk, 2 vNICs
- Virtual NICs: E1000, VMXNET3, VirtIO; hypervisors: VMware vSphere, Linux KVM

Data Center 40GbE Enterprise ToR: HP FlexFabric 5930-32QSFP+ (JG726A)
Deployed as spine switches (with 5900 switches as leaves) or as a high-density 40 GbE ToR form factor:
- 32 ports of 40 GbE QSFP+; up to 104 ports of 10 GbE via breakout cables
- Support for VXLAN and NVGRE gateway
- Enhanced IRF for simpler network management
- Full Comware v7 feature set

Data Center 10GbE Enterprise ToR
Capabilities across the line include deep and ultra-deep buffers, FCoE, and FC/FCoE:
- 5900AF-48XG-4QSFP+, 5900AF-48XGT-4QSFP+
- 5820-24XG-SFP+, 5920AF-24XG, 5820-14XG-SFP+ (FCoE)
- 5700-32XGT-8XG-2QSFP+, 5700-40XG-2QSFP+
- HP FlexFabric 5900CP-48XG-4QSFP+ (FC/FCoE)