DATACENTER SWITCHES IN THE CAMPUS BACKBONE




Dan Matthews
Manager of Network Engineering and Unified Communications, Case Western Reserve University

Using Fixed Configuration Data Center Switches in the Campus Backbone

Contents:
- CC-NIE Cyberinfrastructure Challenge (background)
- Evaluating backbone upgrade options
  - Evaluating upgrade requirements
  - Evaluating what is available
  - Campus vs. data center: features
- CWRU implementation and experiences
  - Deployment, topology, benefits
  - Performance monitoring (SNMP vs. Splunk)
  - Buffer monitoring and VXLAN

CC-NIE Cyberinfrastructure Challenge (Background)

- CWRU received a CC-NIE grant in October 2013
  - Included a Science DMZ component (100GE to OARnet / Internet2)
  - Included network upgrades to extend 10GE to research-centric buildings and labs
- The campus backbone is aged and not ready for 10GE to buildings
  - Current network infrastructure is from circa 2003
  - Distribution routers are pairs of Cisco 6509-E / Sup720-Base (L3 HSRP)
  - Core to distribution is 10GE (WS-X6704-10GE)
  - Distribution to building (access) is 1GE (WS-X6516A-GBIC)
  - Buildings are dual-connected to distribution pairs (a fairly typical campus design)
- Multiple PIs within the Bingham distribution collaborated on CC-NIE
  - The Bingham distribution contains 20 buildings
  - Need to upgrade three to 10GE; the more the better, obviously
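For context, that L3 HSRP design means each building VLAN is served by an active/standby router pair. A minimal Cisco IOS sketch of the pattern, with hypothetical VLAN and addresses:

    ! Cisco IOS HSRP sketch (hypothetical values); the peer router uses
    ! .3 and a lower priority, so it takes over if this one fails
    interface Vlan50
     ip address 10.1.50.2 255.255.255.0
     standby 50 ip 10.1.50.1
     standby 50 priority 110
     standby 50 preempt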

CC-NIE Cyberinfrastructure Challenge (Background, Cont.)

Solution 1: Keep the status quo in the backbone (install line cards)
- Install a Cisco WS-X6708 and X2-LR optics in each distribution 6509 (list price $149k)
- Provides enough 10GE ports for 8 buildings (16 ports at $9,312.50 list per port)
- No other changes required; one of the benefits of chassis gear

Solution 2: Spend that money on something different
- Possibly even replace the old equipment altogether
- Generate a list of requirements and nice-to-haves for this part of the network
- Look at the market and see what else may meet these requirements at that price point; seek better per-port value
- Look at feature sets that may provide more options in terms of high-performance networking and ease of operations

We went with Solution 2.

Step 1: Take Inventory (What Do We Really Need?)

Interface count and type
- Need a minimum of 24 optical 1/10GE ports (12 of them 10GE, for a fair comparison to Solution 1)
- Having 48 would be better, for many reasons

L3 requirements (modest table sizes, standard protocols)
- Needs to support OSPF, OSPFv3, IPv6, and a FHRP of some sort
- Needs to support policy-based routing, standard ACLs, and QoS
- Other standard campus routing needs, like IPv4/IPv6 DHCP relay, RPF, and PIM
- Needs to support 1,000 IPv4 routes (currently fewer than 500 in these routers)
- Needs to support 1,000 IPv6 routes (currently fewer than 25 in these routers)

L2 requirements
- Needs to support Spanning Tree Protocol
- Needs to support a 10,000-entry CAM table (currently ~5,600)
- Needs to support a 10,000-entry ARP table

(The "currently" figures came from the existing routers; see the sketch below.)
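A minimal sketch of the kind of IOS show commands that report those baseline numbers on the existing Catalyst 6509s (exact command availability and output format vary by platform and release):

    ! Cisco IOS table audit sketch
    show ip route summary          ! total IPv4 routes
    show ipv6 route summary        ! total IPv6 routes
    show mac address-table count   ! CAM (MAC) table usage
    show ip arp summary            ! ARP table size (not on every release)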

Evaluating Backbone Upgrade Options (What's Available?)

Traditional campus core and distribution options: overkill and expensive (gleaned from vendor web sites and reference designs)
- Cisco Catalyst / Nexus 7K, HP 5400/7500, Brocade ICX / MLX, Juniper EX6200/8200
- Most are chassis-based, and most have huge L2 and L3 table capabilities (~256k+ IPv4)
- Most have power supplies ranging from 2500W to 6000W
- Cost per 10GE port ranges from $4,000 to $7,000+ list with vendor optics
- Cost per 40GE/100GE port is too much for this exercise, or just plain not available

Data center ToR switches: the sweet spot for value and functionality?
- Cisco Nexus 9372PX, Arista 7050/7280, Dell S4048-ON, HP 5900AF-48XG, etc.
- A lot of 1U fixed-configuration boxes with 48 SFP+ and 4 to 6 QSFP ports, albeit with smaller L2 and L3 tables
- Most have efficient power supplies ranging between 500W and 1200W
- Cost per 10GE port is between $2,000 and $5,000 list with vendor optics
- Cost per 40GE/100GE port is still pricey, but available with flexible options (breakout cables)

Campus vs. Data Center: Features (Differentiators)

Features are comparable, but not quite the same (see below). Data center switches offer some neat stuff, though.

Traditional campus core and distribution features
- Most offer a virtual chassis system (no FHRP, fewer configs, multi-chassis LAG)
- Most offer full MPLS/VPLS implementations
- Some offer integrated security / NAC features
- Some offer services line cards (firewalls, load balancers, wireless controllers)

Data center switch features
- Most have some sort of fabric (if you are into that sort of thing), plus multi-chassis LAG
- Most have VRF / VRF-Lite
- Most offer network telemetry and very low-latency forwarding
- Most have API / OpenFlow integrations and automation tools (Puppet, Chef, XMPP)
- Most offer VXLAN for extending L2 over L3 networks

Campus vs. Data Center: Sanity Check

Are ToR switches suitable for both IMIX and research flows?

Data center pros: more ports, less power, less space, cool features
- We get ~96 10GE-capable ports instead of 16, plus an upgrade path for all 20 buildings
- We get at least a 2x40GE EtherChannel between the pair, and multi-chassis LAG (see the MLAG sketch below)
- We get a 40GE or 100GE upgrade path for core links
- We get features like advanced buffer monitoring, automation, and VXLAN
- We use far less power, generate less heat, and take up less space

Data center cons: longevity? More risk
- Shorter life span, and no easy (change-less) upgrade path to dense 40GE/100GE
- No operational experience with most of these devices and OSes
- Higher risk overall, since all L2 and L3 services move to new equipment
- We won't be able to scale this OSPF area to 256k IPv4 routes... bummer
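Pairing two ToR switches this way is a short configuration exercise. A minimal Arista EOS MLAG sketch, with hypothetical interfaces, VLAN, and addressing (the 2x40GE EtherChannel doubles as the peer link):

    ! Arista EOS MLAG sketch (hypothetical values)
    vlan 4094
       trunk group mlagpeer
    !
    interface Port-Channel10
       switchport mode trunk
       switchport trunk group mlagpeer
    !
    interface Ethernet49-50
       channel-group 10 mode active   ! the 2x40GE inter-switch links
    !
    interface Vlan4094
       ip address 10.255.255.1/30     ! peer uses 10.255.255.2/30
    !
    no spanning-tree vlan 4094
    !
    mlag configuration
       domain-id campus-dist
       local-interface Vlan4094
       peer-address 10.255.255.2
       peer-link Port-Channel10

Downstream switches can then dual-home with an ordinary LACP port-channel and treat the pair as one device, so both uplinks forward instead of one blocking under spanning tree.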

Data Center Switch Options Abound

- Many other data center ToR switches might be a good fit in campus backbones. Some include the Dell S4048-ON, Cisco Nexus 9372PX, Brocade ICX 7750, HP 5900AF-48XG, and Juniper QFX5100-48S. Choose your favorite vendor; I bet they have something to look at.
- Most are based on merchant silicon, so software and support are key.
- Many campuses have already started using 1U switches like the Cisco Catalyst 4500-X and Juniper EX4550, as those are cross-marketed as campus and data center gear. They lack some features of the data center offerings.
- Dense 100GE switches are now on the market or shipping soon:
  - Dell Z9100, Z6100
  - Arista 7060CX
  - Cisco Nexus 3232C

Let's Roll the Dice!

- We decided to take a shot. If it fails, we can always use the switches in, well, a data center.
- We settled on a really new switch at the time, the Arista 7280SE-64.
- Choosing Arista helped minimize some of the operational risk
  - We had been using Arista in HPC for a while, so engineers were familiar with EOS
  - We had also chosen the Arista 7500 for HPC / Science DMZ integration
- The Arista 7280SE-64 specs exceeded our needs (table sizes, port count)
  - Based on the Broadcom Arad chipset
  - 48 x 1/10GE SFP+, 4 x 40GE QSFP (typically 4W per 10GE port)
  - 64k IPv4 / 12k IPv6 LPM routes, 128k MACs, 96k ARP/host entries, PIM, VRRP
  - Buffer monitoring, VXLAN, a Splunk app for network telemetry (we like Splunk), MLAG, etc.

Data Center Switches in the Campus Backbone: Outcomes

- The Arista 7280SE-64 is in production today and working really well
  - No VoIP, QoS, or multicast issues; no packet loss, high CPU, or high latency that we have seen
- Five engineering buildings were upgraded to 10GE uplinks
  - Cost was less than adding line cards and optics to the Catalyst 6509-Es
  - We deployed pairs of Arista 7150S-24 as building aggregators to terminate the other end of the links and provide 10GE ports within the buildings
- Energy savings add up (nearly $5k/year per pair)
  - US average (all sectors) is 10.64 cents/kWh (http://www.eia.gov/electricity/monthly/epm_table_grapher.cfm?t=epmt_5_6_a)
  - Old equipment costs $5,331.40/yr: ((4 x 1430 W) / 1000) x $0.1064 x 24 x 365
  - New equipment costs $354.18/yr: ((4 x 95 W) / 1000) x $0.1064 x 24 x 365
  - If only our budgets recognized this, the energy savings would pay for maintenance!
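Those wattages are typical figures; most of these platforms report live PSU output so you can verify the draw yourself. An Arista EOS-style sketch (output format varies by model):

    ! Arista EOS: check actual power draw and thermals
    show environment power
    show environment temperature
    show environment cooling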

Data Center Switches in the Campus Backbone: Measurement

Actual perfSONAR Throughput Test Graph

[Graph: hourly perfSONAR 10GE throughput tests from the Bingham distribution to the Science DMZ]

Traditional SNMP Obscures Traffic Bursts

[Graph: SNMP interface utilization showing only ~750 Mbps. Where are my spikes?]

With typical 5-minute counter polling, short bursts average away: a 10GE test that runs for roughly 25 seconds at line rate works out to about 750 Mbps over a 300-second interval (9+ Gbps x 25 s / 300 s).

Splunk Network Telemetry App: Bandwidth Chart

[Graph: Wow! I can see the perfSONAR traffic bursts!]

Buffer Monitoring (Also with the Splunk App)

[Graph: buffer (queue) utilization of Bingham Ethernet 33, the uplink to the core. Can you guess when I stopped the 10GE perfSONAR throughput tests?]

Buffer Monitoring (No Splunk Required)

- You can see this via the CLI too (see the sketch below)
- Might be useful for identifying microburst congestion events that could cause packet loss
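On the 7280SE this queue telemetry comes from Arista's LANZ (Latency Analyzer). A minimal EOS sketch, assuming a LANZ-capable platform and hypothetical thresholds:

    ! enable queue-length monitoring (LANZ)
    queue-monitor length
    !
    interface Ethernet33
       queue-monitor length thresholds 512 256   ! high/low marks, in segments (hypothetical)
    !
    show queue-monitor length   ! view recorded congestion events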

Extending Your Science DMZ Using VXLAN

- No real bandwidth advantage, but it aids in applying consistent security controls and inspection
- Make sure the VTEPs have a firewall-free path!

[Diagram: CWRU Science DMZ deployment. The Science DMZ VLAN is trunked to lab systems in the CC-NIE engineering buildings and carried over a VXLAN tunnel across the Case backbone (129.22.0.0/16) to CWRU HPC. Elements shown: sciencerouter0 (Juniper MX480) connecting to Internet2 via CASC; an Arista 7508E (hpc-rhk15-m1-e1) with 40GE trunks carrying the Science DMZ and private HPC nets; the Bingham distribution pair (bingham-h0-e1 / bingham-h0-e2, MLAG, Po10); inetrouter0 and inetrouter1 (WS-C6509E); DTN1, PerfSonar-bing, and PerfSonar-dmz hosts; a PBR-enabled firewall bypass; buildings including Glennan, Olin, White, Rockefeller, Nord, Crawford, and KSL; link speeds of 1GE, 10GE, 40GE, and 100GE]
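To give the idea some flavor, here is a minimal Arista EOS VTEP sketch with hypothetical VLAN, VNI, and loopback addresses (the actual CWRU configuration is not shown in the slides): each end maps the Science DMZ VLAN to a VNI and floods unknown traffic to the other VTEP.

    ! Arista EOS VXLAN sketch (hypothetical values); configure the mirror
    ! image on the remote switch
    interface Loopback0
       ip address 192.0.2.1/32       ! VTEP source; needs a firewall-free path
    !
    interface Vxlan1
       vxlan source-interface Loopback0
       vxlan udp-port 4789
       vxlan vlan 100 vni 10100      ! Science DMZ VLAN mapped to a VNI
       vxlan flood vtep 192.0.2.2    ! head-end replication to the remote VTEP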

Summary

- Data-center-class ToR L3 switches can work in campus backbone deployments, but thought must be given to current and mid-term requirements in terms of advanced features.
- The value proposition is compelling in comparison to traditional (or at least marketed-as-traditional) campus core and distribution options.
- Data center network equipment is designed with power, heat, and space efficiency in mind. Depending on the size of your backbone, this could make a difference for you.
- Data center network equipment seems to adopt new networking technology more rapidly than campus-centric offerings, some of which can be helpful to cyberinfrastructure engineers.
- Data center network equipment has a robust set of API and automation tools that are not as mature in campus or enterprise offerings. (We didn't have time to cover this; next time.)
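As a small taste of that automation story, Arista EOS can expose its CLI as a JSON-RPC API (eAPI). A minimal sketch to enable it, after which an HTTP client or a library such as pyeapi can run show and config commands remotely:

    ! enable Arista eAPI over HTTPS
    management api http-commands
       no shutdown
       protocol https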

References

Brocade list pricing:
- http://des.wa.gov/sitecollectiondocuments/contractingpurchasing/brocade/price_list_2014-03-28.pdf

Cisco list pricing:
- http://ciscoprice.com/

Juniper list pricing:
- http://www.juniper.net/us/en/partners/mississippi/juniper-pricelist-mississippi.pdf

HP list pricing:
- http://z2z-hpcom-static2-prd-02.external.hp.com/us/en/networking/products/configurator/index.aspx#.vfwjxcbvhbc
- http://www.kernelsoftware.com/products/catalog/hewlett-packard.html

Dell campus networking reference:
- http://partnerdirect.dell.com/sites/channel/documents/dell-networking-campus-switching-and-mobility-reference-architecture.pdf

HP campus network design reference:
- http://www.hp.com/hpinfo/newsroom/press_kits/2011/interopny2011/fcra_architecture_guide.pdf

Cisco campus network design references:
- http://www.cisco.com/c/en/us/td/docs/solutions/enterprise/campus/ha_campus_dg/hacampusdg.html
- http://www.cisco.com/c/en/us/td/docs/solutions/enterprise/campus/campover.html
- http://www.cisco.com/c/en/us/td/docs/solutions/enterprise/campus/borderless_campus_network_1-0/borderless_campus_1-0_design_guide.pdf

Juniper campus network design references:
- http://www.juniper.net/us/en/local/pdf/design-guides/jnpr-horizontal-campus-validated-design.pdf
- http://www.juniper.net/techpubs/en_us/release-independent/solutions/information-products/topic-collections/midsize-enterprise-campus-ref-arch.pdf
- https://www-935.ibm.com/services/au/gts/pdf/905013.pdf

Brocade campus network design reference:
- http://community.brocade.com/t5/campus-networks/campus-network-solution-design-guide-bradford-networks-network/ta-p/37280
