Cisco Switching Platforms Deployment Case Studies BRKRST-1612


Agenda: the problem (which switch?); switch roles; traditional Ethernet switching functions; Data Centre switching requirements; case studies; recommendations for a new Campus and a new DC

Ethernet has been around since the early 1980s. Why is there more choice in switches now than there has ever been?

Traditional Switch Functions

Ethernet Switch Functions: Layer 2, including Spanning Tree; Layer 3 (IP); network services (extras residing in switches, such as firewalling, load balancing, or DHCP); QoS and multicast; Power over Ethernet; EtherChannel and stacking

Traditional Switch Choice: How many ports, and at what speed? Does it need to route? How much redundancy in power and supervisors? PoE? If so, how much? What about other network services? What role does the switch play?

Traditional LAN Design

Newer switch requirements in the Campus: MPLS, 10GE, Multi-Chassis EtherChannel (VSS), green operation, and more tuning and automation (EEM, Auto, Medianet, etc.). How much of this is driven by user or business requirements?

Data Centre Switching Requirements

Why is the Data Centre a special case? Business case justification: as servers have grown and evolved, their networking requirements have changed more rapidly than those of humans in an office environment, and better ways of networking them have evolved. What are some of these problems we need to fix? Since 2004, the compound annual growth rate in workloads has been about 16 percent: if a company had 100 servers in 2004, by 2010 it would have had 243. Importantly, each of those servers would have been connected to 4 networks on average.

Server Evolution

Data Centre Switching Requirements: Server Networking Needs, aka Problems for Networks. Scaling bandwidth is an ever-present issue. Scaling port counts creates further issues: fabric infrastructure costs; lots of uplinks and downlinks, especially since STP isn't very efficient; lots of switches (bridges, in STP terms); lots of cables. Server virtualisation (VMware) adds virtual NICs and machine mobility.

Data Centre Switching Technologies: Fabric Extender (FEX) fixes a physical problem, cabling. Virtual Network Link (VN-Link) on the Nexus 1000V fixes a control problem caused by virtualisation. DCB and FCoE improve on a scale problem. Virtual PortChannel (vPC) improves efficiency (versus STP). FabricPath improves the efficiency and scale of LANs. Overlay Transport Virtualisation (OTV) bridges over an L3 link. Are any of these useful in a Campus environment?

L2 MAC Address Scaling Issues: within a Layer 2 domain, every switch builds a MAC table. MAC address facts: there are billions of them; they have no location associated with them and no hierarchy; and they are not registered by the hosts to the network. A routing table is therefore impossible at Layer 2: the default forwarding behaviour is flooding. A filtering database is set up to limit flooding, but the whole mechanism is not scalable.

L2 Requires a Tree: branches of a tree never interconnect (no loops), so a topology with 11 physical links may be reduced to only 5 logical links. The Spanning Tree Protocol (STP) is typically used to build this tree. A tree topology implies wasted bandwidth (over-subscription is exacerbated), sub-optimal paths, and conservative convergence, so a failure can be catastrophic.

Cisco STP Implementation: a rich feature set (MST, Rapid-PVST, Dispute, Bridge Assurance, Loopguard, RootGuard, BPDUGuard, global BPDU filter) with differing trade-offs across stability, speed, policy enforcement, and scale. Great features, but all working around problems.

Fabric Extender (FEX)

Top of Rack FEX Deployment with Nexus 5000/2000: a pair of Nexus 7000s at the distribution layer, connected via MCEC to Nexus 5000s at the access layer, with a Nexus 2000 FEX at the top of each server rack.
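A FEX is managed entirely from its parent switch; the Nexus 2000 itself holds no configuration and its host ports appear as remote line-card ports on the parent. A minimal NX-OS-style sketch of the idea (FEX number, interface IDs, and VLAN are illustrative, not taken from the slides):

```
! On the parent Nexus 5000: enable the FEX feature and define the FEX
feature fex
fex 101
  description rack-1-top
  pinning max-links 1

! Fabric uplink from the parent to the Nexus 2000
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 101

! Host ports on the FEX then appear on the parent as ethernet 101/1/x
interface ethernet 101/1/1
  switchport access vlan 10
```

This is the "fixing a physical problem" point from earlier: cabling stays local to the rack, while all configuration and software live on the parent switch.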

Transparency in the Eye of the Beholder: with server virtualisation, VMs have a transparent view of their resources, but it is difficult to correlate network and storage back to virtual machines. Scaling globally (across ESX hosts) depends on maintaining transparency while also providing operational consistency.

Cisco Nexus 1000V Architecture: a Virtual Supervisor Module (VSM) integrates with VMware vCenter; a Virtual Ethernet Module (VEM) runs on each vSphere server (ESX and ESXi) hosting the VMs and uplinks to the physical switches. Installation is via VUM or manual; a VEM is installed and upgraded like any ESX patch.
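On the Nexus 1000V, VM networking policy is defined once on the VSM as a port profile, which appears in vCenter as a port group that administrators assign to VMs. A minimal sketch (profile name and VLAN are illustrative):

```
! On the VSM: define policy for a class of VM interfaces
port-profile type vethernet WEB-VMS
  switchport mode access
  switchport access vlan 10
  no shutdown
  ! Publish this profile to vCenter as a port group
  vmware port-group
  state enabled
```

This is the operational-consistency point from the previous slides: the policy follows the VM as it moves between ESX hosts, because every VEM enforces the same VSM-defined profile.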

FCoE Server I/O: reduce cables and interfaces by collapsing the front-end, management, backup, storage, and back-end networks onto a Unified Fabric with unified I/O.
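On a Nexus 5000-class switch, carrying Fibre Channel over the same 10GE wire means mapping a VSAN onto an FCoE VLAN and binding a virtual Fibre Channel (vfc) interface to the server-facing Ethernet port. A minimal sketch (VLAN, VSAN, and interface numbers are illustrative):

```
! Enable FCoE and map a VLAN to a VSAN
feature fcoe
vlan 100
  fcoe vsan 100
vsan database
  vsan 100

! Bind a virtual FC interface to the converged server-facing port
interface vfc 1
  bind interface ethernet 1/1
  no shutdown
vsan database
  vsan 100 interface vfc 1
```

The server's converged network adapter then presents both a NIC and an HBA to the operating system over the single cable.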

Virtual Port Channel (vPC): improves uplink efficiency. It introduces some changes to the data plane, provides active/active redundancy, and does not rely on STP (STP is kept as a safeguard). It is limited to a pair of switches, which is enough for most cases. Without vPC, redundancy is handled by STP with a blocked port; within a vPC domain, redundancy is handled by vPC itself, with data-plane-based loop prevention.
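The pair-of-switches model described above maps to NX-OS configuration along these lines; the domain ID, keepalive addresses, and port-channel numbers are illustrative, not from the slides:

```
! On each switch of the vPC pair (mirror the configuration on the peer)
feature vpc
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

! Peer link between the two vPC switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link

! Downstream port channel, dual-homed to both peers but seen
! by the attached device as a single EtherChannel
interface port-channel 20
  switchport mode trunk
  vpc 20
```

The attached switch or server runs ordinary LACP toward what it believes is one upstream switch, so both uplinks forward and no port is STP-blocked.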

FabricPath: Simple from the Outside. Scalable, uniform, efficient forwarding: instead of multi-domain silos, each with its own subnet, FabricPath provides a fabric that looks like a single switch, with any app, anywhere. No silos, workload mobility, and maximum flexibility.

Loop Mitigation with FabricPath: Time To Live (TTL) and Reverse Path Forwarding (RPF) check. In an STP domain, the control protocol is the only mechanism preventing loops: if STP fails, there is no backup mechanism in the data plane, and a network-wide melt-down is probable. FabricPath carries a TTL in the FabricPath header, decremented by 1 at each hop; frames with TTL = 0 are discarded. An RPF check for multicast, based on tree information, provides further protection.

FabricPath is Efficient: shortest path, multi-pathing, high availability. Shortest-path forwarding gives low latency, with up to 256 links active between any 2 nodes, and high availability comes from N+1 path redundancy. Each node builds a FabricPath routing table: for example, switch S42 may be reachable over interfaces L1, L2, L3, and L4 simultaneously.
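Enabling FabricPath amounts to marking which VLANs ride the fabric and which links use FabricPath encapsulation instead of classic Ethernet; the routing table shown on the slide is then built automatically by the fabric's IS-IS control plane. A minimal sketch (VLAN and interface ranges are illustrative):

```
! Enable the FabricPath feature set (Nexus 7000 F-series or Nexus 5500)
install feature-set fabricpath
feature-set fabricpath

! VLANs to be carried across the fabric
vlan 10
  mode fabricpath

! Core-facing links: FabricPath encapsulation, no spanning tree
interface ethernet 1/1-4
  switchport mode fabricpath
```

No per-link STP tuning is needed on fabric ports; equal-cost paths between nodes are all active.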

OTV: Overlay Transport Virtualisation. Intelligent LAN extension over any IP transport, with zero impact on the existing network design. Failure isolation: the L3 boundary is preserved, so failure containment with L2 protocols remains localised by site. Optimised operations and scalability: full bandwidth utilisation and optimal traffic replication. Control-plane MAC learning plus packet-switched forwarding stretches a VLAN between data centres (DC-1 and DC-2) in a protected way. Intelligent MAC routing brings all the benefits of IP to L2: an intelligent L2 shared segment.
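An OTV edge device at each site joins an overlay over the IP transport and lists which VLANs to extend. A minimal sketch, assuming a multicast-enabled transport; the site VLAN, groups, and VLAN ranges are illustrative:

```
! On the OTV edge device at each data centre
feature otv
otv site-vlan 99
otv site-identifier 0x1

interface Overlay1
  ! Physical uplink into the IP transport between sites
  otv join-interface ethernet 1/1
  ! Multicast groups used for control-plane adjacency and data replication
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! VLANs stretched between the data centres
  otv extend-vlan 100-150
  no shutdown
```

Because MAC reachability is advertised by the control plane rather than learned by flooding, STP and failure domains stay local to each site.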

Case Studies

Switching Platform Positioning: avoiding confusion. General position: users plug into Catalysts, new servers plug into Nexus. If a user PC plugs into the switch, then a Catalyst is likely the best fit. If it's going into a new, large, or virtual Data Centre, then it is probably a Nexus.

Enterprise Core Positioning: determined by customer requirements. The core of the LAN is the main place of overlap, where both Nexus and Catalyst have fitting products. Decision criteria for a core outside the Data Centre (Cisco Catalyst 6500 Series): PoE or WAN services, VPLS/advanced MPLS, Cisco IOS consistency, two-terabit scalability, service modules, VSS. Decision criteria for a core inside the Data Centre (Nexus 7000 Series): high 10GE density, Data Centre Interconnect, NX-OS capabilities, highest scalability, resiliency (hitless ISSU), Virtual Device Contexts (VDCs).

Switching Positioning: recent innovations in both families. Catalyst innovations for the Campus: EnergyWise, StackPower, Medianet, VSS. Nexus innovations for the next-generation Data Centre: Fabric Extender, DCB/FCoE, Unified Ports, vPC, OTV, and FabricPath.

Switching Evolution Decision Flow. Begin here: is it DC switching, and is it a new DC? If not, stay on the Catalyst evolution path: at the access layer, medium 1G density, existing EoR, limited 10G, limited scale, low-cost 1G Top of Rack; at the aggregation layer, Virtual Switching System, leveraging your existing technology with integrated L4-L7 services. If it is a new DC, follow the Nexus evolution path by POD type: traditional POD (high 1G density, higher resiliency, management ease, ToR/EoR/V-EoR); virtualised POD (10G/40G/100G, VM intelligence, FCoE/DCB, scale with ease); HPC POD (non-blocking, optimal buffering, predictable latency). The Nexus path adds very high density, a modular OS with process restart and hitless ISSU, FabricPath with no STP, simplicity, fast failover, optimised bandwidth, DSN or appliances for services, 10G/40G density with 100G readiness, higher resiliency, and DCI.

Case Study 1, a new Campus for MediaInc: what would you use? 1600 users across 8 floors in a new green building. All areas have new GigE phones and WLAN coverage. Single riser and comms room per floor. Small computer room on the ground floor. Remote Data Centre connected over dual dark fibre. Security and loading dock to connect also.

Case Study 1, a new Campus: Access Layer / Wiring Closet. Chassis or stack are both valid options. Dual 10GE uplinks (power users). Redundant power supplies (or StackPower). Use compact switches for small spaces.

Case Study 1, a new Campus: Core and Distribution. Can the Distribution collapse into the Core? Multi-Chassis EtherChannel (vPC or VSS) would be ideal for a faster, simpler topology. High availability is a must, as is routing between sites.

Case Study 2, refitting a Data Centre facility: what would you use? The business is virtualising computing onto 20 new blade chassis, but support of the existing environment must remain. 100 racks with a mix of blade servers, 1RU servers, and a couple of mainframes, all connected to iLO, LAN(s) and SAN. Existing switching is Cisco 6500 and MDS, including heavy use of ACE and FWSM. The business-critical mainframes are ideally on a separate network. Business continuance planning calls for a DR Data Centre in the future.

Case Study 2, a new Data Centre facility: aims. Minimise cabling to reduce costs via two methods: localise cable runs using Top-of-Rack (ToR) switching, and utilise low-cost 10GE cabling options where possible (Twinax/CX1, FET, LRM). Reduce server cabling via Unified Fabric/FCoE where possible. A flat topology to support vMotion and aid future DC Interconnect. Continue use of ACE modules and FWSMs.

Case Study 2, a new Data Centre facility: Access options. Virtual switching: leverage the Nexus 1000V on all ESXi hosts to gain granularity and consistency. Connecting blade chassis: preferably Unified Fabric/FCoE using a converged switch module (UCS, Nexus 4001i); alternatively, pass-thru modules to a Nexus ToR switch can achieve the same outcomes. Where suitable, migrate existing cabling to ToR using FEX to a distribution layer of Nexus 5500s.

Case Study 2, a new Data Centre facility: Distribution options. Distribution provided by 8 Nexus 5548s upstream from the blade switching and FEXs. SAN connection from the 5548s' FC ports to the existing MDS. Use vPC in pairs for an environment free of STP-blocked ports.

Case Study 2, a new Data Centre facility: Core options. 40G of usable (non-blocked) bandwidth to each pod of 2 Nexus 5548s. Retain the existing Catalyst 6500s and move them to one side for services (ACE and FWSM). The Nexus 7000 seems the logical choice for its 10G port density and resilience, along with the OTV feature for future DC interconnect and VDC capability for segmenting the network.

Data Centre Access options. Core/distribution: Nexus 7000 for the LAN and MDS 9000 for the SAN. Unified access layer choices: Nexus 5000 with Nexus 2000 FEXs for 1GE and 10GE rack-mount servers; direct-attach 10GE rack-mount servers; blade servers (1 and 10GE) with pass-thru modules to the access layer; a 10GE blade switch with FCoE (Nexus 4000, for IBM/Dell chassis); UCS compute, blade and rack; and the Nexus 1000V for virtual access.

Summary: traditional user switching continues to evolve with the Catalyst family, while new Data Centre switching technology is available on the Nexus family. Interoperation between the two segments is seamless, but it is still a case of matching business and network needs.

Q & A

Complete Your Online Session Evaluation: directly from your mobile device by visiting www.ciscoliveaustralia.com/mobile and logging in with your badge ID (located on the front of your badge); at one of the Cisco Live internet stations located throughout the venue; or by opening a browser on your own computer to access the Cisco Live onsite portal.