Data Center Design for the Midsize Enterprise
Silvo Lipovšek, Systems Engineer, slipovse@cisco.com




Designing Data Centers for Midsize Enterprises
Midsize enterprises require dedicated data center switches, separate from the campus.
Workloads are mostly virtualized, with some physical; many sites are transitioning from a collapsed core.
Scalable: size for current needs, reuse components in larger designs.
Design options: feature choice + priority = tradeoffs.
Driving efficiency: SDN, programmability, orchestration, automation; cloud with control.
[Diagram: client access, campus, WAN/DCI, the L3/L2 boundary, and FC / FCoE / iSCSI / NAS storage]

What are you ready for?
Direction will depend on where you draw the line:
Want to stay with existing toolsets for configuration and management?
Interested in new toolsets to buy some efficiency?
Capable of consuming a new set of tools?
New or traditional operational model?

Data Center Fabric Journey
The fabric evolves from STP, to vPC, to FabricPath, to VXLAN, and on to FabricPath/BGP and VXLAN/EVPN, with each stage connecting out to the MAN/WAN.

Common Building Blocks
Existing 2-tier and 3-tier designs (DC core plus DC PODs): vPC, FEX; Nexus 3000, 5600, 7000.
Programmable SDN overlay model: VXLAN bridging and routing, integrated network virtualization, SDN controllers.
Application profiles and policies: Application Centric Infrastructure, policy model, automation, APIC; Nexus 9000.

Single-Tier, Dual-Tier, Spine/Leaf
Design options range from a single-layer DC, to a dual-tier DC with VXLAN, to a small spine/leaf, up to a scalable spine/leaf DC fabric with VXLAN.

Connectivity & Usage Needs Drive Design Choices
Form factor: Unified Computing fabric, 3rd-party blade servers, rack servers (non-UCS managed).
Storage protocols: Fibre Channel, FCoE, IP (iSCSI, NAS).
Virtualization requirements: vSwitch/DVS/OVS/Nexus 1000V.
Automation/orchestration: abstraction, APIs, programmability, orchestration.
Connectivity model: 10- or 1-GigE server ports, NIC/HBA interfaces per server, NIC teaming models.

Data Center Fabric Needs
North-south traffic: end users and external entities.
East-west traffic: clustered applications, workload mobility.
High throughput, low latency; increasing high-availability requirements.
Automation and orchestration.
[Diagram: the data center fabric connecting compute, services, storage (FC / FCoE / iSCSI / NAS), the enterprise network, the Internet, public cloud, and an offsite DC]

Why Spine-Leaf Design? Pay-as-You-Grow Model
Need more host ports? Add another leaf: 2x48 10G (96 ports, 960 Gbps total), 3x48 10G (144 ports, 1440 Gbps total), 4x48 10G (192 ports, 1920 Gbps total).
Need to speed up flow completion times (FCT)? Add more backplane: spread the load across more spines and more 40G fabric ports to lower per-spine utilization.
Lower FCT = faster applications.

Spine/Leaf DC Fabric = a Large Non-Blocking Switch
[Diagram: the leaf-facing ports of the fabric map one-to-one onto the ports of a single large non-blocking switch]

Spine/Leaf DC Fabric = a Large Modular Switch
[Diagram: the leaves correspond to line cards and the spines to fabric modules of a single large modular switch]

Impact of Link Speed: the Drive Past 10G Links
200G of aggregate uplink bandwidth can be built as 20x10 Gbps, 5x40 Gbps, or 2x100 Gbps uplinks, with 20x10 Gbps downlinks in each case.
40G and 100G fabrics provide very similar performance for fabric links; 40G provides performance, link redundancy, and low cost with BiDi optics.

40G BiDi Optics Preserve Existing 10G Cabling
SFP-10G-SR ($995): one fiber pair over the OM4 plant, MMF LC patch cords at each end.
QSFP-40G-SR4 ($2995): four fiber pairs over the OM4 plant, MPO connectors at each end.
QSFP-40G-SR-BD ($1095): one fiber pair over the OM4 plant with MMF LC patch cords, just like 10G-SR; distance <= 125 m with OM4.

Single Layer Data Center, Nexus 5500
Dedicated Nexus 5500-based switch pair. Question: 5500 or 5600?
Positive: Unified Ports on all ports for maximum flexibility; can work as an FC/FCoE access transition switch; non-blocking, line-rate 10 Gbps L2; ~2 us latency; supports FabricPath and DFA*; 160G of Layer 3 with the L3 daughter card or GEM; supports 24 FEX, Adapter-FEX, VM-FEX.
Negative: L3 card is 160G max, not cumulative; DFA L2-only leaf; no VXLAN hardware support; no ACI support; no native DCI support; ISSU not supported with L3; FEX count lower with L3.

Single Layer Data Center, Nexus 5600
Dedicated Nexus 5600-based switch pair. Question: 5500 or 5600?
Positive: good price/performance; Unified Ports for good flexibility (not on all ports); supports VXLAN, FabricPath, DFA; non-blocking, line-rate L2/L3; native 40G/10G with breakout; ~1 us latency; supports 24 FEX, Adapter-FEX, VM-FEX.
Negative: no ACI support; no native DCI support; ISSU not supported with L3.

Single Layer Data Center, Nexus 5696Q (6004)
Positioned for rapid scalability and a 40-GigE fabric.
Positive: Unified Ports and good flexibility with expansion; non-disruptive scale-up; 96x40G or 384x10G; supports VXLAN, FabricPath, DFA; non-blocking, line-rate L2/L3; native 100G/40G/10G with BiDi and breakout support; ~1 us latency; supports 48 L2 FEX, 24 L3 FEX, Adapter-FEX, VM-FEX.
Negative: no VXLAN hardware support in early models; no ACI support; no native DCI support; FEX count lower with L3; ISSU not supported with L3; higher initial cost.

Single Layer Data Center, Nexus 9300
Dedicated Nexus 9300-based switch pair.
Positive: good price/performance; VXLAN support in hardware; ACI leaf and spine support; non-blocking, line-rate L2/L3; native 40G and 10G; <1 us latency; FEX support (16); FCoE hardware support*; breakout on some 40G ports.
Negative: no Unified Ports; FCoE will require software support; no FabricPath or DFA support; VXLAN control plane is multicast until EVPN; no native DCI support; ACI spine and ACI leaf roles are not interchangeable.

Single Layer Data Center, Nexus 7000
Highly available virtualized chassis, access/aggregation model.
Positive: more feature-rich platform; modular, easy scale-up; flexible L2/L3 with ISSU; LISP*, OTV, FEX, FCoE, FabricPath, VXLAN*; native 100G, 40G, and 10G with breakout; DFA spine/leaf; supports 32 FEX; VDC, PBR, WCCP, MACsec.
Negative: higher initial capital cost; no Unified Ports; VXLAN support in the future; no ACI support; larger physical footprint.

Single Layer Data Center, Nexus 9500
Highly available chassis, access/aggregation model.
Positive: modular, easy scale-up; flexible L2/L3 with ISSU*; FEX*, FCoE*, VXLAN*; native 100G, 40G, and 10G with breakout; supports 32 FEX*; ACI spine/leaf support*.
Negative: higher initial capital cost; no Unified Ports; FEX, VXLAN, and FCoE support in the future; no DFA or FabricPath support; ISSU coming in the future; VDC in the future; no native DCI.

Scaling with Spine/Leaf
40G fabric ports needed for a non-oversubscribed design, with 72x40G of spine capacity available:
Two racks: 96x10G host ports (960 Gbps) need 24x40G fabric ports.
Three racks: 144x10G (1440 Gbps) need 36x40G.
Four racks: 192x10G (1920 Gbps) need 48x40G.
Five racks: 240x10G (2400 Gbps) need 60x40G.
Six racks: 288x10G (2880 Gbps) need 72x40G.
*** This example is 100% non-blocking and non-oversubscribed; an oversubscribed model could be built with FEX or fewer fabric links. Server/rack density depends on load, power, and cooling (geo-diverse).

When do you add or upgrade spines?
Six racks: 288x10G host ports (2880 Gbps) need 72x40G fabric ports, with 72x40G available.
Eight racks: 384x10G host ports (3840 Gbps) need 96x40G fabric ports; adding spines makes 144x40G available, with a smaller failure impact per spine.
*** Same assumptions: 100% non-blocking and non-oversubscribed; an oversubscribed model with FEX or fewer fabric links is also possible, and server/rack density depends on load, power, and cooling (geo-diverse).

When do you add or upgrade spines?
Eight racks: 384x10G host ports (3840 Gbps) need 96x40G fabric ports for a non-oversubscribed design.
Moving to modular spines with 2x36 40G ports each raises the available fabric capacity (from 140x40G toward 280x40G in this example) and adds line-card redundancy, spine ISSU, and similar benefits.
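To make the arithmetic behind these scaling slides concrete, here is a minimal sketch of the math only (illustrative, assuming one leaf per rack with 48x10G host ports, 40G uplinks, and 36-port 40G spines; these assumptions mirror the example above but are not prescriptive):

    # Illustrative sketch of the non-oversubscribed scaling math above.
    # Assumptions: one leaf per rack, 48 x 10G host ports per leaf,
    # 40G fabric uplinks, and fixed spines with 36 x 40G ports each.
    HOST_PORTS_PER_LEAF = 48
    HOST_GBPS = 10
    UPLINK_GBPS = 40
    SPINE_PORTS = 36

    def fabric_ports_needed(racks):
        """40G fabric ports required so uplink bandwidth >= host-facing bandwidth."""
        host_bw = racks * HOST_PORTS_PER_LEAF * HOST_GBPS   # Gbps toward the hosts
        return -(-host_bw // UPLINK_GBPS)                   # ceiling division

    def spines_needed(racks):
        """Smallest spine count whose total 40G ports cover the fabric ports needed."""
        return -(-fabric_ports_needed(racks) // SPINE_PORTS)

    for racks in range(2, 9):
        print(f"{racks} racks: {racks * HOST_PORTS_PER_LEAF}x10G host ports, "
              f"{fabric_ports_needed(racks)}x40G fabric ports, "
              f"{spines_needed(racks)} spine(s)")

Running it reproduces the figures above (for example, six racks → 72x40G fabric ports, eight racks → 96x40G), and shows where a third fixed spine, or a move to modular spines, becomes necessary.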

Scaling a vPC-based DC design
Start with a vPC access-layer pair carrying VLANs 100-150, with the L3/L2 boundary above it.

Scaling a vPC-based DC design (continued)
Add a DC core layer and a second access-layer pair carrying VLANs 151-200 alongside the original VLANs 100-150.

Integrating Spine/Leaf with an existing network
For a new DC data-row upgrade or a new application, an ACI pod (a VXLAN-based ACI fabric with a spine layer, leafs, and ACI border leafs) is attached to the existing core layer alongside the aggregation and access layers carrying VLANs 100-150, 151-200, and 201-250.

Integrating Spine/Leaf with an existing network (continued)
As migration proceeds, that access block is served by ACI leafs and border leafs, while the remaining access layers (VLANs 100-150 and 151-200) stay attached to the existing core and aggregation layers.

Data Center Interconnect Options
Options for L2 interconnect between two sites: ASR 1000 routers or Nexus 7000 switches at each site's WAN/DCI edge, with virtual DC services in software on the virtualized servers at both ends (Nexus 1000V, vPath, CSR 1000V).

Nexus Programmability

Provisioning & Orchestration        Nexus 7K    Nexus 5K/6K   Nexus 9K
Puppet/Chef                         Future      Shipping      Shipping
PoAP                                Shipping    Shipping      Shipping
OpenStack                           Shipping    Shipping      Shipping

Protocols, Data Models & Programmatic Interfaces
XMPP                                Shipping    Shipping      Future
LDAP                                Shipping    Shipping      Shipping
NETCONF/XML                         Shipping    Shipping      Shipping
NX-API (JSON/XML)                   Future      Future        Shipping
YANG                                Future      Future        Future
REST                                Future      Future        Shipping
Native Python                       Shipping    Shipping      Shipping
Integrated container                Coming      Future        Shipping
Guest Shell                         Future      Future        Shipping
OnePK                               Future      Shipping      Roadmap
OpenFlow                            Future      Shipping      Shipping
OpFlex                              Future      Future        Future
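"Native Python" in the table is the on-box interpreter. As a hedged illustration only (assuming an NX-OS release that exposes the built-in cli/clid module; module names and JSON key names vary by platform and release), a script run on the switch itself might look like this:

    # On-box NX-OS Python sketch: run from the switch's "python" shell.
    # Assumes the built-in cli module is present: cli() returns raw text,
    # clid() returns JSON text. Key names below may differ by release.
    import json
    from cli import cli, clid

    # Pull interface state as JSON and flag interfaces reporting input errors.
    data = json.loads(clid("show interface"))
    rows = data["TABLE_interface"]["ROW_interface"]
    rows = rows if isinstance(rows, list) else [rows]
    for row in rows:
        errors = int(row.get("eth_inerr", 0) or 0)
        if errors:
            print("%s: %d input errors" % (row["interface"], errors))

    # Configuration can be pushed from the same script.
    cli("configure terminal ; interface loopback100 ; description set-from-python")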

Programming for Many Boxes: GitHub repository at https://github.com/datacenter/

Programming Examples
Here's an example that uses the NX-API on the Nexus 9000. It can automate mundane configuration tasks: you launch it remotely (from your Mac/PC) and use it to get an inventory of the switch, configure new interfaces, and so on:
https://github.com/datacenter/nexus9000/blob/master/nx-os/nxapi/getting_started/nxapi_basics.py
Here's another that collects the output of several show commands and puts them together to create a "super command" with nice NX-OS-style formatting:
https://github.com/datacenter/nexus9000/blob/master/nx-os/python/samples/showtrans.py
There are a few others, such as a CRC error check, here:
https://github.com/datacenter/nexus7000/blob/master/crc_checker_n7k.py
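For context, NX-API in those examples is an HTTP front end to the CLI. A minimal off-box sketch in the same spirit (the switch address and credentials are placeholders, and it assumes NX-API has been enabled with "feature nxapi") fetches "show version" as JSON:

    # Minimal off-box NX-API sketch (run from your Mac/PC, not on the switch).
    # Assumes "feature nxapi" is enabled; hostname and credentials are placeholders.
    import requests

    SWITCH = "https://n9k-1.example.com/ins"   # hypothetical management address
    AUTH = ("admin", "password")               # placeholder credentials

    payload = {
        "ins_api": {
            "version": "1.0",
            "type": "cli_show",        # use cli_conf to send configuration commands
            "chunk": "0",
            "sid": "1",
            "input": "show version",
            "output_format": "json",
        }
    }

    resp = requests.post(SWITCH, json=payload, auth=AUTH, verify=False, timeout=30)
    resp.raise_for_status()
    body = resp.json()["ins_api"]["outputs"]["output"]["body"]
    print(body.get("sys_ver_str"), body.get("chassis_id"))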

Options for Spine-Leaf
FabricPath: FP-header encapsulation; each switch managed individually.
VXLAN: VXLAN-header encapsulation; each switch managed individually.
Dynamic Fabric Automation (DFA): FabricPath with DCNM as a central point of management (CPoM) plus automation for simpler administration; the fabric is managed as a system.
Application Centric Infrastructure (ACI): a non-blocking, penalty-free overlay fabric; "service profiles for the network" (e.g. Outside, Web, and App tiers in a tenant VRF joined by QoS, filter, and service policies); automation of network and services; policy-based infrastructure driven from the APIC database and based on application modeling.

UCS Manages Compute through Abstraction
A service profile abstracts the server: SAN connectivity configuration, LAN connectivity configuration, motherboard firmware, BIOS configuration, adapter firmware, boot order, RAID configuration, and maintenance policy.
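As a rough illustration of that abstraction (not from the slides), service profiles can also be created programmatically. The sketch below uses the ucsmsdk Python library with placeholder UCS Manager address, credentials, and names, and creates only a bare profile object rather than a full set of policies:

    # Hedged sketch: create an (empty) UCS service profile with the ucsmsdk library.
    # Hostname, credentials, and names are placeholders; a real profile would also
    # reference boot, BIOS, firmware, vNIC/vHBA, and maintenance policies.
    from ucsmsdk.ucshandle import UcsHandle
    from ucsmsdk.mometa.ls.LsServer import LsServer

    handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
    handle.login()

    sp = LsServer(parent_mo_or_dn="org-root",
                  name="web-sp-01",
                  descr="Service profile created from Python")
    handle.add_mo(sp)
    handle.commit()

    handle.logout()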

ACI Manages Communications through Abstraction
An application network profile abstracts the communications between application tiers: network path forwarding, external connectivity, ACLs, QoS, firewall configuration, server load balancer configuration, and the connectivity policy between each pair of tiers.
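To make the application network profile idea concrete, here is a hedged sketch (APIC address, credentials, and object names are placeholders) that logs in to the APIC REST API and pushes a tenant with one application profile containing "web" and "app" EPGs; contracts, filters, and L4-7 service graphs are omitted for brevity:

    # Hedged ACI REST sketch: push a tenant + application profile + two EPGs.
    # APIC address, credentials, and names are placeholders; certificate
    # verification is disabled only to keep the example short.
    import requests

    APIC = "https://apic.example.com"            # placeholder
    session = requests.Session()
    session.verify = False

    # Authenticate; the APIC returns a session cookie that the Session keeps.
    login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
    session.post(APIC + "/api/aaaLogin.json", json=login).raise_for_status()

    # Tenant -> application profile -> EPGs, expressed as the ACI object tree.
    tenant = {
        "fvTenant": {
            "attributes": {"name": "midsize-demo"},
            "children": [{
                "fvAp": {
                    "attributes": {"name": "web-app"},
                    "children": [
                        {"fvAEPg": {"attributes": {"name": "web"}}},
                        {"fvAEPg": {"attributes": {"name": "app"}}},
                    ],
                }
            }],
        }
    }

    session.post(APIC + "/api/mo/uni.json", json=tenant).raise_for_status()
    print("APIC accepted the application network profile")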

Cisco UCS Director for Compute and Storage: Consumption Made Easier
Policy-driven provisioning of secure cloud containers on a converged stack: network and services, virtualized and bare-metal compute (hypervisor, OS, and VMs), and storage, delivered per tenant.
A single pane of glass for virtual and physical resources: speed and accuracy, more efficient use of people and time, and consistency with fewer errors in repetitive tasks.

Cisco InterCloud Architectural Details
The administrator installs InterCloud Director (UCS Director based); the service provider admin deploys the InterCloud Provider Enablement Platform (ICPEP) for Cisco Global InterCloud or a partner white-label cloud.
End users and IT admins work through InterCloud Director and the VM manager. The InterCloud Secure Fabric, with the InterCloud Extender on the private side and the InterCloud Switch and InterCloud Services on the public side, is installed and configured through InterCloud Director.

InterCloud Components
InterCloud Director: UCS Director based, with a separate interface.
InterCloud Secure Fabric: Nexus 1000V based, but doesn't require a full Nexus 1000V install; a vNIC from the InterCloud connector plugs into the vSwitch; optional services integration with CSR 1000V.
InterCloud Provider Enablement Platform: ICF Provider Edition, implemented by the provider.

Key Takeaways
Cisco has many options for building DC solutions.
All solutions can start small and grow; it does not have to be a rip-and-replace.
Spine-leaf does not have to be expensive.
Automated fabrics can provide new tools for simplified operations.
Cloud technologies can expose new operational models.

Want more? Cisco Live sessions have been recorded and the slides are available on www.ciscolive365.com. Here are some related sessions:
BRKDCT-3378 - Advanced - Building simplified, automated and scalable Data Center network with Overlays (VXLAN/FabricPath)
BRKDCT-2789 - Intermediate - End-to-End Application-Centric Infrastructure (ACI) Automation with UCS Director
BRKDCT-2334 - Intermediate - Real World Data Center Deployments and Best Practice Session
BRKDCT-2378 - Intermediate - vPC Best Practices and Design on NX-OS
BRKDCT-2328 - Intermediate - Evolution of Network Overlays in Data Center Clouds
BRKDCT-2404 - Intermediate - VXLAN deployment models - A practical perspective
BRKDCT-2049 - Intermediate - Overlay Transport Virtualization
BRKAPP-9004 - Intermediate - Data Center Mobility, VXLAN & ACI Fabric Architecture
BRKACI-2244 - Intermediate - Application Virtual Switch for Application Centric Infrastructure Overview
BRKSEC-2133 - Intermediate - Deploying Security in ACI
BRKACI-2001 - Intermediate - Integration and Interoperation of existing Nexus networks into an ACI architecture
BRKACI-2006 - Intermediate - Integration of Hypervisors and L4-7 Services into an ACI Fabric
BRKVIR-2931 - Intermediate - End-to-End Application-Centric Data Center