Data Center Use Cases and Trends
Amod Dani, Managing Director, India Engineering & Operations
http://www.arista.com
Open Networking Foundation India Symposium, January 31 to February 1, 2016, Bangalore
Agenda
- Data Center Trends
- Data Center challenges & needs
- Data Center Network Architecture
- Data Center Interconnect with VXLAN
- Data Center Network Programmability
- Data Center / CDN Edge use case
DC Trend: Cloud growth for several years to come
DC Trend: Server adoption
DC Trend: Bandwidth
Is your network faster today than it was 3 years ago?
It should be.
[Charts: growth in # of 10G ports and millions of transistors per switch chip, 2007 to 2013]
- 2-3 generations of silicon
- 1G -> 100G speed transition
- Lowest latency with 2-tier Leaf/Spine
Data Center challenges & needs
- A highly resilient network: no downtime, planned or unplanned
- High bandwidth
- Automated provisioning, change control and upgrades: legacy human network middleware can't scale to the demand!
- Support for all use cases and applications: client-server, modern distributed apps, Big Data, storage, virtualization
Data Center challenges & needs
- Multi-tenancy
- Integrated security
- Low and predictable latency
- Low power consumption
- Ability to add racks over time
- Mix and match multiple generations of technologies
Data Center Network Architecture
- 10/40/100G IP fabric
- Standards based: LACP/LAG/L3
- Workloads: Big Data, IP storage, cloud, Web 2.0, legacy applications, VM farms, VDI
Data Center Design: L3LS, 4-way Layer 3 Leaf/Spine with ECMP
[Diagram: 4 spines; dual-homed leaf MLAG pairs with LAG-attached hosts; an edge/border leaf MLAG pair connecting edge routers to Metro A, Metro B, the MPLS core and external networks; 10GbE and 40GbE L3 links]
- Spine redundancy and capacity
- Ability to grow/scale as capacity is needed
- Collapsing of fault/broadcast domains (due to Layer 3 topologies)
- Deterministic failover and simpler troubleshooting
- Readily available operational expertise as well as a variety of traffic engineering capabilities
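A leaf in this design can be sketched with a minimal BGP configuration (illustrative only, assuming Arista EOS-style syntax; all ASNs, addresses and the rack subnet are hypothetical):

```
! Hypothetical leaf switch, eBGP to four spines
router bgp 65101
   router-id 10.0.0.11
   maximum-paths 4                  ! ECMP across the four spines
   neighbor 10.1.1.1 remote-as 65000
   neighbor 10.1.2.1 remote-as 65000
   neighbor 10.1.3.1 remote-as 65000
   neighbor 10.1.4.1 remote-as 65000
   network 10.10.10.0/24            ! rack subnet advertised into the fabric
```

With `maximum-paths 4`, equal-cost routes learned from all four spines are installed simultaneously, giving the deterministic multipath failover described above.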
Data Center Design: L3LS with VXLAN, Layer 3 Leaf/Spine with Layer 2 VXLAN Overlay
[Diagram: 4 spines; CloudVision VXLAN Control Service; Layer 2 VXLAN overlays (VNI-5013, VNI-6829) over a Layer 3 IP fabric; active/active VTEPs with MLAG; dual-homed compute racks; edge routers to Metro A, Metro B, MPLS core and external networks; 10GbE L2 and 40GbE L3 links]
- Network-based overlay: Virtual Tunnel End Points (VTEPs) reside on physical switches at the leaf, spine or both
- Data plane learning is integrated into the physical hardware/software
- Hardware-accelerated VXLAN encap/decap; VXLAN bridging and routing
- Support for all workload types: bare metal or virtual machines, IP storage, firewalls, load balancers/application delivery controllers, etc.
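A hardware VTEP on a leaf can be sketched as follows (illustrative only, assuming Arista EOS-style syntax; the loopback address and VLAN-to-VNI mappings are hypothetical, with VNIs borrowed from the diagram):

```
! Hypothetical leaf VTEP configuration
interface Loopback1
   ip address 10.0.1.11/32          ! VTEP source address in the IP fabric
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   vxlan vlan 13 vni 5013           ! stretch local VLAN 13 as VNI 5013
   vxlan vlan 29 vni 6829
```

The encap/decap itself runs in the switch ASIC, so overlay traffic is forwarded at line rate over the routed leaf/spine underlay.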
Data Center Interconnect with VXLAN
- Enterprises are looking to interconnect DCs across geographically dispersed sites
- DCI provides Layer 2 connectivity between sites, enabling VM mobility and server migration between sites
- Within the DC: POD interconnect for server migration between a DC's PODs and for integrating new infrastructure
[Diagram: VNI stretched between VTEPs at each site]
Data Center Network Programmability
- Automation of repetitive configuration tasks: VLAN and interface state, ACL entries, software image management, configuration templates
- Choose your level of integration with the network overlay solution: full hardware VTEP design; mixed VXLAN in hardware or hypervisor; fully hypervisor-based with an underlying VXLAN-aware network
- Dynamic provisioning of VLANs
- Network-wide visibility and monitoring: congestion management, virtual-to-physical connectivity, connectivity monitoring
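The kind of repetitive VLAN provisioning listed above can be sketched with a small templating function (a minimal illustration in Python; the function, port names and EOS-style config syntax are assumptions, not a specific product API):

```python
def vlan_config(vlan_id: int, name: str, trunk_ports: list) -> str:
    """Render a VLAN stanza plus trunk updates for each uplink port."""
    lines = [f"vlan {vlan_id}", f"   name {name}"]
    for port in trunk_ports:
        lines += [
            f"interface {port}",
            "   switchport mode trunk",
            f"   switchport trunk allowed vlan add {vlan_id}",
        ]
    return "\n".join(lines)

# Generate the same change for every leaf in one step,
# instead of typing it box-by-box at the CLI.
print(vlan_config(13, "web-tier", ["Ethernet1", "Ethernet2"]))
```

Rendered text like this would then be pushed to each switch by whatever configuration-management channel is in use.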
Data Center / CDN Edge Use case
[Diagram: DC edge switch with cache servers behind it; transit provider supplying a BGP default route; private peering sessions (Peers A..) supplying IP prefixes; BGP SDN controller with an sFlow collector and a pmacct traffic-flow analyzer]
DC / CDN Edge Use case: BGP controller (peer) pipeline
1. Receive IP prefixes / advertise via BGP to the controller (peer)
2. Apply policy filter: install-map with various match criteria supported
3. RIB: best-path selection; mark inactive BGP routes
4. Install into the IP FIB
Data Center / CDN Edge Use case
- Limited requirement for large routing tables at the DC/CDN edge: roughly 90% of traffic hits less than 10% of the routes in the RIB
- Channel higher bandwidth towards the switch, away from expensive internet router ports
- The edge router (a role played by the DC switch) only needs to program a small subset of prefixes in hardware (FIB)
- BGP controller: sFlow and BGP information are sent to the controller, which computes the top N prefixes and instructs the router to install them in the FIB
- Spotify and Netflix are already using this in their networks: https://media.readthedocs.org/pdf/sdn-internet-router-sir/latest/sdn-internet-routersir.pdf
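The controller's top-N computation can be sketched as follows (a minimal illustration in Python; the sample data shapes and prefix values are assumptions, not the actual SIR implementation):

```python
from collections import Counter

def top_prefixes(flow_samples, n):
    """Given (prefix, bytes) samples from sFlow, return the n prefixes
    carrying the most traffic -- the only ones worth programming in FIB."""
    traffic = Counter()
    for prefix, nbytes in flow_samples:
        traffic[prefix] += nbytes
    return [prefix for prefix, _ in traffic.most_common(n)]

samples = [
    ("203.0.113.0/24", 9_000_000),
    ("198.51.100.0/24", 500_000),
    ("203.0.113.0/24", 4_000_000),
    ("192.0.2.0/24", 7_000_000),
]
# Program only the heaviest prefixes in hardware;
# everything else follows the BGP default towards the transit provider.
print(top_prefixes(samples, 2))  # → ['203.0.113.0/24', '192.0.2.0/24']
```

Everything outside the top N stays in the controller's RIB only, which is what lets a merchant-silicon switch with a small FIB play the internet-edge role.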
Thank you!