How to Achieve True Active-Active Data Centre Infrastructures
2 How to Achieve True Active-Active Data Centre Infrastructures. Carlos Pereira, Distinguished Systems Engineer II, WW Data Centre / Cloud (with extensive credits and thanks to my fellow Cisco colleagues: Victor Moreno, Patrice Bellagamba, Yves Louis, Mike Herbert) #clmel
3 Active / Active Data Centres: Everybody wants that. Then try to figure this out, and divide & conquer. Then, feel tired (or panic).
4 Objectives
- Understand the Active/Active Data Centre requirements and considerations
- Provide considerations for Active/Active DC design, inclusive for Metro areas, from the storage, DCI, LAN extension, ACI and network services perspectives
- Brainstorm about ACI fabric extension, stretched fabrics, application portability, VM mobility, policy synchronisation, etc.
- Share experiences with stateful device placement and its impact within a DCI environment
Legend (icons used throughout): Load Balancer; SVI / HSRP Default Gateway; SSL Offloader; IDS / IPS; APIC (Application Policy Infrastructure Controller); WAN Accelerator; Firewall
Reference slides will be covered quickly (if at all) during the session.
5 Agenda Active-Active (A/A) Data Centre: Market & Business Drivers Terminology, Criticality levels and Solutions Overview A/A Data Centre Design Considerations: Storage Extension Data Centre Interconnect (DCI) L2 & L3 scenarios A/A Metro Data Centres Designs - Network Services and Applications (Path optimisation) Cisco ACI and Active / Active Data Centre Q&A 5
6 Some Important Trends Impacting the Data Centre Evolution: 1. More workloads are moving to Cloud Data Centres - the increasing density of business-critical workloads hosted in the Cloud is driving new multi-site designs to handle Business Continuity, Workload Mobility, and Disaster Recovery. 2. Cloud Data Centres include more virtualised workloads per server. 3. Traffic in each area of the Data Centre is increasing dramatically. Source: Cisco Global Cloud Index, Forecast and Methodology (Global Data Centre Traffic by Destination).
7 4. Two Market Transitions, One DC: Network - from traditional Data Centre networking (switching) to Application Centric Infrastructure (ACI): DC network + services, abstraction & automation, and application policy. Apps - portability, cross-platform support & automation across virtual machines, LXC / Docker containers, applications and PaaS infrastructure (HyperScale Data Centres).
8 5. The App Market Transition - From Monolithic To Cloud-Aware: from the traditional monolithic multi-tier app (IaaS) to the cloud-aware app (PaaS).
9
10 Terminology The Terminology around Workload and Business Availability / Continuity is not always consistent Some examples: Availability Zone AWS - Availability Zones are distinct locations within a region that are engineered to be isolated from failures in other Availability Zones OpenStack - An availability zone is commonly used to identify a set of servers that have a common attribute. For instance, if some of the racks in your data centre are on a separate power source, you can put servers in those racks in their own availability zone.
11 Availability Zone and Regions AWS Definitions Regions are large and widely dispersed into separate geographic locations. Availability Zones are distinct locations within a region that are engineered to be isolated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same region
12 Availability Zone and Regions - OpenStack Definitions. Regions - each Region has its own full OpenStack deployment, including its own API endpoints, networks and compute resources. Different Regions share one set of Keystone and Horizon to provide access control and a Web portal. (Newer deployments do not share Keystone.) Availability Zones - inside a Region, compute nodes can be logically grouped into Availability Zones; when launching a new VM instance, you can specify an AZ, or even a specific node inside an AZ, to run the VM instance, as the sketch below shows.
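As an illustration, a minimal OpenStack CLI sketch of pinning an instance to an AZ (the AZ name, host name, image and flavor are all illustrative):

# Launch a VM pinned to availability zone AZ1
openstack server create --image ubuntu-14.04 --flavor m1.small --availability-zone AZ1 web-01

# Nova also accepts an AZ:host pair (admin only) to target a specific compute node in the AZ
openstack server create --image ubuntu-14.04 --flavor m1.small --availability-zone AZ1:compute-03 web-02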
13 In-Region and Out-of-Region Data Centres Business Continuity and Disaster Recovery Active/active Traffic intended for the failed node is either passed onto an existing node or load balanced across the remaining nodes. Active/passive Provides a fully redundant instance of each node, which is only brought online when its associated primary node fails. Out-of-Region Beyond the Blast Radius for any disaster
14 Business Continuity and Disaster Recovery Ability to Absorb the Impact of a Disaster and Continue to Provide an Acceptable Level of Service. Operational Continuity Planned / Unplanned Service Continuity within the Metro (including a single data centre loss) Disaster Recovery (DR) Loss of Regional data centres leads to recovery in a Remote data centre Applications and services extended across Metro and Geo distances are a natural next step. DC-1 DC-2 DC-3 Active-Active Metro Area Pervasive Data Protection + Infrastructure Rebalance
15 Industry Standard Measurements of Business Continuity: RTO (Recovery Time Objective) - the time to recover; RPO (Recovery Point Objective) - the data lost. [Timeline figure: for a failure at a given point in time, RPO is measured backwards to the last usable copy of the data, RTO forward to service restoration.]
16 Application Resiliency and Business Criticality Levels. Defining how a service outage impacts the business will dictate the redundancy strategy (and cost). Each Data Centre should accommodate all levels; cost is an important factor.
- C1 Mission Imperative (lowest RTO/RPO: 0 to 15 mins): any outage results in immediate cessation of a primary function, equivalent to immediate and critical impact to revenue generation, brand name and/or customer satisfaction; no downtime is acceptable under any circumstances.
- C2 Mission Critical (RTO/RPO 0 to 15 mins): any outage results in immediate cessation of a primary function, equivalent to major impact to revenue generation, brand name and/or customer satisfaction.
- C3 Business Critical (RTO/RPO 15+ mins): any outage results in cessation over time or an immediate reduction of a primary function, equivalent to minor impact to revenue generation, brand name and/or customer satisfaction.
- C4 Business Operational: a sustained outage results in cessation or reduction of a primary function.
- C5 Business Administrative: a sustained outage has little to no impact on a primary function.
Typical distribution and cost: C1/C2 ~20% of apps, with 100% duplicate resources and a 2x cost multiplier (most costly); C3 ~20% of apps; C4 ~40% of apps; C5 ~20% of apps, with many-to-one resource sharing and a lower cost multiplier (less costly). The 0-15 min and 15+ min values are two of the most common RTO/RPO targets.
17 Mapping Applications to Business Criticality Levels
18 Application Centric View of Data Centre Interconnect (DCI) Applications consume resources across the Cloud DC infrastructure If an Application moves between sites, each element of the Application Environment must also adjust to the new location DCI extends the Application Environment between Geographic sites within Private Clouds and Public Clouds Critical IT Use Cases including Business Continuity, Workload Mobility, and Disaster Recovery within Public and Private Clouds, impact each element of the Application Environment Multi DC WAN and Cloud DC Fabric Networking L4 L7 Services Hypervisors and Virtual Networking Compute Storage Application Environment DCI EXTENDS THE APPLICATION ENVIRONMENT ACROSS MULTIPLE SITES, SUPPORTING PHYSICAL AND VIRTUAL ELEMENTS
19 DCI Enables Critical Use Cases within Private Clouds and Public Clouds Including Business Continuity and Workload Mobility between Metro/Geo Sites Site 1 Site 2 Business Continuity Workload Mobility Disaster Recovery and Avoidance Site Migrations Business Use Cases Load Balanced Workloads Operations Maintenance Operations Rebalancing Application Geo-Clusters Application Environment Multi DC WAN and Cloud DC Fabric Networking L4 L7 Services Hypervisors and Virtual Networking Compute Storage
20 The Application Environment Spans Many Cloud Resources - DCI Extends Cloud Resources to Support Multi-site Use Cases.
Multi-DC WAN and Cloud DC Fabric Networking: WAN connectivity (IP Internet access, MPLS VPN access, physical or virtual WAN router); IP path optimisation (LISP, DNS, Site Selector); L3 routing and IGP (OSPF, ISIS, BGP); Data Centre Interconnect (Overlay Transport Virtualisation (OTV), EoMPLS, VPLS, E-VPN); Data Centre fabrics (Virtual Port Channel (vPC), VXLAN-based (with or without an SDN controller), FabricPath, Application Centric Infrastructure (ACI)); fabric services (tenancy, secure segmentation (VRF, VLAN, VXLAN), traffic QoS, bandwidth reservation).
L4-L7 Services: physical and virtual services (firewalls, load balancers, IPSec VPN termination, WAN acceleration service, network analysis, data encryption).
Hypervisors and Virtual Networking: hypervisors (VMware vSphere, Microsoft Hyper-V, KVM (e.g. Red Hat)); hypervisor services (live and cold application migrations, extended clusters, high availability and recovery services, site affinity services); virtual switching (Nexus 1000V, virtual interfaces).
Compute: Unified Computing System (UCS) (C-Series rack servers, B-Series blade servers, physical and virtual interfaces, port and security profiles); integrated PoDs (FlexPod, Vblock, low-cost compute PoDs).
Storage: NetApp, EMC, direct attached storage; storage fabrics (FC, FCoE, 10GE); data replication (synchronous, asynchronous, hypervisor based, over DWDM / IP / FCIP).
APPLICATION TEAMS CHOOSE FROM AVAILABLE DESIGN OPTIONS; THESE FUNCTIONS ARE EXTENDED TO SUPPORT MULTI-SITE USE CASES.
21 DCI Extensions Impact Each Tier of the Cloud Data Centre 21
22 Virtualised Workload Mobility - Inter-DC Deployment. Cisco-VMware validated design and certification (with EMC & NetApp) for virtualised workload mobility. DCI LAN extension across the core network between DC 1 (ESX-A source) and DC 2 (ESX-B target). Virtualised workload mobility across geographically dispersed sites requires stretching VLANs and providing consistent LUN ID access. Disaster Recovery applications, e.g. VMware Site Recovery Manager (SRM).
23 Data Centre Interconnect LAN Extension STP Isolation is the key element Multipoint Loop avoidance + Storm-Control Unknown Unicast & Broadcast control Link sturdiness Scale & Convergence DC 1 DC 2 ESX-A source ESX-B target 23
24 Data Centre Interconnect - Path Optimisation Options. Egress: addressed by FHRP filtering. Ingress: 1. LISP mobility; 2. IGP Assist (a la RHI); 3. GSLB. (DC 1: ESX-A source; DC 2: ESX-B target.)
25 Ex.1: vSphere Redundancy and Mobility Options Can Extend Across Geographies.
VM High Availability: VMware vSphere High Availability (HA); VMware vSphere Fault Tolerance (FT).
VM Mobility: VMware vMotion and Storage vMotion - live migration of VMs and VM storage; non-disruptive migration of VMs and of virtual storage. Shared-Nothing VMware vMotion - live migration of VMs and storage WITHOUT shared storage.
Site / VM Recovery: VMware Site Recovery Manager (SRM) - fully automated site recovery and migration, simple management of recovery and migration plans, non-disruptive testing. VMware vSphere Replication - creates VM snapshot copies available for restoration through vCenter; continuous replication to another location, within or between clusters; hypervisor-based replication with VM granularity.
26 Ex.2: Hyper-V Redundancy and Workload Mobility Options Can Extend Across Geographies Hyper-V Live Migration with Shared SMB Storage Hyper-V Shared Nothing Live Migration DC-1 DC-2 DC-1 DC-2 Hyper-V Storage Migration Hyper-V Replica Application Consistent Snapshots DC-1 DC-2 DC-1 DC-2
27 VMware vSphere and Microsoft Hyper-V environments are concurrently supported in Cisco Cloud solutions (Data Centre 1 and Data Centre 2).
28 Agenda Active-Active (A/A) Data Centre: Market & Business Drivers Terminology, Criticality levels and Solutions Overview A/A Data Centre Design Considerations: Storage Extension Data Centre Interconnect (DCI) L2 & L3 scenarios A/A Metro Data Centres Designs - Network Services and Applications (Path optimisation) Cisco ACI and Active / Active Data Centre Q&A 28
29 Data Centre Interconnect SAN Extension DC 1 Synchronous I/O implies strict distance limitation Localisation of Active Storage is key Consistent LUN ID must be maintained for stateful migration DC 2 ESX-A source ESX-B target 29
30 SAN Extension - Synchronous vs. Asynchronous Data Replication. Synchronous data replication: the application receives the acknowledgement for I/O complete only when both the primary and the remote disks are updated. This is also known as zero-data-loss replication (zero RPO). Metro distances only (the maximum distance depends on the application's latency tolerance). Asynchronous data replication: the application receives the acknowledgement for I/O complete as soon as the primary disk is updated, while the copy continues to the remote disk. Unlimited distances.
31 Synchronous Data Replication - Network Latency. The speed of light is about 300,000 km/s in a vacuum; in fibre it is reduced to about 200,000 km/s, i.e. 5 μs per km (8 μs per mile). That gives an average of 1 ms for light to cross 200 km of fibre. Synchronous replication: the SCSI (FC) protocol takes two round trips (four one-way trips) for each Write command; at about 10 μs per km per round trip, that is 20 μs/km for synchronous data replication. Example at 50 km: 250 μs for the Rec_Ready request to the remote storage array, 250 μs waiting for the response, 250 μs sending the data, 250 μs waiting for the Ack - about 1 ms in total on top of the local storage array write.
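As a sanity check on those figures, a short worked formula, assuming the two-round-trip SCSI write exchange described above and 5 μs/km one-way propagation:

$$ t_{\mathrm{write}} \approx 2 \times \mathrm{RTT} = 2 \times \left(2 \times d \times 5\,\mu\mathrm{s/km}\right) = d \times 20\,\mu\mathrm{s/km}, \qquad d = 50\ \mathrm{km} \Rightarrow t_{\mathrm{write}} \approx 1\ \mathrm{ms} $$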
32 Storage Deployment in DCI - Option 1: Shared Storage. [Diagram: core network with L2 extension for the vMotion network between DC 1 (ESX-A source) and DC 2 (ESX-B target), managed by Virtual Center; the ESX initiators at both sites access the same target volumes, which remain in DC 1.]
33 Storage Deployment in DCI - Shared Storage Improvement Using Cisco IOA. Same shared-storage design as Option 1, with latency improved using the Cisco Write Acceleration feature (I/O Accelerator) on the MDS fabric.
34 Storage Deployment in DCI - Option 2: NetApp FlexCache (Active/Cache). A temporary cache in DC 2 serves reads locally, while writes flow through to the origin NAS in DC 1: the host write is forwarded to the origin subsystem, and the acknowledgement returns to the host only after the origin has acknowledged it. FlexCache does NOT act as a write-back cache: it responds to the host only if/when the original subsystem has acknowledged, so there is no imperative need to protect a FlexCache from a power failure.
35 Storage Deployment in DCI - Option 3: EMC VPLEX Metro (Active/Active). Hosts at both sites instantly access the Distributed Virtual Volume; synchronisation starts at Distributed Volume creation (synchronous latency, Fibre Channel). WRITEs are protected on storage at both Site A and Site B; READs are serviced from the VPLEX cache or local storage.
36 Storage Deployment in DCI - Option 3: EMC VPLEX Metro (Active/Active), continued. From the host: ESX-A (source) and ESX-B (target) both see the same LUNv through the VPLEX virtual layer, with L2 extension for the vMotion network between DC 1 and DC 2. From the storage: VPLEX engines in front of the arrays (e.g. EMC VMAX and EMC CLARiiON) present the distributed LUNv, with a synchronous latency requirement of 5 ms max.
37 Storage Deployment in DCI - Option 4: EMC VPLEX Geo (Active/Active). An active/active storage virtualisation platform for the private and hybrid cloud. Enables workload mobility over long, asynchronous distances (up to 50 ms latency max) using Microsoft Hyper-V, with distributed virtual volumes between VPLEX Geo clusters over an OTV / WAAS L3 network infrastructure. Uses existing WAN bandwidth, which can be optimised; works with EMC or non-EMC storage. Windows Failover Cluster is used to provide HA features, Live Migration and Cluster Shared Volume capability.
38 Agenda Active-Active (A/A) Data Centre: Market & Business Drivers Terminology, Criticality levels and Solutions Overview A/A Data Centre Design Considerations: Storage Extension Data Centre Interconnect (DCI) L2 & L3 scenarios A/A Metro Data Centres Designs - Network Services and Applications (Path optimisation) Cisco ACI and Active / Active Data Centre Q&A 38
39 Extending Virtual Tenant Space Outside the Fabric - Logical view of multi-tier applications. At the Layer 3 edge gateway, maintain the L3 segmentation of the public networks (VRF) to the outside world; towards the remote DC, maintain the L2 segmentation end to end. Each tier (Web segment on a public network; App and DB segments on private networks) needs L3 connectivity outside the fabric and/or L2 connectivity outside the fabric.
40 LAN Extension with DCI - VLAN Types:
Type T0 - limited to a single access layer device
Type T1 - extended inside an aggregation block (POD)
Type T2 - extended between PODs part of the same DC site
Type T3 - extended between PODs part of twin DC sites connected via dedicated dark fibre links
Type T4 - extended between PODs part of twin DC sites connected via xWDM links
Type T5 - extended between PODs part of distant remote DC sites
41 LAN Extension for DCI - Technology Selection.
Over dark fibre or protected DWDM (Ethernet): VSS & vPC - dual-site interconnection; FabricPath - multiple-site interconnection, Metro style.
MPLS transport (SP style): EoMPLS - transparent point-to-point; VPLS - large scale & multi-tenant, point-to-multipoint; E-VPN - large scale & multi-tenant, point-to-multipoint.
IP transport (IP style): OTV - Enterprise-style inter-site MAC routing; LISP - for subnet extension and path optimisation; VXLAN (future for DCI) - emerging, limited A/A site interconnect (requires an anycast gateway).
42 Dual Sites Interconnection Leveraging MECs Between Sites. At the DCI point: STP isolation (BPDU filtering), broadcast storm control, and FHRP isolation (a configuration sketch follows below). Layer 2 only, and/or static L3, link utilisation with Multi-Chassis EtherChannel; enable fast LACP with DWDM; DCI port-channel of 2 or 4 links; requires protected DWDM or direct fibres. Validated design: 200 Layer 2 VLANs with SVIs, 1000 VLAN SVIs (static routing). Check whether vPC supports L3 peering on your Nexus device; if possible, use dedicated L3 links for inter-DC routing! Support for L3 peering across all Nexus gear by CY15; requires an F2 line card (minimum), NX-OS (roadmap).
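A minimal NX-OS sketch of the FHRP isolation piece, using a VACL to keep HSRP hellos local to each site (the HSRPv1/v2 group addresses are standard; the ACL names and VLAN range 100-110 are illustrative):

ip access-list HSRP_FILTER
  10 permit udp any 224.0.0.2/32 eq 1985
  20 permit udp any 224.0.0.102/32 eq 1985
ip access-list ALL_IPS
  10 permit ip any any
!
vlan access-map HSRP_LOCALISATION 10
  match ip address HSRP_FILTER
  action drop
vlan access-map HSRP_LOCALISATION 20
  match ip address ALL_IPS
  action forward
!
vlan filter HSRP_LOCALISATION vlan-list 100-110

Applied on the DCI-facing devices, this keeps an active default gateway alive in each site while the VLANs stay extended.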
43 Dual Sites Use Case Summary - Cisco Validated Design on CCO. Convergence times (seconds):

Test Case                   VSS-VSS  VSS-vPC  vPC-vPC
Hardware failure (Ucast)     <1.7     <1.3     <1.5
Hardware failure (Mcast)     <2.3     <1.7     <1.6
Hardware restore (Ucast)     <1.1     <2.0     <2.8
Hardware restore (Mcast)     <2.8     <2.6     <2.5
Link failure (Ucast)         <1.3     <1.2     <1.2
Link failure (Mcast)         <1.2     <1.6     <0.2
Link restore (Ucast)         <1.7     <1.5     <0.2
Link restore (Mcast)         <1.2     <1.4     <0.2
44 MACsec for Secure Data Centre Interconnect. Single-access dark fibre connectivity: Nexus 7000s as bulk encrypters ("bump in the wire") for self-managed MPLS DCI cores. Dual-access dark fibre connectivity at 40GE/100GE. Note: no support for MACsec on F3 modules; Nexus 7000 M2 (and future M3) modules support 10G / 40G / 100G MACsec.
45 MPLS for DCI - Large Choice of Devices. [Chart: platform positioning by price ($) versus scale, from Mbps through Gbps and 10 Gbps to 100 Gbps, for EoMPLS and VPLS.]
46 EoMPLS Control Plane. PE1 is configured with "xconnect <PE2> <VCID>" and signals (NH: PE1, VC: VCID, label: 8, circuit type: Ethernet type 4/5); PE2 is configured with "xconnect <PE1> <VCID>" and signals (NH: PE2, VC: VCID, label: 27, circuit type: Ethernet type 4/5). The PWs are signalled with targeted LDP; core transport relies on LDP + IGP (OSPF/ISIS), or RSVP for traffic engineering. The xconnect can be applied to a LAN port, a LAN sub-interface, or an SVI.
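As an illustration, a minimal IOS-style sketch of the xconnect applied to a LAN port (the peer loopback 192.0.2.2 and VC ID 100 are illustrative):

interface GigabitEthernet0/1
 description Attachment circuit facing the DC aggregation
 xconnect 192.0.2.2 100 encapsulation mpls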
47 EoMPLS Usage with DCI - End-to-End Loop Avoidance Using Edge-to-Edge LACP. An active PW crosses the MPLS core between the DC1 and DC2 aggregation layers. BPDU filtering maintains STP domain isolation; storm control protects the data plane. Configuration applied at the aggregation layer on the logical port-channel interface:

interface port-channel70
 description L2 PortChannel to DC 2
 spanning-tree port type edge trunk
 spanning-tree bpdufilter enable
 storm-control broadcast level 1*
 storm-control multicast level x

*Values to be tuned per deployment.
48 Multi-Point Topologies - What Is VPLS? One extended bridge domain built using: VFI = Virtual Forwarding Instance (VSI = Virtual Switch Instance); PW = Pseudo-Wire; SVI = Switch Virtual Interface. On each PE, the VLAN maps to an SVI, which is cross-connected to a VFI; the VFIs are fully meshed with PWs across the MPLS core. MAC address table population is pure learning-bridge.
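A minimal IOS-style sketch (Catalyst 6500 syntax) tying the three constructs together; the VPN ID 100 and peer addresses are illustrative:

l2 vfi VPLS-100 manual
 vpn id 100
 neighbor 192.0.2.2 encapsulation mpls
 neighbor 192.0.2.3 encapsulation mpls
!
interface Vlan100
 description SVI cross-connected into the VFI
 xconnect vfi VPLS-100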
49 VPLS L2 Signalling and Forwarding (aka Transparent Bridging). A VSI/VFI operates like a conventional L2 switch. [Diagram: in VPN 1, one VFI learns MAC A on its local port Ea and MAC B over the PW with VCID 111; the peer VFI learns the reverse (A over VCID 111, B on its local port Eb); a third site reaches A over VCID 333. Each VFI simply associates MACs with local ports or remote VCIDs.]
50 VPLS Cluster Solutions. Using a clustering mechanism, two devices operate in fusion as one: VSS (Sup720 or Sup2T) or ASR 9000 nV virtual cluster. One control plane / two data planes, with the dual node acting as a single device: native redundancy (SSO across chassis), native load balancing, and the ability to use a port-channel as the attachment circuit.
51 Cluster VPLS Redundancy - Making Use of Clustering (with mpls ldp graceful-restart). If the slave node fails, the PW state is unaffected. If the master node fails, PW forwarding is ensured via SSO, and PW state is maintained on the other side using LDP graceful restart; edge EtherChannel convergence is sub-second. Traffic goes directly to the working VSS node and exits directly from the egress VSS node (failover and fallback are measured in milliseconds). Quad-sup SSO for Sup2T since 1QCY13.
52 VPLS Cluster - Deployment Consideration: ECMP Core Requirements. Build a symmetric core with two ECMP paths between each VSS (DC-1 to DC-2).
53 Nexus 7000 Data Centre Interconnect with VPLS (since 6.2(2)). The primary N7K WAN edge has the VLAN tied to the active VFI, with neighbours to the remote DC sites; the secondary N7K WAN edge has the VLAN tied to the standby VFI, with neighbours to the remote DC sites. The access layer attaches via a Layer 2 switchport trunk port-channel into a Virtual Port Channel (vPC), with an MCT between the two edge devices. MCT = Multi-Chassis Trunk interface; VFI = Virtual Forwarding Instance.
54 Nexus 7000 Data Centre Interconnect with VPLS - Sample Configuration. Each PE is the primary VFI owner for half of the VLANs (PE 1 for the EVEN VLANs, PE 2 for the ODD VLANs), so the PWs load-share per VLAN. (Virtual Port Channel (vPC) configuration not shown.)

PE 1:

vlan 80-81
!
vlan configuration 80
  member vfi vpls-80
!
vlan configuration 81
  member vfi vpls-81
!
l2vpn vfi context vpls-80
  vpn id 80
  redundancy primary
  member encapsulation mpls
  member encapsulation mpls
!
l2vpn vfi context vpls-81
  vpn id 81
  redundancy secondary
  member encapsulation mpls
  member encapsulation mpls
!
interface port-channel50
  switchport mode trunk
  switchport trunk allowed vlan 80,81

PE 2: identical, with the redundancy roles swapped (vpls-80 "redundancy secondary", vpls-81 "redundancy primary").
55 E-VPN Main Principles. Control-plane distribution of customer MAC addresses using BGP; each PE continues to learn C-MACs over its attachment circuits. When multiple PEs announce the same C-MAC, hash to pick one. MP2MP/P2MP LSPs for multicast traffic distribution; MP2P LSPs (like L3VPN) for unicast distribution. A full mesh of PWs is no longer required!
56 Solution Overview (draft-ietf-l2vpn-pbb-evpn). DF election with VLAN carving: prevents duplicate delivery of flooded frames; uses the BGP Ethernet Segment route; performed per segment rather than per (VLAN, segment); non-DF ports are blocked for flooded traffic (multicast, broadcast, unknown unicast). Split horizon for the Ethernet segment: prevents looping of traffic originated from a multi-homed segment; performed based on the B-MAC source address rather than an ESI MPLS label. Aliasing: PEs connected to the same multi-homed Ethernet segment advertise the same B-MAC address; remote PEs use these MAC route advertisements for aliasing, load-balancing traffic destined to C-MACs reachable via a given B-MAC.
57 Supported Access Topologies: Single-Homed Device (SHD); Single-Homed Network (SHN); Dual-Homed Device (DHD) with active/active per-flow load balancing; Dual-Homed Device (DHD) with active/active per-service load balancing; Dual-Homed Network (DHN) with active/active per-service load balancing (MST-AG / REP-AG, MST on N-PE, G.8032, P-mLACP). Attachment via physical or bundle interfaces towards the PEs on the MPLS core.
58 PBB-EVPN Principle (Provider Backbone Bridge) - BGP MAC Routing. PE1 and PE2 share the same virtual B-MAC (X) for the same multi-homed segment; in the case of mLACP, each PE can learn the virtual B-MAC automatically from Switch 1's LACP system MAC. Both PE1 and PE2 advertise the virtual B-MAC to the remote PEs via BGP (iBGP L2-NLRI, next-hop PE1/PE2, B-MAC X, label 100). The remote PE3 and PE4 receive the BGP routes and install the virtual B-MAC in their L2FIB tables; BGP policy can be used to select active/active or primary/standby paths. PE3 and PE4 do the same for their segment, so PE1 and PE2 learn the remote virtual B-MAC Y.
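To make the pieces concrete, a minimal IOS XR-style sketch of a PBB-EVPN edge on an ASR 9000 (the EVI 1000, I-SID 1000, ESI value and bundle numbering are all illustrative):

evpn
 evi 1000
 interface Bundle-Ether25
  ethernet-segment
   identifier type 0 01.00.00.00.00.00.00.00.05
!
l2vpn
 bridge group PBB
  bridge-domain EDGE
   interface Bundle-Ether25.100
   pbb edge i-sid 1000 core-bridge CORE
  bridge-domain CORE
   pbb core
    evpn evi 1000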
59 PBB-EVPN Principle (Provider Backbone Bridge) - Packet Forwarding. (1) Host 3 sends a packet to Host 1 (destination C-MAC = M1); S3 does per-flow load balancing and chooses PE4. (2) PE4 looks up its C-MAC table, finds M1 associated with B-MAC X and two ECMP next hops (PE1 and PE2), does per-flow load balancing and chooses PE1. (3) PE1 receives the packet, decapsulates the PBB header and learns the association of C-MAC M3 with the remote B-MAC. (4) PE1 looks up its C-MAC table and forwards the packet to S1.
60 Overlay Transport Virtualisation (OTV) in a Nutshell. OTV is a MAC-in-IP method that extends Layer 2 connectivity across a transport network infrastructure. OTV supports both multicast and unicast-only transport networks and uses IS-IS as the control protocol. OTV offers built-in loop prevention and site independence. OTV doesn't rely on circuit state; it's a point-to-cloud model. OTV doesn't rely on flood-and-learn; it's a push model that distributes host MAC reachability, preserving the failure boundary.
61 Overlay Transport Virtualisation - OTV Control Plane. Neighbour discovery and adjacency over multicast (Nexus 7000 and ASR 1000) or unicast (Adjacency Server mode: Nexus 7000 (5.2) and ASR 1000 (3.9S)). OTV proactively advertises/withdraws MAC reachability (control-plane learning). IS-IS is the OTV control protocol; no IS-IS-specific configuration is required. [Diagram: (1) new MACs A, B, C are learned on VLANs 100/300 in the West site; (2) OTV updates are exchanged via the L3 core; (3) the East and South sites install those MACs against the West edge device's join address IP A.]
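For orientation, a minimal NX-OS sketch of an OTV edge device, assuming a multicast-enabled core (the site identifier, VLAN numbers and group addresses are illustrative):

feature otv
otv site-identifier 0x1
otv site-vlan 99
!
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-150
  no shutdown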
62 OTV Overview - Inter-Site Packet Flow. (1) Server 1 in the West site sends a frame to MAC 3. (2) A Layer 2 lookup in the West edge device's MAC table resolves MAC 3 to the East join address IP B. (3) The frame is encapsulated with source IP A and destination IP B, and (4) routed across the transport infrastructure. (5) The East edge device decapsulates the frame, and (6) a normal Layer 2 lookup (MAC 3 -> local Ethernet port) delivers it to Server 3.
63 New Features. Starting 6.2(2): F1/F2E used as internal interfaces; selective unicast flooding; dedicated data broadcast group; OTV VLAN translation; OTV fast convergence. Starting 6.2(6)*: tunnel depolarisation with secondary IP. Loopback join-interface: targeted future. (*F3: 6.2(8) recommended.)
64 New Features - 6.2(2) and above. F1 or F2E line cards can be mixed with M-Series in the VDC and used as OTV internal interfaces; F3 (6.2(6) and above) can be used as OTV internal and join interfaces. A unicast MAC address can be statically configured to flood across OTV, and a dedicated broadcast group allows separation and prioritisation of control traffic vs. data-plane broadcast traffic:

otv flood mac AC9.00A2 vlan 202
!
interface Overlay1
  otv join-interface port-channel100
  otv broadcast-group
  otv control-group
  otv data-group /24
  otv extend-vlan
65 Improving OTV Design - Physical Join Interface. Traditional deployment of OTV "on a stick": each OTV edge device has a single physical join interface, internal interfaces towards the aggregation layer, and the SVIs (default gateway) remain in the aggregation layer.
66 Improving OTV Design - Loopback Join Interface. The source interface becomes a logical loopback address and takes over from the original physical join interface. All available Layer 3 uplinks joining the OTV overlay can then be used to reach the destination DC: the hash is calculated with the source interface as the source address, and a secondary interface can be leveraged to better distribute the traffic over multiple Layer 3 paths.
67 Placement of the OTV Edge Device - Option 1: OTV in the DC Aggregation. The OTV VDC deployment is replicated in each POD, providing inter-POD and inter-DC LAN extension over a pure L3 core. Isolated STP domain in each POD: STP is filtered across the OTV overlay by default, with an independent STP root bridge per POD. vPC faces the access layer devices for a loop-free topology inside each POD; the loop-free topology per POD can also be built with FabricPath.
68 Placement of the OTV Edge Device Option 2 - OTV at the Aggregation with L2-L3 Boundary on External Firewalls The Firewalls host the Default Gateway No SVIs at the Aggregation Layer No Need for the OTV VDC L3 L2 Def GWY OTV Core OTV Def GWY Aggregation Firewall Firewall
69 DCI Convergence Summary. Robust HA is the guiding principle. Common failures: 1. core failures - multipath routing (or TE FRR), sub-second; 2. join interface failures - link aggregates across line cards, sub-second; 3. internal interface failures - multipath topology (vPC) and LAGs, sub-second; 4. edge device component failures - HW/SW resiliency, sub-second. Extreme failures (unlikely): core partition, site partition, or edge device down require OTV reconvergence, < 5 s.
70 Locator/ID Separation Protocol - What do we mean by Location and Identity? Today's IP behaviour (Loc/ID overloaded semantic): a device's IPv4 or IPv6 address represents both identity and location, so when the device moves it gets a new address, i.e. a new identity and a new location. LISP behaviour (Loc/ID split): the device's IPv4 or IPv6 address represents identity only; when the device moves, it keeps its address and the same identity - only the location changes.
71 A LISP Packet Walk - How does LISP operate? (1) The client S resolves D.abc.com via DNS to D's address (its EID). (2) The ITR at the client site queries the mapping system for the EID prefix. (3) The mapping entry returns the locator set, e.g. the West-DC (D1) and East-DC (D2) ETRs, each with priority 1 and weight 50 - this EID-to-RLOC policy is controlled by the destination site. (4) The ITR encapsulates the packet to the selected ETR RLOC; the ETR decapsulates and delivers it to D in the West-DC.
72 A LISP Packet Walk - After moving the host. The host D moves to the East-DC and keeps its EID. The move is detected and the mappings are updated: the East-DC ETR registers the EID with the mapping system, the stale West-DC mapping is withdrawn, and the ITR is redirected so traffic for D is now encapsulated to the East-DC RLOC.
73 A LISP Packet Walk - How about non-LISP sites? Traffic from a non-LISP site is attracted by a Proxy ITR (PITR), which performs the EID-to-RLOC lookup against the same mapping entry (the EID prefix with locators D1 and D2, priority 1, weight 50) and encapsulates the traffic to the appropriate ETR (West-DC or East-DC).
74 LISP Roles and Address Spaces - What are the Different Components Involved?
LISP roles: Tunnel Routers (xTRs) - edge devices performing encap/decap (Ingress/Egress Tunnel Routers, ITR/ETR); Proxy Tunnel Routers (PxTRs) - coexistence between LISP and non-LISP sites (PITR/PETR); EID-to-RLOC Mapping DB - RLOC-to-EID mappings distributed across multiple Map Servers (MS).
Address spaces: EID = Endpoint Identifier (host IP or prefix); RLOC = Routing Locator (IP address of routers in the backbone). [Diagram: the mapping DB associates EID prefixes (a.a.a.0/24, b.b.b.0/24, c.c.c.0/24, d.d.0.0/16) with RLOCs (w.x.y.1, x.y.w.2, z.q.r.5).]
75 LISP Host-Mobility. Needs: global IP mobility across subnets; optimised routing across extended subnet sites. LISP solution: automated move detection on the xTRs; dynamic update of the EID-to-RLOC mappings; traffic redirection on the ITRs or PITRs. Benefits: direct path (no triangulation); connections maintained across the move; no routing reconvergence; no DNS updates required; transparent to the hosts; global scalability (cloud bursting); IPv4/IPv6 support. A configuration sketch follows below.
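A minimal NX-OS sketch of LISP host mobility on a West-DC xTR (the EID prefix, RLOC, map-server address, key and names are all illustrative):

feature lisp
ip lisp itr-etr
ip lisp database-mapping 10.1.1.0/24 172.16.1.1 priority 1 weight 50
ip lisp itr map-resolver 172.16.100.100
ip lisp etr map-server 172.16.100.100 key lisp-key
!
lisp dynamic-eid ROAMING-SUBNET
  database-mapping 10.1.1.0/24 172.16.1.1 priority 1 weight 50
  map-notify-group 239.1.1.2
!
interface Vlan100
  lisp mobility ROAMING-SUBNET

The dynamic-EID stanza is what detects a VM arriving on the extended subnet and triggers the mapping update described above.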
76 Host-Mobility Scenarios Moves Without LAN Extension Moves With LAN Extension LISP Site XTR Non-LISP Site LISP Site XTR Mapping DB Internet or Shared WAN DR Location or Cloud Provider DC LAN Extension IP Network Mapping DB LISP-VM (XTR) LISP-VM (XTR) West-DC East-DC West-DC East-DC IP Mobility Across Subnets Routing for Extended Subnets Disaster Recovery Cloud Bursting Active-Active Data Centres Distributed Clusters Application Members in One Location 76 Application Members Distributed (Broadcasts across sites)
77 Summary - Where to Deploy LISP and OTV Roles and Places in the Network XTR: Branch LISP Sites Customer-managed/owned SP-Managed CE service PXTR: Border Transit Points Customer backbone routers Customer co-location SP provided router/service LISP-VM XTR: Aggregation Data Centre Customer-managed/owned LISP Site XTR Data Centre IP Backbone LISP-VM (XTR) Internet / WAN Backbone DC-Aggregation Non-LISP Sites PXTR Mapping DB Mapping Servers/Routers: Distributed Across Data Centres Customer-managed/owned SP provided service OTV DC-Access OTV: Aggregation Data Centre Customer-managed/owned West-DC East-DC RLOC EID LISP Encap/Decap 77
78 LISP References LISP Information Cisco LISP Site. (IPv4 and IPv6) Cisco LISP Marketing Site... LISP Beta Network Site or LISP DDT Root... IETF LISP Working Group... LISP Mailing Lists Cisco LISP Questions IETF LISP Working Group LISP Interest (public). LISPmob Questions
79 Agenda Active-Active (A/A) Data Centre: Market & Business Drivers Terminology, Criticality levels and Solutions Overview A/A Data Centre Design Considerations: Storage Extension Data Centre Interconnect (DCI) L2 & L3 scenarios A/A Metro Data Centres Designs - Network Services and Applications (Path optimisation) Cisco ACI and Active / Active Data Centre Q&A 79
80 Example: 3-Site Data Centre Interconnect Model 80
81 Metro Virtual Data Centre. A high-availability application and data solution architecture that leverages a dual data centre physical infrastructure. Addresses all levels of the data centre stack: physical layer, network layer, server platform resources, storage resources, application networking services, and data tier structure. No physical single points of failure; optimised use of capacity through virtualisation; management and interaction of applications in a paired data centre environment. Disaster avoidance and prevention by proactively migrating virtual machines seamlessly, with NO interruption. Supports disaster recovery capability beyond dual data centre resiliency. Active-active capability and workload rotation accelerate incident response time and increase confidence. Capable of no data loss within the Metro (synchronous replication, RPO = 0).
82 Active / Active Metro Design. Cool! These sites are operating as ONE logical data centre. What if they are both ACI fabrics on the Metro sites? To come in the 2nd half of this session.
83 Live Workload Mobility - Requirements for the Active-Active Metro Design. Move an active virtual workload across Metro data centres while maintaining stateful services. Business continuity use cases for live mobility: most business-critical applications (lowest RPO/RTO); live workload migrations; operations rebalancing / maintenance / consolidation of live workloads; disaster avoidance of live workloads; application geo-clusters spanning Metro DCs. Hypervisor tools for live mobility: VMware vMotion or Hyper-V Live Migration; stretched clusters across Metro DCs; host affinity rules to manage resource allocation; distributed vCenter or System Center across Metro DCs. Metro DC infrastructure to support live workload mobility - Network: Data Centre Interconnect and IP path optimisations; virtual switches distributed across the Metro. Services: maintain multi-tenant containers; maintain stateful services for active connections; minimise traffic tromboning between Metro DCs; support single-tier and multi-tier applications. Storage: storage extended across the Metro with synchronous data replication; distributed virtual volumes; Hyper-V Shared Nothing Live Migration (storage agnostic).
84 Example Active-Active Metro Topology
85 Extended Storage Example: Multi-Hop FCoE using NetApp Fabric MetroCluster Optical Network
86 Nexus 1000V Extensions Across Geographies. VSMs and VEMs can span Metro distances for enhanced availability. VSM extended across data centres: supports splitting the active and standby Nexus 1000V Virtual Supervisor Modules (VSMs) across two data centres to implement cross-DC clusters and VM mobility while ensuring high availability. VEM support across Metro distance: VSMs in the data centre can support VEMs at remote branch offices.
87 Live Workload Migration Baseline Configuration for a 3-Tier Application The Application, Data, and Services are Operating in Data Centre 1 (Microsoft SharePoint, SQL example) 3-Tier Application (Web, App, Database) in a Palladium Container in DC-1 Public Zone for Web Tier Protected Private Zone for App Tier and Database Tier Mix of Physical and Virtual Services (FW, SLB, VSG) Application, Data, and all Services operating in DC-1 WEB LAN extensions, extended ESXi clusters, and Synchronous Storage Replication between Metro sites APP DATABASE Extended Host Clusters Synchronous Storage
88 Step 1 of a Live Workload Migration: Live migrate the Application, Data, and Virtual Services from Data Centre 1 to Data Centre 2. Live vMotion all three tiers of the application (Web, App, Database) to new hosts in DC-2. Virtual services follow the VMs to DC-2, using the port profile and security profile (N1Kv, VSG). Layer 2 extensions permit tromboning back to DC-1 to maintain stateful services on the physical appliances (FW, SLB). The Palladium container is extended between the Metro sites, maintaining security, tenancy, QoS, IP addressing, and user connections. Live migration with minimal disruption (2 seconds or less) for IP/MAC learning of the new hosts over the network extensions, plus the vMotion quiesce.
89 Step 2 of a Live Workload Migration: Cut over to a new Network Container in DC-2 - the Application, Data, and all Services are moved to Data Centre 2. Migrate the physical services (FW, SLB) to a new Palladium container in DC-2, and redirect external users to DC-2 using a routing update (or LISP in the future). The application, data, and all services are now operating in DC-2. Layer 2 extensions allowed the preservation of IP addressing for apps and services during the migration; the network container move completes with minimal disruption (sub-second).
90 Agenda Active-Active (A/A) Data Centre: Market & Business Drivers Terminology, Criticality levels and Solutions Overview A/A Data Centre Design Considerations: Storage Extension Data Centre Interconnect (DCI) L2 & L3 scenarios A/A Metro Data Centres Designs - Network Services and Applications (Path optimisation) Cisco ACI and Active / Active Data Centre Q&A 90
91 Network Service Placement for Metro Distances - A/S stateful devices stretched across two locations, nominal workflow. Historically this has been well accepted for most Metro Virtual DC (twin-DC) designs. Network services are usually active in the primary DC, with a distributed active/standby pair of FW & SLB devices across the two locations; an additional VLAN is extended for state synchronisation between peers (FW FT and session synch, SLB session synch); source NAT is used for the SLB VIP. Note: with a traditional clustered pair, this scenario is limited to 2 sites. A failover configuration sketch follows below.
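As an illustration of the state-synchronisation piece, a minimal Cisco ASA active/standby sketch in which the failover and state links ride dedicated extended VLANs (the interface names and addresses are illustrative):

failover lan unit primary
failover lan interface FOLINK GigabitEthernet0/3
failover link STATELINK GigabitEthernet0/4
failover interface ip FOLINK 10.99.1.1 255.255.255.0 standby 10.99.1.2
failover interface ip STATELINK 10.99.2.1 255.255.255.0 standby 10.99.2.2
failover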
92 Network Service Placement for Metro Distances - Ping-pong impact with A/S stateful devices stretched across two locations. Historically limited to network services and HA clusters offering stateful failover and fast convergence. Not optimal, but usually accepted to work in degraded mode with predictable mobility of network services over short distances (FW failover to the remote site; source NAT for the SLB VIP). Consider +/- 1 ms per round trip for 100 km; in a secured multi-tier software architecture, it is possible to measure 10+ round trips from the initial client request to the result. Interface tracking can optionally be enabled to keep active security and network services on the same site.
93 Network Service Placement for Metro Distances - Additional ping-pong impact with IP mobility between two locations. The network team is not necessarily aware of application/VM mobility, leading to an uncontrolled degraded mode with unpredictable mobility of network services and applications: the FW fails over to the remote site, the front-end server farm moves to the remote site, and source NAT for the SLB VIP maintains the return path through the active SLB. A partial move of a server farm is not optimised - understand and identify the multi-tier frameworks.
94 Network Service Placement for Metro Distances - Stateful devices and trombone effect for IP mobility between two locations. There is limited coordination between the server team (VM mobility), the network team (HSRP filtering) and the services team (FW, SLB, IPS...), and the ping-pong effect of active service placement may impact performance. It is preferred to migrate the whole multi-tier framework and enable FHRP filtering to reduce the trombone effect: FHRP filtering is ON on the front-end and back-end side gateways, while source NAT for the SLB VIP maintains the return path through the active SLB. Understand and identify the multi-tier frameworks.
95 Network Service Placement for Metro Distances - Intelligent placement of network services based on IP mobility localisation. Improving relations between siloed organisations (server/app, network/HSRP filtering, service & security, storage) increases workflow efficiency. Reduce tromboning with active service placement: move the FW context associated with the application of interest, and use interface tracking to keep the stateful devices in the same location when possible. Return traffic stays symmetrical via the stateful devices and source NAT. Intra-DC path optimisation is almost achieved; however, ingress path optimisation may still be required.
96 Network Service Placement for Long Distances - Active/standby network services per site with extended LAN (hot live migration) and IP localisation. The subnet is extended with the VLAN; source NAT on each FW (routed mode) is mandatory; there is no configuration or state synchronisation (per-box, per-site configuration). Ingress path optimisation can be initiated to reduce the trombone effect due to active service placement; with ingress path optimisation, currently active TCP sessions are interrupted and re-established on the remote site. FW and SLB maintain stateful sessions per DC, with no real limit on the number of DCs. Granular migration is possible only using LISP, IGP Assist or RHI (if the Enterprise owns the L3 core). Move the whole application framework (front end and back end), and extend only the VLANs of interest.
97 Can the Cisco ASA Firewall help here? Current statement: ASA clustering spanned across multiple locations is now supported in all modes: 9.1(4) - individual interface mode (L3), ASA in routed mode; 9.2(1) - spanned interface mode (L2), north-south insertion (firewall transparent mode), with NO extended VLAN used by the cluster LACP; 9.3(2) - spanned interface mode (L2), east-west insertion (firewall transparent mode). Not currently supported: spanned interface mode with the ASA in firewall routed mode. A cluster bootstrap sketch follows below.
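A minimal ASA sketch of the cluster bootstrap on one unit, with the cluster control link on Port-channel10 (the names, addresses and priority are illustrative; member interfaces for the port-channel are omitted):

interface Port-channel10
 description Cluster Control Link (CCL)
!
cluster group DC-CLUSTER
 local-unit ASA-DC1-1
 cluster-interface Port-channel10 ip 10.255.1.1 255.255.255.0
 priority 1
 health-check holdtime 3
 enable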
98 ASA Clustering (9.0) - TCP SYN cookies with asymmetrical traffic workflows. It is possible that the SYN/ACK from the server arrives at a non-owner unit before the connection is built at the director. (1) As the owner unit processes the TCP SYN, it encodes the owner information into the SYN cookie (within the sequence number) and updates the director over the Cluster Control Link (CCL); (2) it forwards the SYN packet, encoded with the cookie, towards the server. (3) When the SYN/ACK arrives at a non-owner unit, (4) that unit decodes the owner information from the SYN cookie and (5) forwards the packet to the owner unit over the CCL, without having to query the director.
99 Single ASA Cluster Stretched Across Multiple DCs - Case 1: LISP Extended Subnet Mode with ASA clustering (stateful live migration with LAN extension). Symmetric establishment is achieved via the CCL: current active sessions are maintained statefully, and new sessions are optimised dynamically. With up to 10 ms max one-way latency, extend the ASA cluster for config and state synch (CCL extension plus data VLAN extension). Packet walk after the app has moved to DC-2: (1) the end user sends a request to the app; (2) the ITR intercepts the request and checks the localisation; (3) the mapping system relays the request to the ETR of interest, which replies that subnet A is located at ETR DC-1, so the ITR encapsulates the packet and sends it to RLOC ETR DC-1; (4) LISP multi-hop notifies the ETR in DC-2 about the move of the app; (5) meanwhile, ETR DC-2 informs the Map Server about the new location of the app; (6) the Map Resolver updates ETR DC-1; (7) ETR DC-1 updates its table (App: Null0); (8) when the ITR next sends traffic to ETR DC-1, ETR DC-1 replies with a Solicit Map Request, and the ITR issues a new Map Request and redirects the traffic to ETR DC-2.
100 Recommendations. 1. Layer 2 extensions represent a challenge for optimal routing. 2. Consider the implications of stretching network and security services over multiple DCs. 3. For migration over long distances, when possible enable network path optimisation for: client-to-server communication (ingress optimisation); server-to-client communication, for symmetrical return traffic (egress optimisation); server-to-server communication (bandwidth and latency optimisation). 4. Otherwise, provision enough bandwidth (2x the need) and compute the total latency due to ping-pong workflows. 5. When moving a VM or tier, move the whole framework. 6. Network and security policies must be maintained. 7. Consider FW clustering stretched across DCs to reduce hair-pinning workflows.
101 Agenda Active-Active (A/A) Data Centre: Market & Business Drivers Terminology, Criticality levels and Solutions Overview A/A Data Centre Design Considerations: Storage Extension Data Centre Interconnect (DCI) L2 & L3 scenarios A/A Metro Data Centres Designs - Network Services and Applications (Path optimisation) Cisco ACI and Active / Active Data Centre Q&A 101
102 ACI Introduces Logical Network Provisioning of Stateless Hardware with Application Network Profile (ANP) Web App DB Outside (Tenant VRF) QoS Filter QoS Service QoS Filter ACI Fabric Application Policy Infrastructure Controller 102
103 Fabric Infrastructure - Important Concepts: Inside and Outside. The outside EPG is associated with external network policies (OSPF, BGP, peering). Forwarding policy for inside EPGs is defined by the associated bridge domain network policies. Locations for endpoints that are outside the fabric are learned via redistributed routes sourced from the externally peered routers (network-level granularity); locations for endpoints that are inside the fabric are found via the proxy mapping DB (host-level granularity).
104 Fabric Hierarchy Tenant Private Network Private Network Bridge Domain Bridge Domain Application Profile Application Profile Application Profile EPG EPG EPG EPG EPG EPG EP EP EP EP EP EP EP EP EP EP EP EP
105 ACI Connection to Outside Network - Relationship to the rest of the components (connectivity view). Extended L2 connections towards the L2 network for EPG-Web and for Bridge Domain 2; L3 connections towards the MPLS/IP network for the private networks Sophia (VRF 10) and Antipolis (VRF 20), each containing bridge domains with their application EPGs. A bridge domain can operate in non-IP / Layer 2 flooding mode, or as an IP bridge domain without flooding; the bridge domain offers the default gateway on each leaf of interest.
106 ACI Connection to Outside Network - Reference topology and options. L3 connection to the outside (Internet access or DCI/L3 for DRP): L3 port, sub-interface or SVI for tenants connecting to the existing DC network; VRF-lite for tenant isolation, with one sub-interface per tenant (VRF); OSPFv2 (NSSA), iBGP and static routes at FCS. L2 connection to the outside (DCI/L2 for DRP or DAP): extend the L2 domain outside the ACI fabric (brownfield migration, L2 extension across PODs/sites); vPC and STP connection; standard data-plane (VLAN) dot1q trunks; bind external network entities (IP, subnet, .1q) to an ACI fabric EPG. Any leaf switch can be a border leaf. WAN edge focus: ASR 9000, Nexus 7000, ASR 1000. Existing principles of inbound/outbound traffic flows, security, and DNS/GSS still apply.
107 Extend the L2 Domain Out of the ACI Fabric - 3 options (2 available today). 1. Manually assign a port to a VLAN which is in turn mapped to an EPG; this extends the EPG beyond the ACI fabric (e.g. a VLAN 10 trunk mapped to the WEB EPG over a Layer 2 DCI such as OTV, VPLS, xWDM or dark fibre). 2. Create an L2 connection to the outside network; this extends the bridge domain beyond the ACI fabric and allows contracts between EPGs inside ACI and EPGs outside of ACI. 3. Remote VTEP (future).
108 ACI L2 Connection to Outside Network - Integration with a brownfield DC, and for migration purposes. Bind a physical leaf/interface/VLAN ID to an external EPG: 1. extend the bridge domain beyond the ACI fabric by mapping the leaf/interface/VLAN to the external EPG; 2. apply a contract (filter, QoS...); 3. trunk the interface VLAN to the outside; 4. establish L2 adjacency between the bridge domain and the outside of the fabric; 5. traffic is policed in one or both directions according to the contract. Via the extended bridge network configuration, map the interface VLAN to an external EPG - one external EPG per bridge domain; Ethernet: interface, VLAN, VXLAN (future); standard dot1q VLAN. Use cases: extend to a legacy access POD; migration purposes; UCS Fabric Interconnect attachment.
109 Single Fabric Scenarios - Single Site (Single Pod). Single fabric + single site = a single operational zone (network, VMM cluster, storage, FW/LB are all treated as one zone). Can support multiple OpenStack Availability Zones (multiple VMM and compute zones).
110 Single Fabric Scenarios - Multi-Site (Stretched) Fabric. Sites/rooms A and B interconnect partially meshed leaf nodes within a 10 msec round trip. Single fabric + multi-site use cases: multi-building, cross-campus and metro distances (a dual-site cross-metro design is a very common topology); multi-floor and multi-room data centres (where cabling restrictions prevent a full mesh).
111 Single Fabric Scenarios - Multi-Site (Stretched) Fabric. Single fabric + multi-site = a single operational zone (VMM, storage, FW/LB are all treated as one zone), e.g. a single vCenter with synchronised storage. Interconnect between sites: direct fibre (40G), DWDM (40G or multiple 10G), or pseudo-wire (10G or 40G).
112 Single Fabric Scenarios - Multi-Site (Stretched) Fabric. Single APIC cluster (leverages 3 active + 1 standby APIC for site failure scenarios); single operational domain (changes are immediately propagated to all nodes in the fabric); single infrastructure with a single VRF, bridge domain and EPG name space (one network); WAN links shared.
113 Multi-Fabric Scenarios. Separate APIC controller domains for Fabric A and Fabric B. Multi-fabric use cases: multiple fabrics within a single site (includes multi-pod, multi-floor, multi-room data centres); multi-building, cross-campus and metro distances (the majority of larger Enterprise and SP/Cloud customers require a dual-site active/active in-region design); multiple fabrics across out-of-region data centres (almost all customers leverage some form of out-of-region data centre).
114 Multi-Fabric Current Options - L2/L3 Classification. Site A: L3_Outside, classified based on network/mask; Site B: L2_Outside, classified based on VLAN. Classify traffic arriving from a remote site (fabric) based on the incoming VLAN or a Layer 3 prefix (LPM), with the application tiers (Web1/Web2, App1/App2, db1/db2) distributed across the sites.
115 Multi-Fabric Current Options - External Synchronisation of Fabric Policy. A symmetrical XML configuration will maintain consistent operation between fabrics; externally triggered export and import between fabrics is another option to maintain consistency.
116 Multi-Fabric Current Options - Multiple Domains. Multiple APIC clusters (N+1 redundancy for each fabric); multiple operational domains (changes are immediately propagated to all nodes within each fabric, with external tools correlating across fabrics); replication of VRF, bridge domain and EPG name spaces (multiple networks); Layer 2 and/or Layer 3 interconnect between fabrics; VMM clusters can be extended if not integrated with APIC.
117
118 Multi-site Abstraction and Portability of Network Metadata and Docker-based Applications ACI Application Network Profile ACI Fabric DC 01 ACI Fabric DC 02 Data Centre 01 Data Centre 02 Docker-based Cloud Application Docker-based Cloud Application
119 Agenda Active-Active (A/A) Data Centre: Market & Business Drivers Terminology, Criticality levels and Solutions Overview A/A Data Centre Design Considerations: Storage Extension Data Centre Interconnect (DCI) L2 & L3 scenarios A/A Metro Data Centres Designs - Network Services and Applications (Path optimisation) Cisco ACI and Active / Active Data Centre Q&A 119
120 Social Media Collateral for Cisco DCI CVD. Cloud Business Continuity and Workload Mobility content: blogs on Cisco.com (15,000+ views); TechWise TV show; Design Guide (5,000+ downloads). Design Guide, Business Continuity and Workload Mobility for Cloud: s-continuity-and-workload-mobility-forthe-private-cloud-cisco-validated-designpart-1/ TechWise TV, Business Continuity and Workload Mobility: prise/data_center/vmdc/dci/1-0-1/dg/dci.html
121 Q & A
122 Complete Your Online Session Evaluation. Give us your feedback and receive a Cisco Live 2015 T-Shirt! Complete your Overall Event Survey and 5 Session Evaluations: directly from your mobile device on the Cisco Live Mobile App; by visiting the Cisco Live Mobile Site; or at any Cisco Live Internet Station located throughout the venue. T-Shirts can be collected in the World of Solutions on Friday 20 March, 12:00pm - 2:00pm. Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations.
123
124