Deploy Application Load Balancers with Source Network Address Translation in Cisco Programmable Fabric with FabricPath Encapsulation




White Paper

Deploy Application Load Balancers with Source Network Address Translation in Cisco Programmable Fabric with FabricPath Encapsulation

Last Updated: 5/19/2015

© 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

Contents

Introduction
Target Audience
Prerequisites
Placing the Application Load Balancer in the Fabric
Choosing the Load Balancer Deployment Type
Deployment Scenario 1: Application Load Balancer with Virtual IP Address Directly Attached to Fabric
    Data Traffic Path in the Fabric
    Configuring Autoconfiguration Profiles
Deployment Scenario 2: Application Load Balancer with Host Route Injection and Dynamic Routing between Load Balancer and Fabric
    Data Traffic Path in the Fabric
    Configuring Autoconfiguration Profiles
Deployment Scenario 3: Application Load Balancer with Static Routing Between Load Balancer and Fabric
    Data Traffic Path in the Fabric
    Configuring Autoconfiguration Profiles
Deployment Scenario 4: Shared Hardware-Accelerated Application Delivery Controller with VIP Address Directly Attached to Fabric
    Data Traffic Path in the Fabric
    Configuring Autoconfiguration Profiles
Deployment Considerations for vPC+ Dual-Attached Appliances
Appendix: CLI Configurations for the Profiles

Introduction

The primary goal of this document is to provide guidelines about how to implement application load balancers in the data center using Cisco Programmable Fabric with FabricPath encapsulation. Readers will learn how to integrate load balancers into the Programmable Fabric using network autoconfiguration on Cisco Nexus Family switches. The network integration deployment scenarios covered in this document are not specific to any vendor and can accommodate any application load balancer available on the market today.

Target Audience

This document is written for network architects; network design, planning, and implementation teams; and application services and maintenance teams.

Prerequisites

This document assumes that the reader is already familiar with the mechanisms of the Programmable Fabric autoconfiguration feature. The reader should be familiar with mobility domain, Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP), network profile, and services-network profile configurations. Please refer to the following configuration guide for more information: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/dfa/configuration/b-dfa-configuration.html.

Placing the Application Load Balancer in the Fabric

Load-balancer appliances can be connected in several places in the network. Network autoconfiguration on Cisco Nexus switches allows dynamic instantiation of the necessary configuration on leaf nodes, so the recommended approach is to connect load balancers at the leaf level. Spine nodes do not contain any Classical Ethernet (CE) host ports and should not be used as service attachment points. With the dynamic autoconfiguration feature, load balancers, in both hardware and virtual machine form factors, can be connected anywhere in the network. Network utilization and forwarding can be optimized when the relevant service appliances are attached to a single pair of leaf nodes, referred to as the service leaf.

The service leaf is a logical role: it does not change the configuration or enable additional features on this set of leaf nodes, but serves simply as a central location for attaching service nodes. If your organization chooses to use a service leaf and needs to deploy virtual load balancers or other virtual appliances, certain guidelines apply. With automated or orchestrated virtual services deployment, the automation or orchestration tool must help ensure the placement of the deployed virtual services and virtual machines. For example, in Cisco UCS Director, you can specify the set of hypervisors on which virtual services can be created. Attaching this set of hypervisors to the service leaf helps ensure the location of the deployed services in the network.

Choosing the Load Balancer Deployment Type

In a network, a load balancer can be deployed in the following scenarios:

- One or more load balancers for a given tenant: Load balancers can be virtual or physical.
- One or more load balancers shared across multiple tenants: Here, the load balancer is most likely a hardware platform, and depending on the vendor and software, the load balancer may provide built-in virtualization features, such as traffic domains, Virtual Routing and Forwarding (VRF) functions, and virtual contexts.

- One or more hardware offload appliances shared across multiple tenants: This appliance would primarily be used for SSL offload or other resource-intensive applications.

This document focuses on deployment scenarios in which a given load balancer is used by a single tenant. The availability of multitenancy mechanisms allows you to easily expand the single-tenant scenario described here to multitenant deployments by using VLAN and VRF separation.

Deployment Scenario 1: Application Load Balancer with Virtual IP Address Directly Attached to Fabric

This scenario walks through a one-arm application load balancer. The virtual IP (VIP) address of the load balancer is directly attached to the switch and is visible to the fabric in much the same way as an end host. This very general and frequently seen use case is shown in Figure 1.

Figure 1. Logical Schema of One-Arm Load Balancer, Web Servers, and Clients Internal and External to Fabric

For this and all other deployment scenarios in this document, the load balancer is configured with Source Network Address Translation (SNAT) to facilitate the server return path through the load balancer. The load balancer is configured with one or more VIP addresses, depending on the application requirements. These addresses have their respective default gateways on the Leaf-1 node, which maintains the Address Resolution Protocol (ARP) cache for all directly attached IP addresses. Each VIP address entry in the ARP cache of the leaf node is then converted to a /32 IP address prefix and is distributed throughout the fabric using the fabric control plane (Multiprotocol Border Gateway Protocol [MP-BGP]). The default gateway for the VIP subnet is a switch virtual interface (SVI), which is automatically configured with the autoconfiguration feature of the fabric.

Network segments that host web servers and fabric-internal clients are configured with their respective autoconfiguration profiles and can use either the expedited forwarding or the traditional forwarding mode.
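The ARP-to-/32 conversion just described can be modeled with a short Python sketch. This is an illustration only, not Cisco code; the ARP cache contents and the helper function host_routes are assumptions chosen to match the example addressing used later in this scenario.

```python
from ipaddress import ip_network

# Hypothetical ARP cache of the leaf: (IP address, MAC address) adjacencies
# learned for the load balancer's directly attached VIP addresses.
ARP_CACHE = [
    ("172.16.10.10", "d867.d903.f345"),
    ("172.16.10.11", "d867.d903.f345"),
    ("172.16.10.12", "d867.d903.f345"),
    ("172.16.10.13", "d867.d903.f345"),
]

def host_routes(arp_cache):
    """Turn every ARP adjacency into a /32 host prefix, modeling how the
    leaf converts locally learned adjacencies into routes it can then
    advertise into the MP-BGP control plane."""
    return [ip_network(ip + "/32") for ip, _mac in arp_cache]

for prefix in host_routes(ARP_CACHE):
    print(prefix)
```

Each printed prefix corresponds to one host route that the fabric control plane would distribute to the other leaf nodes.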

Data Traffic Path in the Fabric

Clients that access the load-balanced application can be located within the fabric or external to the fabric. Figures 2 and 3 show how application data traffic is load-balanced in the network fabric in this deployment scenario.

1. Clients external or internal to the fabric request data from the web application, which can be reached through VIP1.
2. On the basis of the algorithm configured for the load balancer, the received request is prepared for forwarding to one of the real web servers on the configuration list. The load balancer performs a NAT operation: it swaps out the client's source IP address in the packet header and swaps in the VIP1 address. This process helps ensure that the return traffic passes back through the load balancer. The packet is then forwarded to the real server. In most deployment scenarios, VIP addresses and real web servers reside on different subnets.

Figure 2. Data Traffic Path in the Fabric: Client to Load Balancer to Web Server Path

3. When the load balancer receives the return traffic from the web server, the traffic is subjected to SNAT. This process helps ensure that the client maintains the TCP session of a current web transaction or the User Datagram Protocol (UDP) data stream of a given application.
4. The load balancer then forwards the return traffic back to the client.
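The translation steps above can be sketched in a few lines of Python. This is a simplified model of the SNAT behavior, not any vendor's implementation; the VIP, the real-server pool, and the round-robin selection are illustrative assumptions.

```python
from itertools import cycle

VIP = "172.16.10.10"                                # assumed VIP of the service
SERVERS = cycle(["172.16.20.11", "172.16.20.12"])   # assumed real-server pool

def forward(packet):
    """Client-to-server direction: pick a real server (round-robin here),
    rewrite the destination to it, and SNAT the client's source address to
    the VIP so the server's reply must return through the load balancer."""
    server = next(SERVERS)
    return {"src": VIP, "dst": server, "client": packet["src"]}

def reverse(session):
    """Server-to-client direction: undo the translation so the client sees
    the reply coming from the VIP it originally contacted."""
    return {"src": VIP, "dst": session["client"]}

session = forward({"src": "10.0.0.5", "dst": VIP})
reply = reverse(session)
print(session["dst"], "->", reply["dst"])
```

Because the server only ever sees the VIP as the source, its replies always traverse the load balancer, which is exactly the property SNAT provides in this design.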

Figure 3. Return Data Traffic Path in the Fabric: Web Server to Load Balancer to Client Return Path

Configuring Autoconfiguration Profiles

You can use the autoconfiguration feature of the Cisco Nexus switches and the related fabric to dynamically instantiate the necessary configuration wherever end hosts or service appliances are attached to the fabric. In this deployment scenario, the load balancer, as a service appliance, is configured so that the VIP address of the load-balanced service is in the same subnet as the physical load-balancer network interface. The VIP address is seen directly in the ARP table of the switch and redistributed to the fabric as a host /32 prefix. Moreover, there is no need for any static or dynamic routing adjacency in this case. The load balancer must simply be properly configured in the IP subnet (with the correct default gateway IP address). The autoconfiguration profile defaultnetworkuniversaltfprofile[1] will be used here to attach the load balancer in exactly the same way as you attach regular hosts. With the autoconfiguration feature, you can attach a load-balancer appliance from any vendor to the fabric.

Note: This example does not cover out-of-band (OOB) management-port configuration. If an OOB management interface is connected to the fabric and needs to be configured, you also need to create a separate autoconfiguration profile in Cisco Prime Data Center Network Manager (DCNM).

First, you need to determine which tenant will be hosting the load balancer (Figures 4 and 5). If the organization and partition for the tenant do not exist, you will need to define them in DCNM. When you create the partition, note that with DCNM and Cisco NX-OS Software Release 7.1 and later, you can use universal autoconfiguration profiles. For this and the next deployment scenario, use vrf-common-universal-dynamic-lb-es[2] as the partition profile.

This specific partition profile is needed to facilitate the redistribution of leaf-local routing information to the fabric. Please refer to the appendix for details about the command-line interface (CLI) commands.

[1] The CLI command details for this profile can be found in the appendix.
[2] The CLI command details for this profile can be found in the appendix.

Figure 4. Organization Creation

Figure 5. Partition Creation

Next, you need to provision the autoconfiguration profile to which you intend to attach the load balancer (Figure 6). Note that the functions described in this deployment scenario are verified only for matching network and partition autoconfiguration profiles. You should use the traditional forwarding mode profile, defaultnetworkuniversaltfprofile, to help ensure that VIP addresses are discovered throughout the fabric and do not go silent, which may happen as a result of various vendor implementations. Also note the VLAN and mobility domain being used: you will need to use this exact VLAN ID in the load-balancer configuration. In the example used here, the global mobility domain is used to uniquely derive the virtual network ID (VNI) value for the bridge domain to which the load balancer is attached. However, you can instead use the multiple-mobility-domain feature, which allows the choice of a value from the drop-down menu in the network profile configuration. If a virtual appliance with a VDP-capable virtual switch is used (for example, the Cisco Nexus 1000V Switch or the Kernel-based Virtual Machine [KVM] Open vSwitch [OVS]), the mobility domain is not needed. Please refer to the configuration guide for details: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/dfa/configuration/b-dfa-configuration/auto_configuration.html.

Figure 6. Autoconfiguration Profile Creation for Load-Balancer VIP-Attached Subnet

After you plug in your load-balancer appliance (or, in the case of a virtual appliance, spin up the virtual machine) and launch a service, the SVI default gateway is instantiated on the leaf node using autoconfiguration. The VIP address for a configured service is then learned on the leaf node, along with the IP address of the main interface of the load balancer in one-arm mode. The instantiated autoconfiguration profile can be checked from the CLI of the leaf node to which the load balancer is attached:

show fabric database host detail

Active Host Entries
flags: L - Locally inserted, V - vpc+ inserted, R - Recovered, X - xlated Vlan

VLAN  VNI    STATE           FLAGS  PROFILE(INSTANCE)
100   30003  Profile Active  L      defaultnetworkuniversaltfprofile(instance_def_100_1)

Displaying Data Snooping Ports
Interface  Encap  Flags  State
Eth1/1     100    L      Profile Active

VIP addresses configured on the load balancer are learned and can be seen in the MAC address table on the leaf node:

show mac address-table vlan 100
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vpc Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY   Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 100      2020.0000.00aa    static   0        F      F      sup-eth2
* 100      d867.d903.f345    dynamic  0        F      F      Eth1/1

As the configuration of the load balancer dictates, all VIP addresses use the same subnet and terminate on the leaf node:

show ip arp vrf OrganizationABC:PartitionABC

Flags: * - Adjacencies learnt on non-active FHRP router
       + - Adjacencies synced via CFSoE
       # - Adjacencies Throttled for Glean
       D - Static Adjacencies attached to down interface

IP ARP Table for context OrganizationABC:PartitionABC
Total number of entries: 4
Address         Age       MAC Address     Interface
172.16.10.10    00:02:11  d867.d903.f345  Vlan100
172.16.10.11    00:03:02  d867.d903.f345  Vlan100
172.16.10.12    00:03:02  d867.d903.f345  Vlan100
172.16.10.13    00:03:02  d867.d903.f345  Vlan100

The leaf node converts each of the ARP entries for the corresponding VIP addresses to /32 IP address prefixes and shares them with the fabric:

sh ip route vrf OrganizationABC:PartitionABC
IP Route Table for VRF "OrganizationABC:PartitionABC"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

0.0.0.0/0, ubest/mbest: 1/0
    *via 10.201.4.21%default, [200/0], 00:13:50, bgp-65510, internal, tag 65510, segid 50003
172.16.10.0/24, ubest/mbest: 1/0, attached
    *via 172.16.10.1, Vlan100, [0/0], 00:14:01, direct, tag 12345,
172.16.10.1/32, ubest/mbest: 1/0, attached

    *via 172.16.10.1, Vlan100, [0/0], 00:14:01, local, tag 12345,
172.16.10.10/32, ubest/mbest: 1/0, attached
    *via 172.16.10.10, Vlan100, [190/0], 00:06:18, hmm
172.16.10.11/32, ubest/mbest: 1/0, attached
    *via 172.16.10.11, Vlan100, [190/0], 00:06:18, hmm
172.16.10.12/32, ubest/mbest: 1/0, attached
    *via 172.16.10.12, Vlan100, [190/0], 00:06:18, hmm
172.16.10.13/32, ubest/mbest: 1/0, attached
    *via 172.16.10.13, Vlan100, [190/0], 00:06:18, hmm

sh ip bgp vrf OrganizationABC:PartitionABC
BGP routing table information for VRF OrganizationABC:PartitionABC, address family IPv4 Unicast
BGP table version is 10, local router ID is 172.16.10.1
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network            Next Hop         Metric   LocPrf   Weight  Path
*>i0.0.0.0/0          10.201.4.21               100      0       i
*>r172.16.10.0/24     0.0.0.0          0        100      32768   ?
*>r172.16.10.10/32    0.0.0.0          0        100      32768   ?
*>r172.16.10.11/32    0.0.0.0          0        100      32768   ?
*>r172.16.10.12/32    0.0.0.0          0        100      32768   ?
*>r172.16.10.13/32    0.0.0.0          0        100      32768   ?

The load balancer's network connectivity is now provisioned. The load balancer is ready for further service-policy configuration, which can be performed through its CLI or GUI, depending on the vendor of the load balancer in use. Such configuration is beyond the scope of this document.

Deployment Scenario 2: Application Load Balancer with Host Route Injection and Dynamic Routing between Load Balancer and Fabric

In this scenario, the virtual or physical load-balancer appliance is directly attached to a leaf switch. However, the VIP address for the load-balanced application appears to be attached behind a virtual router inside the load balancer. The reachability information about the configured VIP addresses is shared with the fabric using the Open Shortest Path First (OSPF) dynamic routing protocol.

The load balancer establishes dynamic routing protocol peering with the leaf device to facilitate the exchange of route information (Figure 7).

Figure 7. Logical Schema Showing Dynamic Routing Adjacency Between the Load Balancer and the Fabric

Just as in deployment scenario 1, the load balancer is configured with SNAT to facilitate the server return path through the load balancer. Using the OSPF dynamic routing protocol, the load balancer shares reachability information about the entire subnet on which the VIP addresses reside. When the leaf node receives this reachability information, it is redistributed to the MP-BGP control plane and shared throughout the fabric. As a result, the entire fabric knows how to reach the VIP addresses for the applications.

Note: Configuration of the dynamic routing protocol and peering is handled using the autoconfiguration profile and is discussed later in this document.

Data Traffic Path in the Fabric

Scenario 2 is similar in many ways to scenario 1. Figures 8 and 9 show how application data traffic is load-balanced in the programmable fabric in this deployment scenario.

1. Clients external or internal to the fabric request data from the web application, which can be reached through the VIP address (VIP1). The VIP addresses are already configured on the load balancer and shared with the fabric, so any workload or device attached to the fabric in the same VRF instance will be able to reach the desired VIP address.
2. On the basis of the algorithm configured for the load balancer, the received request is prepared for forwarding to one of the web servers on the configuration list. The load balancer performs a NAT operation: it swaps out the client's source IP address in the packet header and swaps in the VIP1 address. This process helps ensure that the return traffic passes through the load balancer. The packet is then forwarded to the web server selected earlier.

Figure 8. Data Traffic Path in the Fabric: Client to Load Balancer to Web Server Path

3. When the load balancer receives the return traffic from the web server, the traffic is subjected to SNAT. This process helps ensure that the client maintains the TCP session of a current web transaction.
4. The load balancer then forwards the return traffic back to the client.

Figure 9. Return Data Traffic Path in the Fabric: Web Server to Load Balancer to Client Return Path

Configuring Autoconfiguration Profiles

In this deployment scenario, the fabric needs to establish dynamic routing adjacency with the load balancer. In other words, the leaf node must automatically establish OSPF routing adjacency with the load balancer, receive prefixes from the load balancer, and then redistribute those prefixes to the BGP control plane of the fabric. In contrast to the first scenario, there is no need to configure a distributed anycast gateway when establishing dynamic routing protocol adjacency between the load balancer and the leaf node. The network autoconfiguration profile that meets this requirement, created specifically for such a scenario, is servicenetworkuniversaldynamicroutinglbprofile[3]. Note that this autoconfiguration profile must be deployed in a partition defined with the vrf-common-universal-dynamic-lb-es[4] partition profile. Using these two profiles together facilitates the redistribution of the correct route information between the fabric and the load balancer (Figures 10 and 11).

Figure 10. Configuring the Partition Using the vrf-common-universal-dynamic-lb-es Profile

[3] The CLI command details for this profile can be found in the appendix.
[4] The CLI command details for this profile can be found in the appendix.

Figure 11. Configuring the Network Segment Used for Dynamic Routing Peering Between the Fabric and Load Balancer

The OSPF routing protocol configuration on the load balancer itself needs to be specified separately, using either the load balancer's CLI or GUI. The following options need to be configured:

- Peering with the fabric using backbone area 0 (equivalent to area 0.0.0.0)
- Default route (0.0.0.0/0) with the next hop pointing to the gateway: in the example here, 10.10.15.1
- OSPF router ID, according to the load-balancer-specific syntax
- Advertisement of the VIP addresses in OSPF
- VLAN ID value that matches the value configured in the autoconfiguration profile in DCNM: in the example here, 301

After the load balancer is connected to the fabric, the leaf node will detect data traffic tagged with VLAN ID 301 on the host port. This detection triggers the instantiation of the autoconfiguration profile. The following configuration is instantiated on the leaf or added to the existing configuration as part of the autoconfiguration process:

show run ospf

feature ospf

router ospf 5
  vrf OrganizationA:PartitionA
    router-id 10.10.15.1

interface Vlan301
  ip router ospf 5 area 0.0.0.0

sh run bgp

router bgp 65510
  vrf OrganizationA:PartitionA
    address-family ipv4 unicast
      redistribute hmm route-map FABRIC-RMAP-REDIST-HOST
      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
      redistribute ospf 5 route-map ospfmap
      maximum-paths ibgp 2
    address-family ipv6 unicast
      redistribute hmm route-map FABRIC-RMAP-REDIST-V6HOST
      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
      maximum-paths ibgp 2

vrf context OrganizationA:PartitionA
  rd auto
  address-family ipv4 unicast
    route-target import 65510:9999
    route-target both auto
  address-family ipv6 unicast
    route-target import 65510:9999
    route-target both auto

show run int vlan 301 expand-port-profile

interface Vlan301
  no shutdown
  vrf member OrganizationA:PartitionA
  ip address 10.10.15.1/24 tag 12345
  ip router ospf 5 area 0.0.0.0

Note the redistribute ospf 5 command in the BGP configuration. This command helps ensure that all VIP address prefixes received from the load balancer are redistributed to the fabric BGP control plane and shared with the rest of the fabric: that is, the entire fabric will learn these prefixes through BGP.
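For comparison, the load-balancer side of this adjacency (the options listed earlier) might look roughly like the following. This is only a sketch in generic Cisco-like syntax using the example addressing from this section; the real commands are entirely vendor specific.

```
! Sketch only - actual syntax varies by load-balancer vendor.
interface vlan 301                        ! must match the VLAN in the DCNM profile
  ip address 10.10.15.2/24
ip route 0.0.0.0/0 10.10.15.1             ! default route toward the fabric gateway
router ospf 1
  router-id 10.10.15.2
  network 10.10.15.0/24 area 0.0.0.0      ! peer with the leaf in backbone area 0
  ! plus vendor-specific commands advertising the VIP addresses into OSPF
```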

The instantiated autoconfiguration profile can be checked from the CLI of leaf node to which the load balancer is attached: sh fabric database host detail Active Host Entries flags: L - Locally inserted, V - vpc+ inserted, R - Recovered, X - xlated Vlan VLAN VNI STATE FLAGS PROFILE(INSTANCE) 301 30001 Profile Active L servicenetworkuniversaldynamicroutinglbprofile(instance_def_301_1) Displaying Data Snooping Ports Interface Encap Flags State Eth1/1 301 L Profile Active As seen in the following CLI output, the load balancer successfully established a routing adjacency with the fabric leaf: sh ip ospf neighbors vrf OrganizationA:PartitionA OSPF Process ID 5 VRF OrganizationA:PartitionA Total number of neighbors: 1 Neighbor ID Pri State Up Time Address Interface 10.10.15.2 1 FULL/DR 00:00:03 10.10.15.2 Vlan301 The next CLI output confirms that the leaf received valid /32 IP routes through OSPF. Here, each such IP route represents a VIP address configured on the load balancer: sh ip route vrf OrganizationA:PartitionA IP Route Table for VRF "OrganizationA:PartitionA" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 0.0.0.0/0, ubest/mbest: 1/0 *via 10.201.4.21%default, [200/0], 00:45:15, bgp-65510, internal, tag 65510, segid 50005 10.10.15.0/24, ubest/mbest: 1/0, attached *via 10.10.15.1, Vlan301, [0/0], 00:45:28, direct, tag 12345, 10.10.15.1/32, ubest/mbest: 1/0, attached *via 10.10.15.1, Vlan301, [0/0], 00:45:28, local, tag 12345, 172.16.10.10/32, ubest/mbest: 1/0 *via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra 172.16.10.11/32, ubest/mbest: 1/0 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 16 of 32

    *via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra
172.16.10.12/32, ubest/mbest: 1/0
    *via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra
172.16.10.13/32, ubest/mbest: 1/0
    *via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra

The following CLI output shows that redistribution from OSPF to BGP works as expected:

sh ip bgp vrf OrganizationA:PartitionA

BGP routing table information for VRF OrganizationA:PartitionA, address family IPv4 Unicast
BGP table version is 35, local router ID is 10.10.15.1
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network             Next Hop         Metric   LocPrf   Weight  Path
*>i0.0.0.0/0           10.201.4.21                  100        0  i
*>r10.10.15.0/24       0.0.0.0               0      100    32768  ?
*>r172.16.10.10/32     0.0.0.0              41      100    32768  ?
*>r172.16.10.11/32     0.0.0.0              41      100    32768  ?
*>r172.16.10.12/32     0.0.0.0              41      100    32768  ?
*>r172.16.10.13/32     0.0.0.0              41      100    32768  ?

In addition, the next two sets of CLI output show the MAC address and the respective ARP entry of the load balancer's interface:

sh mac address-table vlan 301

Legend:
  * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
  age - seconds since last seen, + - primary entry using vpc Peer-Link
  VLAN      MAC Address      Type      age    Secure  NTFY  Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 301       d867.d903.f345   dynamic   10     F       F     Eth1/1

sh ip arp vrf OrganizationA:PartitionA

Flags: * - Adjacencies learnt on non-active FHRP router
       + - Adjacencies synced via CFSoE
       # - Adjacencies Throttled for Glean
       D - Static Adjacencies attached to down interface

IP ARP Table for context OrganizationA:PartitionA
Total number of entries: 1
Address         Age        MAC Address      Interface
10.10.15.2      00:16:51   d867.d903.f345   Vlan301

As a summary, Figure 12 depicts the logical routing topology of this scenario.

Figure 12. Logical Routing Topology

Deployment Scenario 3: Application Load Balancer with Static Routing Between Load Balancer and Fabric

This scenario is very similar to scenario 2: the VIP address for the load-balanced application is configured on the load balancer. However, in scenario 3 the load balancer does not establish a dynamic routing protocol adjacency with the leaf node in the fabric. Instead, the reachability information for the VIP addresses is configured on the leaf node and the load balancer using static routes (Figure 13).

Figure 13. Logical Schema Showing the Static Routing Between the Load Balancer and the Fabric

Just as in the previous deployment scenarios, the load balancer is configured with SNAT to facilitate the server return path through the load balancer. Static routes toward the VIP addresses need to be configured on a directly attached leaf node: in the example here, on Leaf-1. The next hop for these prefixes should point to the load balancer's interface IP address: in the example here, 10.10.20.2. In addition, these static routes must be redistributed to the MP-BGP control plane of the fabric to provide fabric-wide reachability to the VIP addresses. The static routes to the VIP addresses, together with their redistribution, are configured in DCNM as part of the autoconfiguration profile and are dynamically instantiated when the load balancer is attached to the network. As a result, the entire fabric will know how to reach the VIP addresses for the respective applications.

Note that automated configuration of the static routes happens as part of the partition profile autoconfiguration. This means that any network autoconfiguration profile associated with that partition profile or VRF will also trigger automated configuration of the static routes on a given leaf node.

Data Traffic Path in the Fabric

Figures 14 and 15 show how application data traffic is load-balanced in the programmable fabric in this deployment scenario.

1. Clients external or internal to the fabric request data from the web application, which can be reached through the VIP address (VIP1). The VIP addresses are already configured on the load balancer. Static routes to the VIP addresses are configured on the Leaf-1 node and are redistributed to the fabric control plane, so any workload or device attached to the fabric in the same VRF instance will be able to reach the desired VIP address.

2. On the basis of the algorithm configured for the load balancer, the received request is prepared for forwarding to one of the web servers on the configured list.
The load balancer performs a NAT operation: it swaps out the client's source IP address in the packet header and swaps in the VIP1 address. This process helps ensure that the return traffic passes through the load balancer. The packet is then forwarded to the web server selected earlier.

Figure 14. Data Traffic Path in the Fabric: Client to Load Balancer to Web Server Path

3. When the load balancer receives the return traffic from the web server, the traffic is again subjected to NAT. This process helps ensure that the client maintains the TCP session of the current web transaction or the UDP data stream of the given application.

4. The load balancer then forwards the return traffic back to the client.

Figure 15. Return Data Traffic Path in the Fabric: Web Server to Load Balancer to Client Return Path
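The static routes described in this scenario, pushed to Leaf-1 by DCNM as part of the partition profile autoconfiguration, might look like the following sketch. The next-hop address 10.10.20.2 is the load-balancer interface address given in the text; the VIP prefixes and the route-map name are hypothetical examples:

```
! Sketch only: 172.16.20.10/32 and 172.16.20.11/32 are hypothetical VIP
! prefixes; 10.10.20.2 is the load-balancer interface address from the text.
vrf context OrganizationA:PartitionA
  ip route 172.16.20.10/32 10.10.20.2
  ip route 172.16.20.11/32 10.10.20.2

! The static routes must also be redistributed into the fabric MP-BGP
! control plane; STATIC-TO-BGP is a hypothetical route-map name.
router bgp 65510
  vrf OrganizationA:PartitionA
    address-family ipv4 unicast
      redistribute static route-map STATIC-TO-BGP
```

After redistribution, the VIP prefixes should appear on the other leaf nodes as internal BGP routes in the same VRF instance, just as the OSPF-learned VIP routes did in scenario 2.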