Integration of Hypervisors & L4-7 Services with ACI Maurizio Portolani, Distinguished Technical Marketing Engineer Azeem Suleman, Architect Engineering BRKACI-2006
Agenda
- Introduction to ACI
- Hypervisor Integration
  - Hypervisor Integration Overview
  - VMware vCenter Integration
  - Microsoft SCVMM & Azure Pack Integration
  - OpenStack Integration
- Service Insertion
  - Service Insertion versus Service Graph
  - Example Design Options
  - How to Configure
  - Ecosystem
Cisco ACI: Logical Network Provisioning of Stateless Hardware
[Diagram: Web, App, and DB tiers plus the outside (tenant VRF), connected through QoS/filter/service policies; the Application Policy Infrastructure Controller (APIC) manages a scale-out, penalty-free overlay ACI fabric.]
ACI Network Profile: Policy-Based Fabric Management
- Extends the principle of Cisco UCS Manager service profiles to the entire fabric
- A network profile is a stateless definition of application requirements: application tiers, connectivity policies, Layer 4-7 services, expressed as an XML/JSON schema
- Fully abstracted from the infrastructure implementation: removes dependencies on the infrastructure and is portable across different data center fabrics

Network Profile: defines application-level metadata (pseudo-code example):

<Network-Profile = Production_Web>
  <App-Tier = Web>
    <Connected-To = Application_Client>
    <Connection-Policy = Secure_Firewall_External>
    <Connected-To = Application_Tier>
    <Connection-Policy = Secure_Firewall_Internal & High_Priority>
    ...
  <App-Tier = DataBase>
    <Connected-To = Storage>
    <Connection-Policy = NFS_TCP & High_BW_Low_Latency>
    ...

The network profile fully describes the application connectivity requirements.
Multi-Hypervisor-Ready Fabric
- Integrated gateway for VLAN and VXLAN networks, from virtual to physical
- Normalization of VXLAN and VLAN encapsulations
- The customer is not restricted by a choice of hypervisor: the fabric is multi-hypervisor-ready (VMware ESX, Microsoft Hyper-V, Red Hat KVM, physical servers)
[Diagram: the application admin and network admin drive the APIC; hypervisor management attaches VLAN and VXLAN virtual networks to the ACI fabric.]
ACI Layer 4-7 Service Integration: Centralized, Automated, and Supports the Existing Operational Model
- Service insertion architecture for physical and virtual services
- Helps enable administrative separation between application-tier policy and service definition
- APIC as the central point of network control with policy coordination
- Automation of service bring-up/tear-down through a programmable interface
[Diagram: a service graph chains firewall and load-balancer functions between Web tier A and App tier B; the service admin defines the chain, the application admin consumes it.]
Basics of ACI Forwarding: How to Create an L2 Domain?
- Create a Bridge Domain
- Disable Unicast Routing on the Bridge Domain
- Associate the Bridge Domain with a VRF
The association with the VRF exists because of the object model; the hardware won't program any VRF if the Bridge Domain is configured as L2-only.
Basics of ACI Forwarding: How to Create an L2 Domain with Routing?
- Create a Bridge Domain
- Enable Unicast Routing
- Associate the Bridge Domain with the VRF
- Configure the default gateway by creating the Subnet on the Bridge Domain
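The steps above collapse into a single object-tree POST against the APIC. A minimal sketch, assuming the standard ACI REST object model (fvBD, fvRsCtx, fvSubnet); the BD/VRF names and gateway address are made up for illustration:

```python
# Illustrative sketch only: building the REST payload for the routed Bridge
# Domain described above. Class and attribute names (fvBD, fvRsCtx, fvSubnet,
# unicastRoute) follow the ACI object model; all names here are hypothetical.

def build_routed_bd(bd_name, vrf_name, gateway_cidr):
    """JSON body for POST /api/mo/uni/tn-<tenant>.json."""
    return {
        "fvBD": {
            "attributes": {"name": bd_name, "unicastRoute": "yes"},
            "children": [
                # Associate the Bridge Domain with the VRF
                {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf_name}}},
                # The Subnet on the BD becomes the default gateway
                {"fvSubnet": {"attributes": {"ip": gateway_cidr}}},
            ],
        }
    }

payload = build_routed_bd("BD1", "VRF1", "10.0.0.1/24")
```

Setting `unicastRoute` to `"no"` and omitting the subnet would give the L2-only variant from the previous slide.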
How Does the Bridge Domain Forward Traffic?
- Based on the mapping database: configured by enabling hardware-proxy and disabling ARP flooding
- With flood-and-learn: configured by enabling unknown unicast flooding and enabling ARP flooding
Bridge Domain Configuration
- Association with the VRF
- Enables or disables the use of the mapping database
- Enables or disables routing
Hypervisor Interaction with ACI
The main advantage of integrated mode is the dynamic allocation of the encapsulation.
- Non-integrated mode: the ACI fabric acts as an IP/Ethernet transport; encapsulations (e.g. VLAN 10) are manually allocated; separate policy domains for physical and virtual
- Integrated mode: the ACI fabric acts as a policy authority; encapsulations (VLAN, VXLAN) are normalized and dynamically provisioned
Hypervisor Integration with ACI: Control Channel (VMM Domains)
- A relationship is formed between the APIC and each Virtual Machine Manager (VMM); multiple VMMs are likely on a single ACI fabric
- Each VMM and its associated virtual hosts are grouped within the APIC into a VMM Domain (e.g. vCenter DVS, vCenter AVS, SCVMM)
- There is a 1:1 relationship between a virtual switch and a VMM Domain
ACI Fabric Integrated Overlay Data Path: Encapsulation Normalization
- All traffic within the ACI fabric is encapsulated with an extended VXLAN header over the IP fabric
- External VLAN and VXLAN tags are mapped at ingress to an internal VXLAN tag (e.g. 802.1Q VLAN 50, VXLAN VNID 11348, and NVGRE VSID 7456 are all normalized to fabric VNIDs)
- Forwarding is not limited to, nor constrained within, the encapsulation type or encapsulation overlay network
- External identifiers are localized to the leaf or leaf port, allowing re-use and/or translation if required
Hypervisor Integration with ACI: VMM Domains & VLAN Encapsulation
- VXLAN allows 16M virtual networks, but a VLAN ID only gives 4K EPGs (12 bits)
- Scale by creating domains of 4K EPGs each
- Map EPGs to VMM domains based on the scope of live migration: place a VM anywhere, live-migrate within the VMM domain
- The same fabric VNID (e.g. 6032) can map to different local VLANs, e.g. VLAN 5 in one VMM domain and VLAN 16 in another
Hypervisor Integration with ACI: Endpoint Discovery
Virtual endpoints are discovered for reachability & policy purposes via two methods:
- APIC control-plane learning: out-of-band handshake (vCenter APIs) or in-band handshake (OpFlex-enabled hosts: AVS, Hyper-V, etc.)
- Data-path learning: distributed switch learning
LLDP is used to resolve the virtual host ID to the attached port on the leaf node (non-OpFlex hosts).
Policy Provisioning
- Resolution Immediacy: determines when the EPG policies are pushed to the software Policy Element on the leaf switch; can be Immediate or On-Demand
- Deployment Immediacy: determines when the Policy Element programs the policies in the hardware; can be Immediate or On-Demand
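Both knobs live on the EPG-to-domain association. A hedged sketch, assuming the ACI object model's fvRsDomAtt class with its resImedcy/instrImedcy attributes ("lazy" is the API spelling of On-Demand); the domain DN below is hypothetical:

```python
# Illustrative sketch: associating an EPG with a VMM domain and choosing the
# two immediacy knobs described above. Names/DNs are made up.

def attach_epg_to_vmm_domain(vmm_domain_dn, resolution="lazy", deployment="lazy"):
    """JSON body for POST under the EPG, e.g. .../epg-<name>.json."""
    assert resolution in ("immediate", "lazy", "pre-provision")
    assert deployment in ("immediate", "lazy")
    return {
        "fvRsDomAtt": {
            "attributes": {
                "tDn": vmm_domain_dn,
                "resImedcy": resolution,    # when policy reaches the leaf agent
                "instrImedcy": deployment,  # when policy is programmed in hardware
            }
        }
    }

payload = attach_epg_to_vmm_domain("uni/vmmp-VMware/dom-Domain1")
```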
ACI and vSphere Integration
Three options: deployment with vDS, deployment with AVS, deployment with vShield.
Configuration steps:
- Preparing the fabric
- Defining the VMM
- Adding host uplinks to the APIC vDS
- Connecting vNICs to port-groups
VMware Integration: Three Different Options
- Distributed Virtual Switch (DVS): encapsulations: VLAN; installation: native; VM discovery: LLDP; software/licenses: vCenter with Enterprise+ license
- vCenter + vShield: encapsulations: VLAN, VXLAN; installation: native; VM discovery: LLDP; software/licenses: vCenter with Enterprise+ license, vShield Manager with vShield license
- Application Virtual Switch (AVS): encapsulations: VLAN, VXLAN; installation: VIB through VUM or console; VM discovery: OpFlex; software/licenses: vCenter with Enterprise+ license
EPGs are Port-Groups
Hypervisor Integration with ACI 11.0: EPG Classification via Port-Groups
A port-group is created for each EPG, with a specific encapsulation allocated per EPG (e.g. VXLAN VNID 5789 or 802.1Q VLAN 50). VMs are placed within the port-group defined for each EPG; traffic is encapsulated with the VLAN or VXLAN assigned to that port-group on that port and forwarded upstream to the ToR leaf, where the leaf VTEP normalizes it into the fabric VXLAN carrying the group-based policy tag.
Each VMM controller represents a vCenter Datacenter
Supporting Multiple vCenter DVS
- The VMM controller represents a Datacenter within a vCenter
- A VMM Domain represents a DVS inside a vCenter
- The same vCenter Datacenter (e.g. VC IP 1, DC1) can be added into different VMM domains; each will create a corresponding DVS
vCenter as Seen by APIC
- Can I have more than one APIC vDS per Datacenter? Yes: VMM domains 1 and 2 can both use VMM controller vCenter1/Datacenter1, so vds1 and vds2 appear under Datacenter 1 in vCenter
- Can I have more than one vCenter Datacenter per APIC? Yes: VMM domain 3 can use VMM controller vCenter1/Datacenter2, so vds3 appears under Datacenter 2
ACI Hypervisor Integration: VMware DVS/vShield
1. Cisco APIC and VMware vCenter Server / vShield perform the initial handshake
2. APIC creates the Virtual Distributed Switch
3. The VI/server admin attaches hypervisors to the VDS
4. APIC learns the location of each ESX host through LLDP
5. The APIC admin creates the application policy (Application Network Profile with EPG WEB, EPG APP, EPG DB, firewall and load-balancer services)
6. APIC automatically maps EPGs to port groups
7. Port groups (WEB, APP, DB) are created on the VDS
8. The VI/server admin instantiates VMs and assigns them to port groups
9. APIC pushes the policy to the ACI fabric
ACI Hypervisor Integration: AVS
1. Cisco APIC and VMware vCenter perform the initial handshake
2. APIC creates the AVS VDS (Application Virtual Switch)
3. The VI/server admin attaches hypervisors to the VDS
4. APIC learns the location of each ESX host through OpFlex (OpFlex agent on the host)
5. The APIC admin creates the application policy (Application Network Profile with EPG WEB, EPG APP, EPG DB, firewall and load-balancer services)
6. APIC automatically maps EPGs to port groups
7. Port groups (WEB, APP, DB) are created on the AVS
8. The VI/server admin instantiates VMs and assigns them to port groups
9. APIC pushes the policy to the ACI fabric
Fabric Preparation for ACI Integration with vDS
The AEP (Attachable Entity Profile) associates domains (VMM, physical, external) and their VLAN pools to leaf interfaces and switches. It provides:
- The span of the VLAN pools and domains to the leaf ports
- Physical interface policies for the vCenter vDS, e.g. LACP, LLDP, CDP, MTU
ACI Configuration of a VMM Domain for vDS
- Name of the VMM Domain
- Type of vSwitch (DVS or AVS)
- Associated Attachable Entity Profile (AEP)
- VLAN pool
- vCenter administrator credentials
- vCenter server information
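The checklist above can be expressed as one REST payload. A sketch under the assumption that the vmmDomP/vmmUsrAccP/vmmCtrlrP/infraRsVlanNs classes are used as in the ACI object model (a real payload also has the controller reference the credentials object); every name, IP, and credential below is made up:

```python
# Illustrative sketch: the pieces of a vCenter VMM domain as one REST payload.
# All names, addresses and credentials are hypothetical.

def build_vmm_domain(name, vcenter_ip, datacenter, vlan_pool_dn, user, pwd):
    """JSON body for POST /api/mo/uni/vmmp-VMware/dom-<name>.json."""
    return {
        "vmmDomP": {
            "attributes": {"name": name},
            "children": [
                # vCenter administrator credentials
                {"vmmUsrAccP": {"attributes": {"name": "admin-creds",
                                               "usr": user, "pwd": pwd}}},
                # vCenter server information (controller = vCenter + Datacenter)
                {"vmmCtrlrP": {"attributes": {"name": "vcenter1",
                                              "hostOrIp": vcenter_ip,
                                              "rootContName": datacenter}}},
                # Dynamic VLAN pool used to allocate EPG encapsulations
                {"infraRsVlanNs": {"attributes": {"tDn": vlan_pool_dn}}},
            ],
        }
    }

payload = build_vmm_domain("Domain1", "192.0.2.10", "DC1",
                           "uni/infra/vlanns-[pool1]-dynamic",
                           "administrator", "secret")
```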
DVS Uplink Interface Policies
LLDP/CDP:
- The vDS supports only one of CDP/LLDP, not both at the same time; LLDP takes precedence if both are defined
- To enable CDP, the AEP should contain an AccPortGrp with LLDP disabled and CDP enabled
- By default LLDP is enabled and CDP is disabled
LACP:
- Enables/disables LACP on the uplink port-group and changes the load-balancing algorithm on VM port-groups:
  LACP mode Active -> IP-hash; Passive -> IP-hash; Off (MAC pinning) -> source port
ACI View of vSphere
Preparing the Fabric for VXLAN Integration: Configuration Steps for AVS and vShield
- The vmk interface provides the VTEP; it gets its IP address via a DHCP request on the infrastructure VLAN
- The AVS becomes a member of the infrastructure tenant
- You need to configure the infrastructure tenant for DHCP relay
Configure AEP with Enable Infra VLAN
VTEP addresses are allocated to nodes in the fabric automatically based on the pool configured at fabric initialization (e.g. infra range 10.0.0.0/16: Spine 1 10.0.104.92, Spine 2 10.0.104.93, Leaf 1 10.0.104.95, Leaf 2 10.0.104.96, Leaf 3 10.0.104.97). Without AVS, these addresses are internal to the fabric and won't overlap with anything in the existing network.
With AVS or with vShield (infra range 10.0.0.0/16; infra VLAN 4093 by default), a VMK interface is created on the ESXi host (e.g. VMK1: 10.0.104.98) and an address from the ACI infra pool is used. The ESXi host puts a route to the infra network in its table, pointing to the new VMK interface:
~ # esxcfg-route -l
VMkernel Routes:
Network     Netmask       Gateway       Interface
10.0.0.0    255.255.0.0   Local Subnet  vmk1
DHCP configuration In the infra tenant, create a DHCP relay policy as follows:
Application Virtual Switch (AVS): Integration Overview
- OpFlex control protocol: control channel for VM attach/detach and link-state notifications; southbound OpFlex API
- VEM extension to the fabric
- vSphere 5.0 and above
- BPDU Filter/BPDU Guard
- SPAN/ERSPAN
- Port-level statistics collection
AVS Forwarding
Unicast traffic:
- Locally originated unicast traffic to local destinations is forwarded by the AVS
- Locally originated traffic destined to an unknown MAC is forwarded to the leaf; if the encap type is VXLAN, the packet is sent to the fabric tunnel endpoint (FTEP) IP
Multi-destination traffic:
- Traffic from the leaf with a source MAC belonging to a local endpoint is dropped
- Locally originated broadcast and multicast traffic is flooded locally and forwarded to the leaf after adding the appropriate encap; if the encap type is VXLAN, the destination IP of the outer packet is set to the VMM GIPo multicast address
Multicast Configuration
Multicast configuration is required on ACI in order to support broadcast and multicast from the VMs. Broadcast and multicast traffic from a VM is sent to a unique GIPo multicast group (VMM-wide); broadcast and multicast traffic to a VM is sent to an EPG-specific multicast group, which is why you configure a pool of multicast IPs.
- Multicast Address: a multicast address that represents the AVS VMM-wide; used from the VM to the fabric
- Multicast Address Pool: a unique pool large enough to accommodate all the EPGs; used by the fabric toward the destination EPG
AVS Switching Modes
- Non-switching mode: punt to the leaf for all traffic, intra-EPG included
- Local switching mode: intra-EPG traffic is switched locally; inter-EPG traffic is punted to the leaf
Switching Mode and Encapsulation
- Local switching: supports VXLAN or VLAN encapsulation. Intra-EPG traffic is switched locally on the AVS; inter-EPG traffic is forwarded to the leaf, where policy is enforced
- FEX mode (no local switching): supports only VXLAN encapsulation. This mode sends all traffic (intra-EPG included) to the leaf switch
- VLAN versus VXLAN encapsulation: with VXLAN, only the infra VLAN needs to be extended to the AVS hosts, since all traffic is VXLAN-encapsulated and carried on the infrastructure VLAN. With VLAN encapsulation you have more VLANs to trunk
ACI Configuration of a VMM Domain for AVS
- Name of the VMM Domain
- Type of vSwitch (DVS or AVS)
- Switching mode (FEX or normal)
- Associated Attachable Entity Profile (AEP)
- VXLAN pool
- Multicast pool
- vCenter administrator credentials
- vCenter server information
AVS with ACI 11.1: EPG Classification via VM Attributes
- Before this feature, the EPG was derived from the encapsulation ID (VLAN, VXLAN VNID) in the packet
- This feature allows granular EPG derivation based on various attributes: VM attributes (Guest OS, VM name, VM id, vNIC id, hypervisor, DVS port-group, DVS, Datacenter, custom attribute, set in vCenter) and VM traffic attributes (MAC address, IP address)
- Only available for VMs connected to the Cisco AVS; future support for other hypervisor switches (VMware DVS, Microsoft vSwitch, OVS)
AVS with ACI 11.1: EPG Classification via VM Attributes (2)
- Two categories of attributes are supported in the 11.1 release: VM attributes (set by the server administrator at VM creation time) and VM traffic attributes (VM MAC/IP address or the L4 port used by the application)
- Any endpoint placed within a port group on the vSwitch can be further classified based on specific VM attributes
- Classification (and re-classification) is dynamic
AVS with ACI 11.1: EPG Classification via VM Attributes (3)
1. The administrator creates a new vDS (AVS)
2. APIC retrieves the hypervisor state (VM state & VM attributes) and initiates a listener process for any changes/updates
3. The APIC admin creates an EPG == VM Attribute x on VMM Domain A
4. APIC distributes the VM attribute policies to the leaf nodes
5. The VI/server admin boots new VMs with the desired VM attributes and connects them to the base port-group
6. The AVS notifies the leaf of the VM attach via the OpFlex channel
7. The leaf determines the attribute-to-EPG classification
8. The leaf pushes the EPG encapsulation binding to the AVS via the OpFlex channel
9. The AVS forwards traffic with the correct EPG label (encapsulation, e.g. 802.1Q VLAN 50)
EPG Classification via VM Attributes, Use Case 1: Isolate a Malicious VM
Problem: a vulnerability is detected in a particular type of operating system (e.g. Windows). The network security administrator would like to isolate all Windows VMs.
Solution: define a security EPG with the criterion Operating System = Windows. No contracts are provided or consumed by this EPG, which stops all inter-EPG communication for the matching VMs. No VM attach/detach or placement of VMs into a different port-group is needed.
EPG Classification via VM Attributes, Use Case 2: Security Across Zones
Problem: VMs belonging to different departments (e.g. HR, Sales) or different roles (production, test) are placed in the same port-group, but isolation across departments is required (e.g. an HR Web VM should not be able to talk to a Sales Web VM).
Solution: define EPGs that match if the VM name contains a given string (e.g. "HR", "Sales"). Each attribute-based EPG can have its own security policies.
Distributed Firewall Implementation
- ACI leaf: a reflexive ACL in hardware is programmed to allow TCP packets only if the ACK flag is set; this is done by setting the stateful flag in the filter
- AVS vSwitch: maintains a connection table to track flows. On receiving the first TCP SYN packet, the AVS creates a flow-table entry; packets belonging to a flow that has no entry are dropped by the AVS
- In a typical firewall solution, the first packet is always sent to a policy engine. In the ACI fabric, the hardware acts as the policy store, so there is no performance penalty for the policy lookup
Hardware-Assisted Stateful Firewall
Policy enforcement is done at the leaf, which evaluates the stateless policy:
  Src class A, src port *, dest class B, dest port 80, flag * -> Allow
  Src class B, src port 80, dest class A, dest port *, flag ACK -> Allow (reflexive)
Connection tracking is done at the AVS:
- On a TCP SYN received from the VM (consumer A), the AVS creates a flow-table entry (VLAN, proto, src IP, src port, dst IP, dst port) and forwards the packet to the leaf; flow state is created only for TCP SYN packets received from the pNIC/VM
- On a flow-table hit, packets are forwarded to the leaf
- The response from the provider (B) is matched against the flow table before being delivered to the destination VM
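A toy model (my own illustration, not Cisco code) of the connection tracking just described: an entry is created only on a TCP SYN from the VM, a reflexive entry covers the return path, and anything without an entry is dropped:

```python
# Toy flow table mimicking the AVS behavior on the slide: SYN creates state
# (plus a reflexive entry for the return flow); non-matching packets drop.

class AvsFlowTable:
    def __init__(self):
        self.flows = set()  # (proto, src_ip, src_port, dst_ip, dst_port)

    def packet_from_vm(self, proto, src, sport, dst, dport, syn=False):
        key = (proto, src, sport, dst, dport)
        if syn:  # first SYN from the VM creates the flow state
            self.flows.add(key)
            self.flows.add((proto, dst, dport, src, sport))  # reflexive entry
        return key in self.flows  # forward to the leaf only on a table hit

    def packet_to_vm(self, proto, src, sport, dst, dport):
        # deliver to the VM only if the flow table has an entry; else drop
        return (proto, src, sport, dst, dport) in self.flows

ft = AvsFlowTable()
assert ft.packet_from_vm("tcp", "IP_A", 1234, "IP_B", 80, syn=True)  # SYN opens flow
assert ft.packet_to_vm("tcp", "IP_B", 80, "IP_A", 1234)              # return allowed
assert not ft.packet_to_vm("tcp", "IP_C", 80, "IP_A", 1234)          # no entry: drop
```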
What is the ACI vcenter Plugin? A VMware vcenter Web Client plugin (vsphere 5.5) for ACI Empowers virtualization admins to define network connectivity independently of the networking team while sharing the same infrastructure Virtualization admin is able to configure network connectivity (subnets, port-groups) with tenant isolation Focused on simplicity User does not need to understand the ACI Policy Model Follows the vcenter Web Client GUI standards No configuration of in-depth networking stuff this is done through APIC by the networking expert
ACI-vCenter Interactions Current Model Network Team APIC Cluster ACI Port Group Virtualization Team ESXi Hypervisors vcenter
ACI-vCenter Interactions vcenter Plugin Model Network Team 3 2 APIC Cluster ACI Port Group Virtualization Team Virt. Admin 1 ESXi Hypervisors vcenter ACI Plugin
VMware vcenter Plugin View
VMware vcenter Plugin View
VMware vcenter Plugin View
Summary: Two Deployment Modes with VMware
Using ACI with vDS:
- You don't need AVS in order to deploy ACI
- ACI interacts with the VMware vDS to create port-groups that provide the extension of EPGs
- You can have one tier of switches between the servers and the leaf
Using ACI with AVS:
- You get some additional features with AVS
- You can have more hops in the path between the ESX servers and the leaf
Microsoft Interaction with ACI: Two Modes of Operation
- Integration with SCVMM: policy management through the APIC; software/license: Windows Server with Hyper-V, SCVMM; VM discovery: OpFlex; encapsulations: VLAN; plugin installation: manual
- Integration with Azure Pack (a superset of the SCVMM integration): policy management through the APIC or through Azure Pack; software/license: Windows Server with Hyper-V, SCVMM, Azure Pack (free); VM discovery: OpFlex; encapsulations: VLAN, NVGRE; plugin installation: integrated
EPGs are VM Networks
Microsoft Interaction with ACI
- ACI with SCVMM only (without Azure Pack): the APIC and SCVMM communicate with each other for network management; Azure Pack is not involved. Networks are created in the APIC and published as VM Networks in SCVMM; compute is provisioned in SCVMM and can consume these networks.
- ACI with SCVMM and Azure Pack: ACI integrates with Azure Pack to provide a self-service experience for tenants. The ACI resource provider in Azure Pack drives the APIC for network management. Networks are created in SCVMM and are available in Azure Pack for the respective tenants. ACI L4-L7 capabilities for load balancers (F5 and Citrix) and stateless firewalls are provided for tenants.
ACI Hypervisor Integration: MSFT SCVMM
1. Cisco APIC and MSFT SCVMM perform the initial handshake
2. APIC creates the Hyper-V virtual switch
3. The SCVMM admin attaches hypervisors to the virtual switch
4. APIC learns the location of each Hyper-V host through OpFlex (OpFlex agent on the host)
5. The APIC admin creates the application policy (Application Network Profile with EPG WEB, EPG APP, EPG DB, firewall and load-balancer services)
6. APIC automatically maps EPGs to VM Networks
7. VM Networks (WEB, APP, DB) are created in SCVMM
8. The SCVMM admin instantiates VMs and assigns them to VM Networks
9. APIC pushes the policy to the ACI fabric
Building Blocks for ACI Integration with Microsoft Hyper-V (1)
APIC agent on SCVMM: a Windows system service (EXE) that runs on SCVMM with administrative credentials. It communicates with the APIC over the ACI REST interface and with SCVMM through the PowerShell interface. It performs the following functions:
- Sets up the APIC logical switch, networks and encapsulations
- Adds/deletes VM Networks in SCVMM based on the information in the APIC
- Publishes operational faults in the APIC when it encounters errors in its operation
Building Blocks for ACI Integration with Microsoft Hyper-V (2)
OpFlex agent on the Hyper-V host: a Windows system service (EXE) installed on each Hyper-V host, running with administrative credentials. It communicates with the directly attached ACI leaf over the OpFlex protocol, using the infra channel, and runs on the VTEP interface in the host; the IP for the VTEP interface is assigned by the APIC DHCP service. This agent:
- Sends VM attach and detach events to the leaf
- Sends inventory information (VM name, NIC name, statistics) to the leaf, which is then displayed in the APIC UI
On receiving EP attach and detach notifications, the leaf downloads policies from the policy manager and applies them to local leaf ports.
Microsoft Azure Pack Integration
Integration with Microsoft requires: Windows Server 2012, System Center 2012 R2 with Service Provider Foundation (SPF), and Windows Azure Pack.
- Azure Pack provides a single pane of glass for the definition, creation, and management of cloud services (VMs, SQL, web sites, apps, databases)
- Divided into a Provider (admin) portal and a Consumer self-service (tenant) portal
- The Cisco ACI service plugin enables management of the network infrastructure through the APIC REST API
ACI Azure Pack Integration
(The APIC admin provides the basic infrastructure.)
1. The Azure Pack tenant creates VM networks / the application policy
2. Network profiles are pushed to the APIC
3. VLANs are allocated for each EPG
4. VM Networks are created through the SCVMM plugin
5. The tenant instantiates VMs
6. The OpFlex agent indicates the EP attach to the attached leaf when the VM starts
7. Policy is pulled on the leaf where the EP attaches
Microsoft Azure Pack Integration: Admin Experience
- Add and configure service providers for this deployment (APIC IP address, login credentials, etc.)
- Usage and billing statistics per user, and other admin functions
Microsoft Azure Pack Integration: Tenant Experience
- Services this account has access to
- Resources of the ACI service currently created and consumed by this tenant
- Application Network Profiles are created through Azure Pack and pushed to the APIC using REST APIs
ACI as a Resource Provider
The ACI resource provider has three sub-components:
- Tenant portal extension: implements the user interface and APIs for the tenant portal to expose ACI capabilities
- Admin portal extension: implements the user interface and APIs for the admin portal
- Resource provider backend: the component that interacts with SCVMM via Service Provider Foundation PowerShell and with the APIC using the REST interface
OpenStack Networking: Neutron Model
- Network: logical network, an L2/broadcast domain
- Router: logical router, an L3 domain
- Subnet: subnet (OpenStack IPAM/DHCP)
- Security Group + Security Group Rule: group-based ACL
- Plus Tenant, Port, and Network:external objects
OpenStack with ACI: APIC ML2 Driver Mapping (Neutron object -> APIC object)
- Project -> Tenant
- Network -> EPG + Bridge Domain
- Subnet -> Subnet
- Security Group + Rule -> iptables (outside of APIC)
- Router -> Contract
- Network:external -> L3Out / Outside EPG
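The mapping above expressed as a lookup table, e.g. for a script that reports what the ML2 driver will create for a given Neutron object (the key spellings are my own; the mapping itself is from the slide):

```python
# Neutron-to-APIC object mapping for the APIC ML2 driver, as a simple lookup.

NEUTRON_TO_APIC = {
    "project":             "Tenant",
    "network":             "EPG + Bridge Domain",
    "subnet":              "Subnet",
    "security_group_rule": "iptables (outside of APIC)",
    "router":              "Contract",
    "network:external":    "L3Out / Outside EPG",
}

def apic_object_for(neutron_object):
    """Return the APIC object the ML2 driver creates for a Neutron object."""
    return NEUTRON_TO_APIC[neutron_object.lower()]
```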
ACI OpenStack Integration with the ML2 Plugin
1. The OpenStack tenant creates networks, subnets, security groups and policy through Neutron
2. Network profiles are automatically pushed to the APIC
3. The APIC admin creates the application policy
4. The tenant instantiates VMs through Nova (Open vSwitch on each hypervisor)
5. The APIC pushes the policy to the ACI fabric
OpenStack Networking: Limitations of the Networking Model
Cloud application model: resilient/fault-tolerant, scalable tiers; built around loosely coupled services; no broadcast/multicast; doesn't care about IP addresses.
Neutron model: L2/broadcast is the base API; built from networks, routers and subnets; based on existing networking models; no concept of dependency mapping or intent.
OpenStack Networking: Why a Policy Model Is Better
- Separation of concerns: separate application requirements (abstract application API) from low-level/detailed APIs; separate the tenant from the operator/admin
- Dependency mapping: build self-documenting dependency maps of the tiers of an application (e.g. Service A consumes Service B and Service C)
- Network services: define network service chains (e.g. a firewall) between tiers of an application without low-level configuration
OpenStack Networking: Introducing Group-Based Policy
- An intent-based API for describing application requirements
- Separates the concerns of tenants and operators
- Captures dependencies between tiers of an application (e.g. a Web group and a DB group joined by a policy rule set of classifiers and actions, including service chains such as a firewall)
- Plugin model, built by a community of developers across a range of companies
ACI OpenStack Integration with Group-Based Policy
1. The OpenStack tenant creates the Application Network Profile (EPG WEB, EPG APP, EPG DB with firewall and load-balancer services) through Neutron/GBP
2. Network profiles are automatically pushed to the APIC
3. The APIC creates the application policy (the ACI admin manages the physical network and monitors tenant state)
4. The tenant instantiates VMs through Nova
5. The APIC pushes the policy to the ACI fabric
You can do service chaining with ACI without any service graph
[Diagram: EPGs (10.0.0.x, 20.0.0.x) stitched through service devices using bridge domains with IP routing disabled and ARP flooding enabled; an outside router peers with the outside.]
In order not to learn outside IPs you need to use an l2ext.
With a service graph, the operational model changes
Without a service graph:
- The network admin configures the ports and VLANs to connect the FW or the LB
- The FW admin, day 0: configures ports and VLANs
- The FW admin, day 1: configures ACLs and so on
The three configurations are spread over multiple phases/days.
With a service graph:
- The ACI admin configures the ports and VLANs to connect the FW or the LB
- The FW admin, day 0: configures ports and VLANs
- The FW admin, day 1: configures ACLs and so on
All configurations are performed in a single step.
With a service graph, the operational model changes: without a service graph, the fabric and the firewall are configured separately; with a service graph, a single configuration covers both.
Device Definition and Package
The APIC requires a Device Package to communicate with service devices. A Device Package is a zip file containing two parts:
- Device specification (XML): the configuration of the APIC is represented as an object model consisting of a large number of Managed Objects (MOs). A device type is defined by a tree of MOs with a Meta Device (MDev) at the root. For example:
  <dev type="f5">
    <service type="slb">
      <param name="vip">
        <dev ident="210.1.1.1">
        <validator="ip">
        <hidden="no">
        <locked="yes">
- DeviceScript (Python): the integration between the APIC and a device is performed by a DeviceScript, which maps APIC events to function calls defined in the script (e.g. toward the iControl southbound API of a BIG-IP, physical or VE)
Configuration comes in through the UI or northbound APIs: EPG-level L4-L7 config and Service Graph function-node-level L4-L7 config.
With the Service Graph, APIC communicates with the L4-L7 device through the following APIs. Device APIs: devicevalidate(device, version); devicemodify(device, interfaces, configuration); deviceaudit(device, interfaces, configuration); devicehealth(device, interfaces, configuration); devicecounters(device, interfaces, configuration). Cluster APIs: clustermodify(device, configuration); clusteraudit(device, configuration). Service APIs: servicemodify(device, configuration); serviceaudit(device, configuration); servicehealth(device, configuration); servicecounters(device, configuration). When a concrete device is added, APIC invokes the devicevalidate API to validate the device against the specification in the device package; the device must have the appropriate software version and vendor information. Upon successful validation, APIC invokes the devicemodify API to let the script perform one-time device configuration. When a logical device is added, APIC invokes the clustermodify API to let the script perform one-time cluster configuration. When the graph is attached to a contract, or any parameter change event occurs in APIC, it invokes the servicemodify API callouts to the service device.
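The callout sequence above can be sketched as a skeleton Device Script. The function names follow the slide, but the argument shapes, return values, and the SUPPORTED version table are illustrative assumptions, not a real vendor script:

```python
# Hypothetical Device Script skeleton: APIC invokes these callbacks on
# device lifecycle events. Signatures and return values are invented.

SUPPORTED = {"Cisco": ["9.4", "9.5"]}   # assumed vendor/version data

def devicevalidate(device, version):
    # Concrete device added: check vendor and software version
    # against the specification in the device package.
    return version in SUPPORTED.get(device.get("vendor"), [])

def devicemodify(device, interfaces, configuration):
    # After successful validation: one-time device configuration.
    return {"status": "ok", "interfaces": list(interfaces)}

def clustermodify(device, configuration):
    # Logical device added: one-time cluster configuration.
    return {"status": "ok"}

def servicemodify(device, configuration):
    # Graph attached to a contract, or a parameter changed:
    # push the function-level configuration to the device.
    return {"applied": sorted(configuration)}
```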
Service Graph Definition. A service graph is an ordered set of functions between a set of terminals, e.g. a firewall function and a load-balancer function. A function has one or more connectors, and network connectivity such as a VLAN/VNID tag is assigned to these connectors. A function within a graph may require one or more parameters; parameters can be scoped by an EPG, an application profile, or a tenant context, and parameter values can be locked from further changes. Example service graph "web-application", consumed by EPG-EXT and provided to EPG-WEB, with its functions rendered on the same device: Func: Firewall (params: permit ip tcp * dest-ip <vip> dest-port 80; deny ip udp *); Func: SSL offload (params: ipaddress <vip>, port 80); Func: Load Balancing (params: virtual-ip <vip>, port 80, lb-algorithm: round-robin).
Purpose of the Service Graph. By using the service graph you can install a service, like the ASA firewall, once and deploy it multiple times in different logical topologies. Each time the graph is deployed, ACI takes care of changing the configuration on the firewall to enable forwarding in the new logical topology, dynamically provisioning VLANs and IP addresses while re-using the same graph template. The benefits of the service graph are: a configuration template that can be reused multiple times; a more logical, application-related view of services; provisioning a device that is shared across multiple departments.
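The template-reuse idea can be illustrated with a toy renderer: the graph template stays fixed while only the per-deployment data (VLANs, addresses) changes. All field names here are invented for illustration; in ACI this allocation is done by the fabric, not by the admin:

```python
# Toy model: one graph template, many deployments with different
# dynamically allocated instance data. Field names are illustrative.

GRAPH_TEMPLATE = {"function": "firewall", "mode": "routed"}

def render_deployment(template, consumer_subnet, provider_subnet, vlan):
    # ACI allocates VLANs/IPs per deployment; passed explicitly here
    # to show that only instance data changes, never the template.
    cfg = dict(template)
    cfg.update({
        "outside_vlan": vlan,
        "inside_vlan": vlan + 1,
        "outside_ip": consumer_subnet + ".5",
        "inside_ip": provider_subnet + ".5",
    })
    return cfg

# the two EPG pairs from the example use case
dep1 = render_deployment(GRAPH_TEMPLATE, "10.10.10", "20.20.20", 100)
dep2 = render_deployment(GRAPH_TEMPLATE, "30.30.30", "40.40.40", 102)
```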
Configurations with Service Graph. All configurations are performed in a single operation on the ACI fabric: the fabric configuration (bridge domains, VLANs, routing, EPGs) and the firewall configuration (VLANs, interfaces, ACLs).
To understand the service graph, consider the following topology of spine and leaf switches with an attached service device: port-channel? vPC? Separate L3 interfaces? Routed or transparent mode?
A template may define vPC-based connectivity: Template 1 (CDev1), vPC and transparent mode.
Another template may define two routed port-channels: Template 2 (CDev2), two separate port-channels and routed mode.
Which VLANs should be trunked? Which IP addresses should be used? The VLANs and IP addresses depend on which topology the firewall is associated with.
In a given tenant the graph is placed between EPGs (BD1/EPG1 and BD2/EPG2). IP addresses on the firewalls and security rules depend on the EPG and bridge domain. VLANs are dynamically allocated and derived from the physical domain; you don't need to configure them.
Example Use Case BD 1 BD 2 EPG1 EPG2 10.10.10.x 20.20.20.x BD 3 BD 4 EPG1 EPG2 30.30.30.x 40.40.40.x
Applying the Service Graph (1): the L4-L7 rules are configured (e.g. the virtual address). The device gets 10.10.10.5 in BD1 and 20.20.20.5 in BD2, between EPG1 (10.10.10.x) and EPG2 (20.20.20.x); sub-interfaces and VLANs are automatically created.
Applying the same service graph (2): the L4-L7 rules are specific to this set of EPGs. The device gets 30.30.30.5 and 40.40.40.5, between EPG1 (30.30.30.x) and EPG2 (40.40.40.x); sub-interfaces and VLANs are automatically created.
If the L4-L7 device is a virtual appliance, ACI creates shadow EPGs (port-groups): one on the consumer side and one on the provider side (here between EPG1, 30.30.30.x, and EPG2, 40.40.40.x). ACI automatically connects the vNICs (vnic1, vnic2) to the correct port-groups.
Service Configuration before the Service Graph. 1) The network admin adds client 172.18.20.13 and must call the security admin to enable access. 2) Removing client 192.168.1.1 requires no other action, so the original ASA rules never change. 3) The security admin adds 15 ACL rules for client 172.18.20.13: access-list OUT permit tcp host 172.18.20.13 host 10.1.1.1 eq 80; access-list OUT permit tcp host 172.18.20.13 host 10.1.1.1 eq 443; [...] access-list OUT permit icmp host 172.18.20.13 host 192.168.100.1. 4) The existing 30 rules (access-list OUT permit tcp host 192.168.1.1 host 10.1.1.1 eq 80; access-list OUT permit tcp host 192.168.1.1 host 10.1.1.1 eq 443; [...] access-list OUT permit icmp host 192.168.1.100 host 192.168.100.1) stay in place, growing the ACL to 45 rules, including stale entries for the removed client. Clients: 172.18.20.13, 192.168.1.1, 192.168.1.100. Servers: 10.1.1.1, 172.16.1.1, 192.168.100.1. Services: HTTP (TCP/80), HTTPS (TCP/443), DCERPC (TCP/135), SSH (TCP/22), ICMP.
Automatic endpoint addition/removal with ACI. The security admin inserts the ASA instance in the service graph once, with the desired policies; the same 5 service rules and actions apply throughout: access-list OUT permit tcp any any eq 80; access-list OUT permit tcp any any eq 443; access-list OUT permit tcp any any eq 135; access-list OUT permit tcp any any eq 22; access-list OUT permit icmp any any. When the network admin removes client 192.168.1.1 or adds client 172.18.20.13, only the EPG port membership changes (Users EPG: Leaf 1 port 1, Leaf 1 port 10, Leaf 2 port 12; Servers EPG: Leaf 3 port 2, Leaf 4 port 8, Leaf 5 port 12) and the existing ASA instance is reused. Servers: 10.1.1.1, 172.16.1.1, 192.168.100.1. Services: HTTP (TCP/80), HTTPS (TCP/443), DCERPC (TCP/135), SSH (TCP/22), ICMP.
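The scaling difference between the two slides can be modeled in a few lines: per-endpoint ACLs grow as clients times services, while EPG-based rules stay constant. The five services are the ones on the slide; everything else is illustrative:

```python
# Toy model of ACL growth: per-client ACLs vs EPG-scoped rules.

SERVICES = ["tcp/80", "tcp/443", "tcp/135", "tcp/22", "icmp"]

def acl_rules_per_client(clients):
    # traditional model: one ACL line per client per service,
    # so every added client grows the firewall configuration
    return len(clients) * len(SERVICES)

def acl_rules_with_epg(clients):
    # EPG model: the 5 generic "any any" rules never change;
    # only the EPG port membership tracks endpoint churn
    return len(SERVICES)
```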
ACI Terminology. Consumer side: the consumer EPG of the contract. Provider side: the provider EPG. The Logical Device is backed by one or more Concrete Devices, and L4-L7 parameters are attached to the graph.
Firewall or Load Balancer in Routed Mode. For consistency with the ACI policy model, both bridge domains are configured with ARP flooding, unknown unicast flooding, and no IP routing: Bridge Domain Outside (consumer side, client EPG), contract with service graph, Bridge Domain Inside (provider side, server EPG). The L4-L7 device is the default gateway for the servers.
Firewall or Load Balancer in Transparent Mode. For consistency with the ACI policy model, both bridge domains are configured with ARP flooding, unknown unicast flooding, and no IP routing: external router, Bridge Domain Outside (consumer side, client EPG), contract with service graph, Bridge Domain Inside (provider side, server EPG). The default gateway for the servers is the external router's IP.
Load Balancer in Routed Mode with Routed Outside. In the VRF, the consumer side is an L3Out (L3InstP) and a Bridge Domain Outside whose subnet is the default gateway for the L4-L7 device, operating in hardware-proxy mode. For consistency with the ACI policy model, the provider-side Bridge Domain Inside (server EPG) uses ARP flooding, unknown unicast flooding, and no IP routing. The load balancer is the default gateway for the servers.
Load Balancer in One-Arm Mode. A VRF with an L3Out (L3InstP) and SVIs on Bridge Domains 1, 2, and 3; unicast routing is enabled, one bridge domain subnet is the default gateway for the load balancer, and another bridge domain subnet is the default gateway for the servers (server EPG).
Install the Device Package on the APIC. APIC interfaces with the device using Python scripts: it calls device-specific Python script functions on various events and uses the device configuration model provided in the device package (Device Spec, XML) to pass the appropriate configuration to the device scripts (Device Script, Python/CLI). The device script handlers interface with the device using its native REST or CLI API.
Make sure that the L4L7 Device has management access Enable SSH Enable HTTP access Configure Credentials No need to configure: Interfaces VLANs IPs
Prepare the Fabric. Define a VLAN pool; define a physical domain for physical appliances or a virtual domain for virtual appliances; map them to the ports where the service devices are going to attach.
Bridge Domains Configuration: creation of the links between the L4-L7 devices. Create the bridge domains (BD1, BD2, BD3) that carry the service graph between the client EPG and the server EPG.
Bridge Domain Configuration Which configuration options are available?
Bridge Domains Configuration: setting the correct flooding and routing parameters. Bridge domains used to connect service devices are configured as follows: disable unicast routing (make sure that the fabric doesn't provide the SVI on this bridge domain); enable unknown unicast flooding; enable ARP flooding; associate the bridge domain with the VRF. The subnet definition on the bridge domain is irrelevant.
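The settings above can be sketched as the corresponding APIC XML payload. The fvBD attribute names follow the APIC object model as commonly documented (arpFlood, unicastRoute, unkMacUcastAct); verify them against your APIC version before relying on this:

```python
# Sketch of the APIC XML for a service bridge domain: routing disabled,
# unknown-unicast and ARP flooding enabled, VRF association attached.
import xml.etree.ElementTree as ET

bd = ET.Element("fvBD", {
    "name": "BD-service-1",
    "unicastRoute": "no",        # fabric does not provide the SVI
    "arpFlood": "yes",           # ARP flooding enabled
    "unkMacUcastAct": "flood",   # unknown unicast flooding
})
# association with the VRF (context), as required by the object model
ET.SubElement(bd, "fvRsCtx", {"tnFvCtxName": "VRF1"})

payload = ET.tostring(bd, encoding="unicode")
```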
Bridge Domains Configuration: establishing the relation with the VRF. The association of the bridge domains (Bridge Domain 1, Bridge Domain 2) with the VRF is required by the object model; the VRF does not do anything unless the admin enables routing on the BD and configures the subnet for the default gateway function.
Configure the Concrete Device: routed or bridged mode, physical or virtual domain, firewall credentials, and the port-groups for the provider and consumer side.
Concrete and Logical Device Definition: configure the connectivity between the L4-L7 device and ACI, e.g. a vPC on both the provider and consumer side, named Po11 (the name must use a syntax that works on the ASA).
Concrete and Logical Device Definition: on the APIC, enter the port-channel configuration for the L4-L7 device. The interface name must follow the syntax used on the firewall.
Concrete and Logical Device Definition Example of Port-Channel Configuration
Configuration of the Function(s) on the L4-L7 Device: create Function Profiles. Function Profiles are XML schemas, similar to F5 iApps; click to configure the L4-L7 service node configuration. Example Function Profiles: WebProfile, HTTPS, Application-1.
Configuration of the Function(s) on the L4-L7 Device: choose a template. Templates give you the option to choose a simple service graph based on your requirements.
Configuration of the Function/s on the L4L7 Device Deploy a Template
Configuration of the Function(s) on the L4-L7 Device: select the EPGs. Note that you cannot select an L3InstP here.
Configuration of the Function(s) on the L4-L7 Device: enter the L4-L7 parameters. The function node configuration will be pushed from this pane.
Or use Function Profiles, the XML schemas (similar to F5 iApps) described earlier, to fill in the L4-L7 service node configuration: WebProfile, HTTPS, Application-1.
Configuration of the Function(s) on the L4-L7 Device: device configuration versus function configuration. Device config: configuration that pertains to the BIG-IP partition, for example a pool that can be utilized by multiple virtual servers. Function config: configuration specific to the virtual server, like the VIP; it can also reference device config, in this case the pool.
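The device-config versus function-config split can be sketched as two structures, with the function config referencing a device-level pool by name. The keys here are invented for illustration, not actual device-package parameter names:

```python
# Sketch: device-level config is shared (e.g. a pool); function-level
# config is per virtual server and references shared objects by name.

device_config = {
    "pools": {"web-pool": ["10.0.0.11", "10.0.0.12"]},  # shared pool
}

function_config = {
    "virtual_server": {"vip": "20.0.0.5", "port": 80,
                       "pool": "web-pool"},  # references the pool
}

def resolve_pool(function_cfg, device_cfg):
    # the function config points at device-level objects by name
    name = function_cfg["virtual_server"]["pool"]
    return device_cfg["pools"][name]
```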
Configuration Checklist. These are the steps to verify in order to successfully configure the service graph: verify that you created the bridge domains that form the path between the service devices; make sure that each bridge domain has an association with a VRF; verify that the bridge domain is operating in flood-and-learn mode; make sure you created a physical or virtual domain to enable the VLANs on the leaf switches; make sure that there is a client-side and a server-side EPG; make sure that the service graph is associated with a contract connecting the client-side and server-side EPGs; make sure that you entered the mandatory parameters of the device package.
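The checklist can be expressed as a simple pre-flight validator over a description of the intended configuration; the dictionary keys and checks below are illustrative only, not an APIC data model:

```python
# Sketch: encode the service-graph checklist as named predicates over
# a hypothetical configuration dict; keys are invented for illustration.

REQUIRED = [
    ("bridge_domains", lambda c: len(c.get("bridge_domains", [])) > 0),
    ("vrf_association", lambda c: all(bd.get("vrf")
                                      for bd in c.get("bridge_domains", []))),
    ("flood_and_learn", lambda c: all(bd.get("arp_flood") and not bd.get("routing")
                                      for bd in c.get("bridge_domains", []))),
    ("domain", lambda c: c.get("domain") in ("physical", "virtual")),
    ("epgs", lambda c: {"client", "server"} <= set(c.get("epgs", []))),
    ("contract", lambda c: bool(c.get("contract"))),
]

def checklist(config):
    # returns the names of the checks that failed
    return [name for name, check in REQUIRED if not check(config)]
```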
Device Package Sourcefire
Upcoming L4-L7 device packages (partners shown as logos in the original slide): ETAs range from Q3 CY15 to Q3/Q4 CY15, with some TBD; certified APIC releases are FCS+6 for most packages and FCS+9 for one.
Complete Your Online Session Evaluation. Give us your feedback to be entered into a daily survey drawing; a daily winner will receive a $750 Amazon gift card. Complete your session surveys through the Cisco Live mobile app or on your computer via Cisco Live Connect. Don't forget: Cisco Live sessions will be available for on-demand viewing after the event at CiscoLive.com/Online.
Continue Your Education Demos in the Cisco campus Walk-in Self-Paced Labs Table Topics Meet the Engineer 1:1 meetings Related sessions
Thank you
Data Center / Virtualization Cisco Education Offerings (Course: Description, Cisco Certification).
Cisco Data Center CCIE Unified Fabric Workshop (DCXUF); Cisco Data Center CCIE Unified Computing Workshop (DCXUC): Prepare for your CCIE Data Center practical exam with hands-on lab exercises running on a dedicated comprehensive topology (CCIE Data Center).
Implementing Cisco Data Center Unified Fabric (DCUFI); Implementing Cisco Data Center Unified Computing (DCUCI): Obtain the skills to deploy complex virtualized data center fabric and computing environments with Nexus and Cisco UCS (CCNP Data Center).
Introducing Cisco Data Center Networking (DCICN); Introducing Cisco Data Center Technologies (DCICT): Learn basic data center technologies and how to build a data center infrastructure (CCNA Data Center).
Product Training Portfolio (DCAC9k, DCINX9k, DCMDS, DCUCS, DCNX1K, DCNX5K, DCNX7K): Get a deep understanding of the Cisco data center product line, including the Cisco Nexus 9K in ACI and NX-OS modes.
For more details, please visit: http://learningnetwork.cisco.com. Questions? Visit the Learning@Cisco Booth or contact ask-edu-pm-dcv@cisco.com
Cloud Cisco Education Offerings (Course: Description, Cisco Certification).
Designing the FlexPod Solution (FPDESIGN); Implementing and Administering the FlexPod Solution (FPIMPADM): Learn how to design, implement, and administer FlexPod solutions (FlexPod Design Specialist; FlexPod Implementation & Administration Specialist).
UCS Director (UCSDF): Learn how to manage physical and virtual infrastructure using the orchestration and automation functions of UCS Director.
Cisco Prime Service Catalog: Learn how to deliver data center, workplace, and application services in an on-demand, automated, and repeatable method.
Cisco Intercloud Fabric: Learn how to implement end-to-end hybrid clouds with Intercloud Fabric for Business and Intercloud Fabric for Providers.
Cisco Intelligent Automation for Cloud: Learn how to implement and manage cloud deployments with Cisco Intelligent Automation for Cloud.
For more details, please visit: http://learningnetwork.cisco.com. Questions? Visit the Learning@Cisco Booth or contact ask-edu-pm-dcv@cisco.com