Best Practices to Deploy High Availability in Wireless LAN Architectures
Kara Muessig, Mobility Consulting Systems Engineer, CCIE Wireless #29572
Agenda
- RF HA: site survey, RRM, CleanAir
- HA network design: physical layout
- HA process and configuration: failover times / fast heartbeat timer
- Software upgrades: pre-image download, scalability of AP software downloads
- FlexConnect WAN survivability
- NCS HA: Health Monitor, configuration
- MSE HA
- HA Architectures
Enterprise Wireless Evolution: From Best Effort to Mission Critical
- 7.7 billion new Wi-Fi (a/b/g/n) enabled devices will enter the market in the next five years.*
- By 2015 there will be 7.4 billion 802.11n devices in the market.*
- 1.2 billion smartphones will enter the market over the next five years, about 40% of all handset shipments.*
- Smartphone adoption is growing 50%+ annually.**
- Currently 16% of mobile data is diverted to Wi-Fi; by 2015 this number will increase to 48%.*
- By 2012, more than 50% of mobile devices will ship without wired ports.***
Market trends over time: hotspot → system management → scalable performance → self-healing and optimizing → spectrum policy
Source: *ABI Research, **IDC, ***Morgan Stanley, 2010
RF High Availability
RF HA is the ability to have redundancy at the physical layer:
- Creating a stable RF environment
- Dealing with coverage holes if an AP goes down
- Mitigating interference sources
- Creating a pervasive, predictable RF environment
Guidelines: Surveying for RF HA
- Rule of thumb: want most radios at power level 3
- Use active survey tools: AirMagnet, Ekahau, Veriwave WaveDeploy, clients and controller
- Understand WLAN technology differences and survey for the lowest common client type (802.11b/g, 802.11a, 802.11n)
- Three-dimensional radio propagation in multi-story buildings has to be taken into account
- Be aware of perimeter and corner areas; it may not be optimal to start the first survey with an AP in a corner
Analyzing Surveyor Data
- Raw surveyor data: analyze the path taken and the survey points for speed and frequency
- Analyze the survey profile details, such as the propagation assessment settings and client device power settings
- Spectrum analysis
RRM: Radio Resource Management
What are RRM's objectives?
- Dynamically balance the infrastructure and mitigate changes
- Monitor and maintain coverage for all clients
- Manage spectrum efficiency so as to provide optimal throughput under changing conditions
What RRM does not do:
- Substitute for a site survey
- Correct an incorrectly architected network
- Manufacture spectrum
How Does RRM Do This?
DCA (Dynamic Channel Assignment): each AP radio gets a transmit channel assigned to it. Changes in air quality are monitored, and the AP channel assignment is changed when deemed appropriate (based on the DCA cost function).
TPC (Transmit Power Control): Tx power assignment based on radio-to-radio path loss. TPC is in charge of reducing Tx on some APs, but may also increase Tx by defaulting back to a power level higher than the current Tx level.
CHDM (Coverage Hole Detection and Mitigation): detects clients in coverage holes and decides on a Tx adjustment (typically a Tx increase) on certain APs, based on the (in)adequacy of estimated downlink client coverage.
RF Profiles - Overview
RF Profiles allow the administrator to tune groups of APs sharing a common coverage zone together, selectively changing how RRM will operate the APs within that coverage zone.
- RF Profiles are created for either the 2.4 GHz or 5 GHz radio
- Profiles are applied to groups of APs belonging to an AP Group; all APs in the group will have the same profile settings
There are two components to this feature:
- RF Groups: existing capability; no impact on channel selection algorithms
- RF Profile: new in 7.2, providing administrative control over min/max TPC values, the TPCv1 threshold, the TPCv2 threshold, and data rates
CleanAir: Self-Healing and Optimizing Spectrum Policy
Before: wireless interference decreases reliability and performance. After: CleanAir mitigates RF interference, improving reliability and performance.
CleanAir is a spectrum intelligence solution designed to proactively manage the challenges of a shared wireless spectrum:
- Who, what, when, where, and how of interference
- Enables the network to act upon this information
Why CleanAir? The industry's only in-line high-resolution spectrum analyzer.
- Typical Wi-Fi chipset (5 MHz spectral resolution): identification is fuzzy, a best guess ("Microwave oven? Bluetooth?"); limited ability to differentiate devices; devices are lost in the noise
- Cisco CleanAir Wi-Fi chipset (156 kHz spectral resolution): 32 times the visibility of a typical Wi-Fi chip; accurate classification; multiple device recognition
Chip view: visualization of microwave oven and Bluetooth interference.
ClientLink: Reduced Coverage Holes, Higher PHY Data Rates
With ClientLink disabled, clients fall to lower data rates; with ClientLink enabled, clients achieve higher data rates.
Source: Miercom; AirMagnet/Fluke iperf survey
Campus Design for High Availability
Resiliency through structure, modularity, and hierarchy.
(Diagram: access, distribution, and core layers, with WLC-1/WLC-2 and Anchor WLC-1/Anchor WLC-2 placed between the data center, WAN, and Internet.)
Campus Design: Resiliency through Structure, Modularity, and Hierarchy
Not this! (Diagram: a flat, unstructured topology with WLC-1, WLC-2, server farm, WAN, Internet, and PSTN all meshed together.)
HA Design
Create redundancy throughout the access layer by homing APs into different switches across the access, distribution, and core layers.
Controller Redundancy: Dynamic
Rely on CAPWAP to load-balance APs across controllers and populate APs with backup controllers. This results in a dynamic salt-and-pepper design, which works better when controllers are clustered in a centralized design.
Pros:
- Easy to deploy and configure; less upfront work
- APs dynamically load-balance (though never perfectly)
Cons:
- More intercontroller roaming
- Bigger operational challenges due to unpredictability
- Longer failover times
- No fallback option in the event of controller failure
Cisco's general recommendation: use this only for Layer 2 roaming; use deterministic redundancy instead of dynamic redundancy.
Controller Redundancy: Deterministic
The administrator statically assigns each AP a primary, secondary, and/or tertiary controller, e.g.:
- Primary: WLAN-Controller-A; Secondary: WLAN-Controller-B; Tertiary: WLAN-Controller-C
- Primary: WLAN-Controller-B; Secondary: WLAN-Controller-C; Tertiary: WLAN-Controller-A
- Primary: WLAN-Controller-C; Secondary: WLAN-Controller-A; Tertiary: WLAN-Controller-B
Assigned from the controller interface (per AP) or WCS (template-based).
Pros:
- Predictability and easier operational management
- More network stability
- More flexible and powerful redundancy design options
- Faster failover times
- Fallback option in the case of failover
Con:
- More upfront planning and configuration
This is Cisco's recommended best practice.
Controller Redundancy: Most Common (N+1)
A redundant WLC is placed in a geographically separate location (NOC or data center), with Layer 3 connectivity between the APs joined to the primary WLCs and the redundant WLC.
- APs on WLAN-Controller-1 are configured with Primary: WLAN-Controller-1, Secondary: WLAN-Controller-BKP (and likewise for WLAN-Controller-2 through WLAN-Controller-n)
- The redundant WLC need not be part of the same mobility group
- Configure high availability (HA) to detect failure and fail over faster
- Use AP priority in case of oversubscription of the redundant WLC
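As a sketch of this N+1 assignment on the AireOS CLI (the controller names follow the slide; the AP name and IP addresses are placeholders, and exact syntax may vary slightly by release):

```
(WLC) >config ap primary-base WLAN-Controller-1 AP0022.bd18.ab11 10.10.10.5
(WLC) >config ap secondary-base WLAN-Controller-BKP AP0022.bd18.ab11 10.20.20.5
(WLC) >show ap config general AP0022.bd18.ab11
```

The same assignment can be pushed at scale from WCS templates; supplying the optional IP address lets the AP reach a backup controller that is not in its mobility group.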
Controller Redundancy: Disaster Recovery (N+N)
For every active primary controller there is a standby redundant controller.
- Redundant WLCs are in a geographically separate location
- APs may or may not be load-balanced
- Layer 3 connectivity between the APs joined to the primary WLC and the redundant WLC
- Configure high availability (HA) to detect failure and fail over faster
High Availability Using Cisco 5508: Hardware Failure of the WLC 5508
APs are connected to the primary WLC 5508. In case of hardware failure of the primary WLC 5508, APs fall back to the secondary WLC 5508, and traffic flows through the secondary WLC 5508 and the primary core switch.
High Availability Using WiSM-2: Uplink Failure on the Primary Switch
In case of an uplink failure on the primary switch (the active HSRP switch), the standby switch becomes the new active HSRP switch. APs are still connected to the primary WiSM-2, and traffic flows through the new HSRP active switch.
High Availability Using WiSM-2: Hardware Failure of the WiSM-2
APs are connected to the primary WiSM-2. In case of hardware failure of the primary WiSM-2, APs fall back to the secondary WiSM-2, and traffic flows through the secondary WiSM-2 and the primary core switch.
Redundancy Using VSS and Cisco 5508
A Cisco 5508 WLC can be attached to a Cisco Catalyst VSS switch pair: 4 ports of the Cisco 5508 are connected to the active VSS switch, and the second set of 4 ports is connected to the standby VSS switch. In case of failure of the primary switch, traffic continues to flow through the secondary switch in the VSS pair.
Core Options: 6500 VSS with L2 Access, Nexus with L3 Access
- Catalyst 6500 VSS (Layer 2 to the access layer): dual physical links appear logically as a single link; single configuration; Multi-Chassis EtherChannel load balancing; more extensive virtualization capabilities
- Nexus 7000 (Layer 3 to the access layer): higher 10 Gigabit capacity; Equal Cost Multipath load balancing
(Diagram: access and core/distribution layers, data center, authentication and wireless services, dual ISPs.)
Mobility Group (Ethernet-in-IP tunnel)
Best practices to configure mobility groups for deterministic failover:
- A mobility group allows controllers to peer with each other to support seamless roaming across controller boundaries (CCKM / 802.11r)
- Support for up to 24 controllers and 24,000 APs per mobility group; roaming is supported across mobility groups within the mobility domain, up to 72 controllers
- With Inter-Release Controller Mobility (IRCM), roaming is supported between the 4.2.207, 6.0.188, 7.0, and 7.2 code releases
- APs learn the IPs of the other members of the mobility group after the CAPWAP join process
- If possible, place the controllers so that they are L2 adjacent in the mobility group rather than L3, to improve roaming capabilities
Example: Controller-A/B/C (MACs AA:AA:AA:AA:AA:01/02/03) are all in mobility group MyMobilityGroup, and each lists the other two as mobility group neighbors.
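As a hedged CLI sketch of the peering shown above (the MAC addresses are the placeholder values from the diagram; the IP addresses are invented for illustration), run on Controller-A:

```
(Controller-A) >config mobility group domain MyMobilityGroup
(Controller-A) >config mobility group member add AA:AA:AA:AA:AA:02 10.10.20.2
(Controller-A) >config mobility group member add AA:AA:AA:AA:AA:03 10.10.30.2
(Controller-A) >show mobility summary
```

Each controller must list its peers the same way, and the mobility group name is case sensitive.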
Mobility Group Config
The Mobility Group Members > Edit All page lists the MAC address, IP address, and mobility group name of all the controllers currently in the mobility group. The controllers are listed one per line, with the local controller at the top of the list.
Note that the MAC address corresponds to the virtual interface's MAC address.
AP Failover: Understanding the CAPWAP State Machine
AP boots up → Discovery → DTLS Setup → Join → Image Data → Config → Run (a Reset returns the AP to Discovery).
AP Failover: High Availability Principles
- The AP is registered with a WLC and maintains a backup list of WLCs
- The AP uses heartbeats to validate WLC connectivity
- The AP uses Primary Discovery messages to validate its backup WLC list
- When the AP loses 3 heartbeats, it starts the join process to the first backup WLC candidate
- The candidate backup WLC is the first alive WLC in this order: primary, secondary, tertiary, global primary, global secondary
- The AP does not re-initiate the discovery process
AP Failover: Backup Controller
If there are no primary/secondary/tertiary WLCs configured on the AP, backup controllers can be configured under High Availability. The backup controllers are added to the AP's primary discovery request message recipient list.
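A minimal sketch of this global backup configuration on the AireOS CLI (the controller names and IP addresses are placeholders):

```
(WLC) >config advanced backup-controller primary WLAN-Controller-BKP 10.20.20.5
(WLC) >config advanced backup-controller secondary WLAN-Controller-BKP2 10.20.20.6
```

These global entries apply to APs joined to this WLC that have no per-AP primary/secondary/tertiary controller configured.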
AP Failover: Failover Priority
Assign priorities to APs: Critical, High, Medium, or Low.
- Critical-priority APs get precedence over all other APs when joining a controller
- In a failover situation, a higher-priority AP will be allowed in ahead of all other APs
- If the controller is full, existing lower-priority APs will be dropped to accommodate higher-priority APs (e.g. a critical AP fails over and a medium-priority AP is dropped)
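A sketch of the failover priority configuration on the CLI (the AP names are placeholders; priority values run from 1 = low to 4 = critical). The feature must be enabled globally before per-AP priorities take effect:

```
(WLC) >config network ap-priority enable
(WLC) >config ap priority 4 AP-Lobby-Critical
(WLC) >config ap priority 2 AP-Cafeteria
```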
AP Failover: Fast Heartbeat Interval
To reduce the amount of time it takes to detect a controller failure, you can configure the fast heartbeat interval with smaller timeout values. When the fast heartbeat timer expires and no packets have been received from the controller, the AP sends a fast echo request to the WLC.
In the event of WLC failover, the AP selects an available controller from its backup controller list in the order primary, secondary, tertiary, primary backup controller, secondary backup controller. It sends a Join Request directly to this selected backup controller without going back to the discovery process.
You can configure the fast heartbeat timer only for access points in local and FlexConnect modes:
config advanced timers ap-fast-heartbeat {local | hreap | all} {enable | disable} interval {1-10 seconds}
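For example, to enable a 1-second fast heartbeat for local-mode APs using the command above, and then verify the configured timers:

```
(WLC) >config advanced timers ap-fast-heartbeat local enable 1
(WLC) >show advanced timers
```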
AP Failover: AP Primary Discovery Request Timer
The access point maintains a list of backup controllers and periodically sends primary discovery requests to each entry on the list. Prior to 5.0 this request interval was static at 30 seconds. You can configure a primary discovery request timer to specify the amount of time that a controller has to respond to the discovery request. This allows the primary discovery request to have a different default than the echo request (two minutes), and it is configurable:
config advanced timers ap-primary-discovery-timeout interval {30-3600}
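For example, to lower the primary discovery interval to 60 seconds so that failed-over APs notice a recovered primary WLC sooner (a sketch; verify the exact argument form against your release's command reference):

```
(WLC) >config advanced timers ap-primary-discovery-timeout 60
```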
AP Failover Times (new timers in 7.2)
- Heartbeat timeout: 1-30 secs
- Fast heartbeat timer: 1-10 secs
- AP retransmit interval: 2-5 secs
- AP retransmits with fast heartbeat enabled: 3-8 times
- AP fallback to next WLC: 12 secs
Measured AP failover times with fast heartbeat:
- WiSM-2: 3:19 min
- 5508: 1:00 min
- 7500: 3:46 min
- 2500: 1:04 min
Differences in times are due to processors, cores, and code versions.
AP Pre-image Download: AP Joins without Download
Since most CAPWAP APs can download and keep more than one image of 4-5 MB each, AP pre-image download allows an AP to download code while it is operational.
Pre-image download operation:
1. Upgrade the image on the controller
2. Do not reboot the controller
3. Issue the AP pre-image download command
4. Wait until all AP images are downloaded
5. Reboot the controller
6. The APs swap to the pre-downloaded image and re-join the controller without downloading it again
Configuring Pre-image Download
Upgrade the image on the controller and do not reboot.
Configure AP Pre-image Download
Perform a primary image predownload on the AP (Wireless > AP > Global Configuration). The AP starts predownloading, then swaps images after the controller reboots.
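The same predownload flow can be driven from the CLI (a sketch; wait until show ap image reports the predownload as complete before rebooting the controller):

```
(WLC) >config ap image predownload primary all
(WLC) >show ap image all
(WLC) >reset system
```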
Software Updates: Scheduling AP Pre-image Download with NCS
- Provides the option to schedule the image download to APs
- The reboot can be scheduled at a future date/time
- An email notification can be sent after the download completes
Software Update Scalability
When you upgrade the controller's software, the software on the controller's associated access points is also automatically upgraded. When an access point is loading software, each of its LEDs blinks in succession.
- WiSM-2: 500 simultaneous AP software upgrades
- 5508: 500 simultaneous AP software upgrades (7.0)
- 7500: 500 simultaneous AP software upgrades
- 2500: 50 simultaneous AP software upgrades (7.0)
In 6.0, 100 simultaneous AP software upgrades.
FlexConnect (H-REAP): Hybrid Architecture
Single management and control point: a central-site cluster of WLCs manages remote-office APs across the WAN.
Data traffic switching is either centralized (split MAC) or local (local MAC):
- Centralized traffic tunnels through the WLC; local traffic is switched at the remote office
- HA will preserve local traffic only
- Traffic switching is configured per AP and per WLAN (SSID)
FlexConnect Backup Scenario: WAN Failure
FlexConnect falls back to locally switched mode:
- No impact on locally switched SSIDs
- Clients on centrally switched SSIDs are disconnected
- Static authentication keys are stored locally in the FlexConnect AP
- Lost features: RRM, wIDS, location, other AP modes, web authentication, NAC
FlexConnect Backup Scenario: WLC Failure
FlexConnect first falls back to locally switched mode:
- No impact on locally switched SSIDs
- Clients on centrally switched SSIDs are disconnected
- CCKM roaming is allowed within the FlexConnect group
The FlexConnect AP then searches for a backup WLC; when one is found, the AP resyncs with it and resumes client sessions with central traffic. Client sessions with local traffic are not impacted during the resync with the backup WLC.
FlexConnect Group: Local Backup RADIUS, Backup Scenario
- Normal authentication is done centrally (central RADIUS)
- On WAN failure, the AP authenticates new clients against a locally defined backup RADIUS server at the remote site
- Existing connected clients stay connected
- Clients can roam with CCKM fast roaming, or re-authentication
H-REAP Group: Local Backup RADIUS Configuration
Define a primary and secondary local backup RADIUS server per H-REAP group.
Local Authentication
By default a FlexConnect AP authenticates clients through the central controller. Local authentication (new in 7.0.116) allows use of a local RADIUS server directly from the FlexConnect AP at the remote site.
Local Authentication Configuration
FlexConnect Group: Local Backup Authentication, Backup Scenario
- Normal authentication is done centrally (central RADIUS)
- On WAN failure, the AP authenticates new clients against its local database; each FlexConnect AP has a copy of the local user DB
- Existing authenticated clients stay connected
- Clients can roam with CCKM fast roaming, or local re-authentication
- Only LEAP and EAP-FAST are supported
FlexConnect Group: Local Backup Authentication Configuration
1. Define users (max 100) and passwords
2. Define EAP parameters (LEAP or EAP-FAST)
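A hedged CLI sketch of these two steps in the 7.0/7.2 syntax, where FlexConnect groups are still named hreap (the group name, username, and password are placeholders):

```
(WLC) >config hreap group Branch-1 add
(WLC) >config hreap group Branch-1 radius ap enable
(WLC) >config hreap group Branch-1 radius ap user add jdoe password Sample123
(WLC) >config hreap group Branch-1 radius ap leap enable
```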
FlexConnect Backup Scenario: WAN Down Behavior (Boot-up in Standalone Mode)
- Centrally switched WLANs will shut down
- Web-auth WLANs will shut down
- Locally switched WLANs will stay up: only Open, Shared, and WPA-PSK are allowed; local 802.1X is allowed with local authentication or a local RADIUS server
- Unsupported features: RRM, CCKM, wIDS, location, other AP modes, NAC
FlexConnect Design Considerations: Feature Limitations Apply
Some features are not available in standalone mode or in local switching mode:
- Local controller web auth in standalone mode
- Mesh AP
- WGB and universal WGB
- VideoStream
- IPv6 L3 mobility
- SXP TrustSec
- QoS override
See the full list in the H-REAP Feature Matrix: http://www.cisco.com/en/us/products/ps6366/products_tech_note09186a0080b3690b.shtml
Not Supported Backup Scenario: AP Changing Mode on Failure
An AP cannot automatically change from local mode to FlexConnect mode on local WLC failure; changing mode is a configuration task on the AP.
Why it does not make sense:
- It would need a dual configuration at the switch level (access port for central, 802.1Q trunk for FlexConnect)
- Controller features are lost when going to FlexConnect
- If FlexConnect is acceptable locally, then don't buy a local WLC
Not Supported Backup Scenario: Auto-Enabling Backup Local Switching
A FlexConnect AP cannot be configured with two SSIDs with the same name, one in central switching mode and one in local switching mode, so that the locally switched SSID becomes active when central switching is down. Changing the enable status of an SSID is a configuration task at the WLC level.
Cisco recommends using local switching. Why? Fault tolerance will always keep the client connection up.
Network Control System High Availability
- NCS runs in an active/standby (1:1) mode; the secondary NCS is not accessible
- Requires the same hardware and software: physical-to-physical and virtual-to-virtual are supported
- No database loss when failover occurs
- Failover can be automatic or manual: if the standby NCS misses 3 heartbeats (2-second timeout), either the standby NCS becomes active or an email is sent to the network admin
- Failback is always manual
- No extra licenses are required
NCS HA Health Monitor
The Health Monitor (HM) is a process implemented in NCS that is the primary component managing the high availability operation of the system. It displays valuable logging and troubleshooting information.
To reach the Health Monitor, browse to port 8082 on the secondary NCS: https://<secondary NCS IP address>:8082
Note: if you navigate to the primary's port 8082 you will not be able to log in, as the Health Monitor is only available on the secondary NCS.
NCS Failover Operation
The HM detects a failure (3 missed heartbeats, 2-second timeout) and a critical alarm is sent to the admin.
- Manual failover: the admin logs into the secondary NCS to fail over the system, then configures DNS to point to the failover NCS
- Automatic failover: the application on the secondary NCS is started immediately; the secondary NCS updates all controllers with its own address as the trap destination; the admin configures DNS to point to the failover NCS
The failback process is always initiated manually to avoid flapping, a condition that can sometimes occur when there are network connectivity problems.
NCS HA: Configuration of the HA Feature
The first step is to install and configure the secondary NCS; when configuring the primary NCS for HA, the secondary NCS must be installed and reachable by the primary NCS.
The following parameters must be configured on the primary NCS:
- Name/IP address of the secondary NCS
- Email address of the network administrator for system notification
- Manual or automatic failover option
The secondary NCS must always be a new installation, and this option must be selected during the NCS install process; i.e., a standalone or primary NCS cannot be converted to a secondary NCS. A standalone NCS can be converted to an HA primary.
NCS HA Configuration (cont.)
Verify that the configuration is complete on the HA Status tab. After the initial deployment of NCS, the entire configuration of the primary NCS is replicated to the host of the secondary NCS. This process can be time consuming and take up to half an hour to run. After the database is replicated, only the delta of changes is pushed over to the secondary NCS.
Mobility Services Engine (MSE) High Availability
- A heartbeat is maintained between the primary and secondary MSE
- When the primary MSE fails and the secondary takes over, the virtual address of the primary MSE is switched over transparently
- No HA license or second set of client/wIPS licenses is required
- Supports 1:1 and 2:1 configurations (two primaries can be backed up by one secondary)
- HA is supported for all services; failover times are under 1 minute
- HA supports network-connected and direct-connected deployments; a direct cable connection can help reduce latencies in heartbeat response times, data replication, and failure detection
Example addressing: primary MSE virtual IP 10.10.10.11, Eth0 10.10.10.12; secondary MSE Eth0 10.10.10.13; both reachable from the WLCs, NCS, and 3rd-party systems.
MSE HA Deployment Considerations
- Only MSE Layer 2 redundancy is supported: both the health monitor IP and virtual IP must be on the same subnet and accessible from the Network Control System (NCS); Layer 3 redundancy is not supported
- Supports automatic and manual failover/failback
- Physical-to-physical and virtual-to-virtual HA are supported
- Every active primary MSE is backed up by another inactive instance; the secondary MSE becomes active only after the failover procedure (manual or automatic) is initiated
MSE HA Configuration
Additional configuration is required for HA:
- Select the HA mode in the startup script
- Define the secondary's name and IP address
MSE HA Verification
The status shows active under the HA configuration once the sync is complete.
HA Architecture: Standalone Distribution (Routing, HSRP, STP)
(Diagram: access points connect to access switches on VLANs 10, 11, 12; controllers dual-home to the distribution and auxiliary switches on VLANs 20, 21, 22; the MSEs peer with the controllers over NMSP and with the Network Control System over SOAP/XML/SNMP in the data centre.)
- Extremely resilient
- Rapid reconvergence on link loss due to extensive use of EtherChannel
- Option in the auxiliary switch for dual supervisors for improved availability
HA Architecture: VSS Distribution Pair
(Diagram: same layout, with the distribution switches as a VSS pair.)
- Option to use VSS for even greater resiliency, as well as a simplified design
- Rapid reconvergence on link loss due to extensive use of EtherChannel
- Option to eliminate the auxiliary switches in this design, as the controllers are dual-homed to the VSS switch pair
HA Architecture: Guest Access with Anchor Controllers
Option showing the use of anchor controllers for guest SSIDs: guest WLANs are configured with auto-anchor, and the foreign controllers build EoIP tunnels to redundant anchor controllers at the Internet edge, where the guest DHCP/DNS servers reside.
Complete Your Online Session Evaluation
- Give us your feedback and you could win fabulous prizes; winners are announced daily
- Receive 20 Passport points for each session evaluation you complete
- Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center
- Don't forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year; activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com
Final Thoughts
- Get hands-on experience with the Walk-in Labs located in World of Solutions, booth 1042
- Come see demos of many key solutions and products in the main Cisco booth 2924
- Visit www.ciscolive365.com after the event for updated PDFs, on-demand session videos, networking, and more!
- Follow Cisco Live! on social media: Facebook: https://www.facebook.com/ciscoliveus; Twitter: https://twitter.com/#!/ciscolive; LinkedIn Group: http://linkd.in/ciscoli