Preserve IP Addresses During Data Center Migration


White Paper

Preserve IP Addresses During Data Center Migration: Configure Cisco Locator/ID Separation Protocol and Cisco ASR 1000 Series Aggregation Services Routers

© 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

Contents

Executive Summary
Cisco Services
Introduction
Benefits of Keeping the Server IP Address During Data Center Migrations
  Existing Challenges
  Benefits of LISP for Data Center Migrations
  Layer 3-Based Data Center Migration
  LISP and OTV Coexistence
Scope
Terminology
Solution Overview
Solution Considerations
  Transparent Integration into Existing Production Network
  Map Server and Map Resolver Placement
  Transport Network
  Routing Considerations
  LISP Routing through Stretched Subnets
  No Changes to Virtual or Physical Servers
  Traffic Symmetry: Firewall and Load-Balancer Considerations
  Selecting the Cisco ASR 1000 Series Option
  Scalability and Latency
  Using a Non-LISP Device as the Default Gateway in the New Data Center
Deploying LISP on Cisco ASR 1000 Series for Data Center Migration
  Implementation Details for the Brownfield (Source) Data Center
  Detection of Source Data Center EIDs with Cisco Embedded Event Manager Script
  Implementation Details for the Greenfield (New) Data Center
  Connectivity between the Cisco ASR 1000 Series LISP Routers
  Discovery of Servers and Registration with the Map Servers
  Traffic Flows
  Configuring the Source Site Cisco ASR 1000 Series Router as a LISP PxTR
  Configuring the Destination Site Cisco ASR 1000 Series Router as a LISP xTR, Map Server, and Map Resolver
  Multicast Configuration for LISP Map-Notify Messages in a Multihomed Environment
Configuration Examples
  Source Data Center Cisco ASR 1000 Routers
  Detecting EIDs with EEM Scripts
  Destination Data Center Cisco ASR 1000 Routers
Verification and Traffic Flows for the Stages of Migration
  Initial State: LISP Routers Have Been Implemented, but Servers Have Not Migrated
    Verify That the PxTRs Have Detected the Dynamic EIDs (Servers Still in Source Data Center)
    Verify That the Dynamic EIDs Detected in the Source Data Center Have Registered with the Map Servers
  Midmigration State: Some Servers Have Migrated to the Destination Data Center
    Verify That the xTR-MSMRs Have Detected the Dynamic EIDs (Servers That Have Migrated)
    Verify That the Dynamic EIDs' EID-to-RLOC Mappings Have Been Updated on the Map Servers for Servers That Have Migrated
    Generate Traffic between a Migrated Server and a Server Still in the Source Data Center
    Generate Traffic between a Migrated Server and the WAN
  End-of-Migration State: All Servers Have Migrated to the Destination Data Center
    Verify That the xTR-MSMRs Have Detected the Dynamic EIDs (Servers That Have Migrated)
    Verify That the Dynamic EIDs' EID-to-RLOC Mappings Have Been Updated on the Map Servers for Servers That Have Migrated
End-of-Migration Steps to Decommission LISP Routers
  Step 1: Move the Default Gateway for Servers to the New Data Center Switches
    Step 1.1: Add VLAN 4000 Interfaces on the Aggregation Switch in the New Data Center
    Step 1.2: After Verifying That the Destination Data Center Switches Are in the Listen State for the Same HSRP Group as the xTR-MSMRs, Increase the HSRP Priority of the Switches So That They Preempt the xTR-MSMRs and Become HSRP Active and Standby (Note: Steps 1.2 and 1.3 Need to Be Done in Quick Succession)
    Step 1.3: Remove the HSRP Configuration on the xTR-MSMRs for the VLAN
  Step 2: Advertise the Route for the Migrated Subnet from the Destination Data Center
    Step 2.1: Advertise the Subnet to the WAN from the Destination Data Center
    Step 2.2: Shut Down the Subnet's VLAN Interface on the Aggregation Switches in the Source Data Center
    Step 2.3: Stop Advertising the Subnet to the WAN from the Source Data Center
Failure and Recovery Analysis
  Using Route Watch to Track the Reachability of Remote ETRs
  HSRP Tracking to Force Failover When ITR Loses Both Routes to Remote ETRs
  EEM Scripts to Force Failover of Incoming LISP Traffic When the Internal Interface on the Primary ETR Goes Down
  Failover of Traffic between a Server in the Destination Data Center and a WAN Site Not Enabled for LISP
Optional Deployment Variations
  Environments with Multiple VRF Instances
  LISP Coexistence with OTV
  Nonredundant Deployments
  Secondary IP Addressing
Support for Stateful Devices
  Firewall Placement between Aggregation and Core Layers
  Firewall Placement between Servers and Aggregation Layer
Steps for Troubleshooting LISP Mobility
  Step 1: Verify the Underlay Routing
  Step 2: Verify the LISP Control Plane
    Step 2.1: Verify That the ETR Has Detected the Dynamic EIDs
    Step 2.2: Verify That the ETR Has Registered the EIDs with the Map Server
    Step 2.3: Verify That the ITR Adds an Entry in Its Map Cache When Traffic Is Sent to the Destination EID
  Step 3: Verify the LISP Data Plane
    Step 3.1: Check the Hardware Forwarding Table (Cisco Express Forwarding [CEF] Table)
    Step 3.2: Verify That the ITR Is LISP-Encapsulating the Data Packets
    Step 3.3: Verify That the ETR Is Receiving the LISP-Encapsulated Data Packets
Conclusion
Appendix: Overview of LISP
  Locator/Identifier Separation Protocol
  LISP Basic Concepts
For More Information

Executive Summary

Data center migration projects are usually complex and involve careful planning and coordination among multiple teams, including the application, server, network, storage, and facilities teams. It's common to hear that a data center migration project is taking longer than expected, and that the need for coordination between the network and application or server teams was one of the reasons for the delay. The traditional ways of performing data center migrations have limitations that often lead to delays and high costs:

- Application migration with IP address change: This type of migration is expensive and difficult to perform because it requires full documentation of existing application interactions, and the IP address change adds considerable complexity to the migration. Many applications, particularly traditional applications, still have IP addresses hard-coded into the application, so changing their IP addresses requires a rewrite of the application, which is often expensive and carries high risk.

- Physical migration of the server network, by repatching or rebuilding the source (existing) network as an extension of the target (new) data center: This type of migration is complex because it requires you to collect a large volume of data before the project starts. In addition, in some cases, conflicting VLAN numbers between the source and target networks add to the complexity. This approach usually involves moving existing servers, which prevents initiatives such as server virtualization from occurring together with the data center migration project and so requires additional follow-on projects.

- Migration of the entire data center: This type of migration is usually not viable because it requires you to collect a large volume of data before the project starts, it carries high risk, and it requires an outage of the entire data center during the migration event.
This approach often requires a large workforce.

A better solution for data center migrations is one that allows customers to migrate servers with IP mobility and that also removes the affinity-group constraints of traditional approaches: that is, one that lets you move a server (physical or virtual) and keep its IP address, subnet mask, default gateway, and host name. Such a solution also lets you decouple the server migration process and schedule from network constraints.

This document describes a new solution for data center migration. It uses Cisco Locator/ID Separation Protocol (LISP) technology (RFC 6830) running on Cisco ASR 1000 Series Aggregation Services Routers, and it simplifies and accelerates data center migration. Here are some of the benefits this solution delivers:

- You can decouple the server migration activities (planning, affinity-group migration, schedules, cutovers, etc.) from network constraints.
- The solution provides IP address mobility: the IP address, subnet mask, default gateway, and host name of migrated servers do not need to change.
- You can implement migration in small groupings, enabling even a single-server migration (if required) or the migration of a group of servers.
- The hardware cost is lower than for alternative solutions, with just four devices required.

Customers who have adopted this solution were able to perform a data center migration more quickly and with much lower risk. In fact, some customers expect to reduce the migration window by up to 95 percent.

Cisco Services

The Cisco Services team is available to assist with the planning, design, deployment, support, and optimization of the solution described in this document. Effective design is essential to reducing risk, delays, and the total cost of network deployments. Cisco Services can deliver a high-level design or an implementation-ready detailed design for the solution described in this document.

Integration of devices and new capabilities without compromising network availability or performance is critical to success. Cisco Deployment Services can help by providing expert assistance to develop implementation plans and to install, configure, and integrate the solution described here into your production network. Services include project management, post-implementation support, and ongoing knowledge transfer. After deployment, Cisco Optimization Services can help ensure that your deployment continues to run optimally.

Cisco Services for the solution described here are available globally and can be purchased from Cisco and Cisco Certified Partners. Service delivery details may vary by region and may differ depending on the service options that you choose. For more information about Cisco Services, visit the Cisco website.

Introduction

This document explains a network solution based on Cisco Locator/ID Separation Protocol (LISP) technology (RFC 6830) that helps you accelerate data center migration projects and make them run more smoothly. The target of this solution is the scenario in which a customer has a source environment, usually an existing data center, and a destination environment, usually a target data center, and is trying to migrate servers from the source to the destination data center. One common requirement in this scenario is the capability to migrate the servers without making any changes to their IP configuration. In particular, server administrators want to avoid changing the server IP address, subnet mask, and default gateway settings.
You can use LISP to address this requirement (Figure 1): a server is moved to the destination data center while the other servers in the same subnet remain active in the source data center. No changes are required to the server's IP settings.

Figure 1. Nonintrusive Insertion of LISP Data Center Migration Routers

LISP separates location and identity, thus allowing the migration or creation of new servers in the target data center with the same IP address (the identity of the server), subnet mask, and default gateway configuration that the server used in the source data center. The details of how the solution works are described in other sections of this document. Essentially, though, when a server is moved, the LISP router updates in its database the endpoint ID-to-routing locator (EID-to-RLOC) mapping of the server to reflect the new location. In this example, that location is the target data center. No changes are required to the end systems, users, or servers, because LISP handles the mapping between identity (server IP address) and location (source or target data center) invisibly to the users trying to reach the server.

LISP operates as an overlay, encapsulating the original packets to and from the server in a User Datagram Protocol (UDP) packet with an additional outer IPv4 or IPv6 header, which holds the source and destination RLOCs. This encapsulation allows the server in the target data center to use the same IP address that it had in the source data center, removing the constraint of having a subnet present in only a single location. The LISP-encapsulated packets can cross multiple Layer 3 boundaries and hops.

Another important property of LISP that is relevant to safer, lower-risk data center migration is IP portability. LISP enables IP portability by routing (Layer 3) to the location where the server is, providing total isolation of the broadcast (Layer 2) domains between the source and destination data centers. Sites not enabled for LISP communicate with the servers moved to the target data center through the source data center, where LISP is deployed. The solution documented here does not require LISP to be enabled globally.
Instead, the solution is deployed by enabling LISP in just the source and destination data centers, with little impact on the operations of either site. The optional deployment of LISP at remote sites (branch offices) provides data-path optimization, because the LISP-encapsulated traffic is then routed directly to the data center in which the target server is located.

In the solution documented here, the IP mobility service (enabled by LISP) is provided in the network by a Cisco ASR 1000 Series router and supports all kinds of servers, physical or virtual. For virtual servers, the solution supports any hypervisor. The ASR 1000 Series router that supports LISP and that is deployed in the source data center does not need to be the default gateway for the local servers (physical and virtual machines). Therefore, this solution can be introduced nondisruptively into an existing production environment.

In summary, the goal of the solution explained here is to allow a customer to move servers (physical or virtual) between data centers while keeping their IP address, subnet mask, and default gateway, and without requiring changes to firewall rules and other access lists.

The next sections discuss the needs and benefits that LISP addresses for data center migration. The document then positions LISP as the technology for enabling IP mobility without the need to extend Layer 2 between data centers. This document also provides a solution overview and steps for deploying this solution, and it discusses failure scenarios and convergence time.

Benefits of Keeping the Server IP Address During Data Center Migrations

Server and application teams typically prefer to keep the same server IP address during data center migrations. The LISP solution makes this possible.

Existing Challenges

Without a solution that provides IP mobility, moving a server or application to a new data center requires changes to the IP address, default gateway, and host name. In this case, it takes longer to start moving servers because data must be collected, and application interfaces, interactions, and traffic flows must be documented. This approach also is complex, with the need to maintain interfaces and flows and to monitor the effects of the migration even on systems that are not moved or migrated.

Figure 2 shows that a change in the address affects several communication flows. Domain Name System (DNS) updates may not always help with existing applications if the IP address is hard-coded. Local and remote applications may need to be modified, as may firewall rules. The risk associated with such changes can be significant.

Figure 2. Challenges with Server Migrations

Another consideration when migrating servers is that some servers have multiple interfaces. For example, a server may have an interface for production traffic, another for management, and another for backup and restore operations, as illustrated in Figure 3. Without IP mobility, if you do not want to change the IP addresses of the servers, you must move all the servers in a subnet together. However, as illustrated in the figure, this means that you must move a large number of servers together, making this approach especially complex.

Figure 3. All Servers in Subnets A, B, and C Would Need to Be Migrated Together

Benefits of LISP for Data Center Migrations

The LISP solution proposed here, which retains the server IP address during data center migrations, has these main benefits:

- You can perform server migrations in much smaller increments, which lowers the risk of the project. Even a single server can be migrated without affecting any other server in any subnet to which that server is connected.
- Server migrations can begin much sooner: as soon as the data for a server is available in the target data center.
- Less data needs to be kept synchronized, reducing risk and WAN requirements.
- Path optimization from the user to the application is possible, eliminating latency concerns and reducing WAN bandwidth requirements. This benefit requires adoption of LISP at remote sites.
- Affinity-group constraints are removed. Each server can be migrated independently.

Table 1 demonstrates the impact of LISP on data center migration by comparing an existing approach that provides no IP mobility with LISP, in which a server keeps its IP address during data center migration.

Table 1. Comparison of Migration with and Without LISP

Migration increments (number of servers to be migrated at the same time)
- Traditional approach (without IP mobility): The smallest chunk that can be migrated is all the servers in the same subnet, usually a large number of servers.
- LISP (with IP mobility): The smallest chunk that can be migrated is a single server.
- Impact of LISP: Lower risk; servers can be migrated faster with fewer dependencies.

When can servers in the target data center begin being activated?
- Traditional approach (without IP mobility): Only after all data for all servers in the same subnet has been migrated to the target data center.
- LISP (with IP mobility): As soon as the data for a single server is available, that server can be moved.
- Impact of LISP: Servers can be activated in the target data center much sooner than before, reducing the amount of time that large data sets have to be kept synchronized.

Impact on latency and application performance
- Traditional approach (without IP mobility): Path optimization from the end user to the server is not easily achievable.
- LISP (with IP mobility): Direct access from the end user to the new server located in the target data center is possible.
- Impact of LISP: Reduces latency concerns and WAN bandwidth requirements.

Layer 3-Based Data Center Migration

Layer 2 (VLAN) extension can also be used to provide IP mobility during data center migrations. However, the need for a Layer 3-based (routing-based) solution that provides IP mobility is growing, because customers prefer not to extend the broadcast domain (Layer 2) between their environments. The main difference between Layer 2-based technologies, such as Cisco Overlay Transport Virtualization (OTV) and Virtual Private LAN Service (VPLS), and LISP, which provides a Layer 3-based solution, is that with LISP the broadcast, or Layer 2, failure domain is not extended between sites. LISP does not propagate broadcasts, whereas with Layer 2-based solutions broadcast packets are flooded.

Another consideration is that a pair of OTV-enabled devices would usually connect to a single pair of aggregation switches, because they are connected at Layer 2. With LISP, in contrast, a single pair of LISP-enabled routers can connect to multiple aggregation blocks in the source and destination data centers; because they are connected at Layer 3, there is no risk of bridging between aggregation blocks.

Note that Layer 2 solutions allow cluster members that require Layer 2 connectivity to be stretched between data centers. LISP does not support this capability, because it does not flood Layer 2 broadcasts or link-local multicast packets. Live-migration applications (for example, VMware vMotion) also require Layer 2 connectivity between hosts and therefore are supported only by Layer 2-based solutions. The LISP solution presented here supports cold migration between data centers.

LISP and OTV Coexistence

LISP supports a deployment mode called extended subnet mode, in which LISP is used for the same VLAN and subnet as a LAN extension solution (for example, OTV or VPLS). In this case, LISP provides path optimization, and the LAN extension solution provides IP mobility. The solution presented here does not use LISP extended subnet mode.
However, as documented in the section LISP Coexistence with OTV, you can combine OTV and LISP on the same ASR 1000 Series router, with OTV used for some VLANs and LISP used for others. For example, OTV may be used to extend VLAN 10, which has one /24 subnet configured on it, and LISP may be used to provide IP mobility for another /24 subnet in VLAN 20. Note that LISP does not extend VLANs; therefore, the VLAN ID is irrelevant for forwarding using LISP. Some customers are taking the approach "route when you can, and bridge when you must." For this approach, a Layer 3-based data center migration as presented in this document is required.

Scope

This document describes a solution that allows servers to be migrated while keeping their IP addresses, subnet masks, and default gateways. The solution is based on Cisco LISP running on Cisco ASR 1000 Series routers deployed at the source and destination data centers. This solution requires an ASR 1000 Series router running Cisco IOS XE Software Release 3.10 or later with the Cisco Advanced IP Services or Advanced Enterprise Services feature set. Although not described directly in this document, a Cisco Cloud Services Router (CSR) 1000V running Cisco IOS XE Software Release 3.11 or later with the Cisco Premium or AX license package can be used with the same results.

LISP is supported on several other Cisco platforms. However, the use of other routers and switches that support LISP to provide the solution described here is outside the scope of this document, and their implementations of LISP may vary from the Cisco IOS XE Software implementation used by the ASR 1000 Series and CSR 1000V.

Terminology

This section defines some of the main terms used in this document. These terms are described in more detail elsewhere in the document.

- Cisco Cloud Services Router (CSR) 1000V: Cisco's virtual router offering, which you can deploy in private, public, or hybrid cloud environments
- Cisco Locator/ID Separation Protocol (LISP): A tunnel protocol that uses a central database in which endpoint locations are registered; LISP enables IP host-based mobility for endpoints
- LISP-enabled virtualized router: A virtual machine or appliance that supports routing and LISP functions, including host mobility
- Endpoint ID (EID): The IPv4 or IPv6 identifier of a device connected at the edge of the network; used in the first (innermost) LISP header of a packet
- Routing locator (RLOC): An IPv4 or IPv6 address used to encapsulate and transport the flow between LISP nodes
- Ingress tunnel router (ITR): A router with two functions: it resolves the location of an EID by querying the database, and it performs the encapsulation to the remote RLOC
- Egress tunnel router (ETR): A router with two functions: it registers the endpoint-to-location association with the database, and it decapsulates the LISP packet and forwards it to the right endpoint
- xTR: A generic name for a device performing both ITR and ETR functions
- Proxy ITR (PITR): A router that acts like an ITR but does so on behalf of non-LISP sites that send packets to destinations at LISP sites
- Proxy ETR (PETR): A router that acts like an ETR but does so on behalf of LISP sites that send packets to destinations at non-LISP sites
- PxTR: A generic name for a device that performs both PITR and PETR functions

Solution Overview

Data center migration requires the mobility of any workload with IP address retention, in a plug-and-play solution, independent of the server type and hypervisor. It must allow partial migration, which means keeping a full IP subnet active in both the existing (brownfield) and the new (greenfield) data center at the same time. In addition, IP mobility must be implemented over a totally safe connection, without extending the fault domain from one site to the other.

In the proposed solution, LISP offers routed IP mobility: that is, a stretched subnet connection with Layer 3 mobility. LISP is a host-based routing technology that uses a central database and local caching to enable IP intercommunication. For more information about LISP, see Appendix: Overview of LISP at the end of this document.
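Once the solution is in place, this central-database-plus-local-caching model can be observed directly on the routers. The following Cisco IOS XE exec commands are a brief sketch of where each piece of state lives; the verification sections later in this document cover the expected output in detail:

```
! On an ITR or PITR: the local map cache, populated on demand with
! EID-to-RLOC mappings resolved through the map resolver
Router# show ip lisp map-cache

! On the map server: the sites and EID prefixes that the ETRs have
! registered in the central database
Router# show lisp site
```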

Figure 4 shows a LISP topology.

Figure 4. Reference Topology

Solution Considerations

Migration is performed from an existing, brownfield data center to a new, greenfield data center. No constraints are imposed on either of these data centers. Usually the existing data center is built on the standard aggregation- and access-layer architecture, and the new data center either uses the same topology or a fabric approach with a spine-and-leaf design. In fact, however, the solution presented in this document requires only access to the data center VLANs, without constraints on the internal design.

Transparent Integration into Existing Production Network

To interconnect the data centers, the data center interconnect (DCI) uses two pairs of LISP nodes, deployed "on a stick" in each data center. It uses an IP connection between them that could be the IP infrastructure of the enterprise, or a direct connection from one LISP node to another when the migration occurs over a short distance.

The concept of an "on a stick" deployment relies on devices that are not inserted into the common data path and so are not used by the main application sessions. The LISP connection is used only for the workloads being migrated. The devices are connected with two links: one to the core, which simply carries IP and which the LISP-encapsulated packets traverse, and another in VLAN trunk mode, which must be connected to the distribution layer or to a leaf node to get access to the subnets in migration.

This design uses several LISP functions, but they all reside in just two pairs of physical devices. There is no need for any LISP deployment anywhere else.

On the brownfield data center ASR 1000 Series (or CSR 1000V) pair:

- Proxy Address Resolution Protocol (proxy ARP): The LISP node responds to ARP requests directed to any migrated workload, which attracts traffic to LISP.
- PITR: Gets the flow from any source to be transported by LISP.
- PETR: Attracts traffic that returns from the new data center to help ensure a symmetrical path, if needed.

On the greenfield data center ASR 1000 Series (or CSR 1000V) pair:

- Default gateway: During the migration, the ASR 1000 Series router is the gateway for the subnet under migration.
- ETR: This router transmits traffic coming from LISP to the data center.
- ITR: This router transmits traffic coming from the data center to LISP.
- Use-PETR: This option forces all traffic to return to the original data center.

Map Server and Map Resolver Placement

The LISP database, composed of the map server (MS) and map resolver (MR), must be hosted on one pair of devices. This database could be anywhere, but in the discussion in this document it is hosted on the greenfield side.

Transport Network

For the solution described here, the only requirement for the transport network is that it must provide IP connectivity between the LISP-enabled routers located in the source and destination data centers.
This network is referred to as the transport network. The transport network can be IP based or Multiprotocol Label Switching (MPLS) based. As long as there is IP connectivity between the LISP-enabled routers, the solution works. The IP connectivity can be provided by dedicated links interconnecting the sites, as shown earlier in Figure 4, or the LISP flows can traverse the WAN.

Routing Considerations

As stated earlier, the LISP nodes just need to be reachable from each other over an IP network. If this IP connection is the enterprise network, you need to consider two factors: the maximum transmission unit (MTU) size and fast convergence.

LISP encapsulates packets in UDP and so increases their size: LISP adds 36 bytes for IPv4 encapsulation (a 20-byte outer IPv4 header, an 8-byte UDP header, and an 8-byte LISP header) and 56 bytes for IPv6 encapsulation. LISP tunneling does support dynamic MTU adaptation through Path MTU Discovery (PMTUD), and LISP nodes do support packet fragmentation, but the easiest way to achieve reachability is for the IP network to support a large MTU.
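For example, on a dedicated transport link between the LISP pairs, you might simply raise the interface MTU so that a full 1500-byte server packet still fits after the LISP overhead is added. The following is an illustrative sketch only; the interface name, addresses, and MTU value are placeholders, and the maximum configurable MTU depends on the platform and line card:

```
interface TenGigabitEthernet0/0/0
 description Transport link toward the remote LISP pair (illustrative)
 mtu 9216
 ip mtu 9216
 ip address 192.0.2.1 255.255.255.252
```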

Fast convergence is also required. Upon node failure, LISP converges at the speed at which the Interior Gateway Protocol (IGP) advertises the failure of the routing locator, which is the tunnel tail-end address. So if the IGP is slow, LISP convergence is also slow. Optionally, LISP offers regular probing of the locators, but with a minimum of 60 seconds for detection. In very rare cases, the use of service-level agreement (SLA) probes provides fast convergence to LISP even if the infrastructure is slow.

In the special but common case in which the LISP nodes are directly connected from data center to data center, the IP network is just this connection. Here, the LISP IP network is independent of the enterprise core and WAN. It just has to be treated as an isolated IP network, with its own local IGP tuned for fast convergence.

LISP Routing through Stretched Subnets

The pair of LISP PxTRs in the brownfield data center interacts with the local data center only through proxy ARP. When a server is moved to the new data center, it is detected and registered in the LISP database, and from then on the local pair of PxTRs replies with its own MAC address to any ARP request for the moved server. Reachability throughout the stretched subnet is then established.

No Changes to Virtual or Physical Servers

Because the subnet is extended, and a default gateway with the same virtual IP address exists in each data center, no changes whatsoever need to be made on the migrated server. In addition, because the IP address and reachability of the moved workload have not changed, neither the firewall rules nor the load-balancer definitions need to be updated. During the migration, the service nodes are still running in the old data center; they are moved at the end of the migration.

Traffic Symmetry: Firewall and Load-Balancer Considerations

Usually, because of the presence of a firewall or load balancer, the traffic coming back from the moved workload has to be pushed back to the brownfield data center.
For this process, during the migration the ASR 1000 Series pair in the greenfield data center is maintained as the default gateway for the moved servers, and it forces traffic back to the primary data center using LISP tunneling toward the PETR. From there, the traffic may be delivered natively to the Layer 3 network through the site firewall, or, if the firewall or load balancer is the local default gateway, it may be delivered through the Layer 2 LAN using policy-based routing (PBR).

Selecting the Cisco ASR 1000 Series Option

All ASR 1000 Series routers are capable of supporting the solution described in this document. There are no hardware dependencies. The only differences between the ASR 1000 Series routers are their performance, scale, number of interfaces, and redundancy model (software or hardware). This product range makes the ASR 1000 Series an excellent platform for this solution. You can use the same solution, with exactly the same configuration, for scenarios requiring lower throughput (for example, by using a Cisco ASR 1001-X Router) or for scenarios in which throughput of 100 Gbps or more is required (by using a Cisco ASR 1006 or 1013 Router). For more information about the ASR 1000 Series, visit the Cisco website or contact your local Cisco account representative. For information about ASR 1000 Series ordering and bundles, please refer to the Cisco ASR 1000 Ordering Guide.
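Putting the preceding considerations together: the brownfield pair acts as a PxTR (PITR plus PETR), while the greenfield pair acts as an xTR that is forced to send return traffic back through the brownfield PETR. A minimal Cisco IOS XE sketch of this role split follows. All RLOC addresses, keys, and names here are illustrative placeholders, not the tested values; the complete, validated configurations appear in the Configuration Examples section of this document:

```
! Brownfield (source) router: proxy roles
router lisp
 ipv4 itr map-resolver 203.0.113.1
 ipv4 proxy-itr 198.51.100.1     ! encapsulates flows from non-LISP sources
 ipv4 proxy-etr                  ! attracts return traffic for path symmetry

! Greenfield (destination) router: xTR plus map server and map resolver,
! with use-petr forcing return traffic through the brownfield PETR
router lisp
 ipv4 itr map-resolver 203.0.113.1
 ipv4 etr map-server 203.0.113.1 key example-key
 ipv4 itr
 ipv4 etr
 ipv4 use-petr 198.51.100.1
 ipv4 map-server
 ipv4 map-resolver
```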

Scalability and Latency

The migration nodes (one pair per site) can be any device that supports LISP mobility. Devices validated so far are the ASR 1000 Series and the CSR 1000V virtual router. Any combination of these devices is supported. Please refer to the ASR 1000 Series products mentioned in the preceding section for more information about node choice based on the throughput you need. The solution described here has been validated to scale to 5000 dynamic EIDs when deployed on an ASR 1000 Series router. Each ASR 1000 Series LISP node adds 30 microseconds of latency, which is negligible. However, you need to consider the latency of the link that connects the sites.

Using a Non-LISP Device as the Default Gateway in the New Data Center

The solution in this document requires the default gateway of the greenfield site to be on the LISP xTR for the workload under migration. Only at the end of the migration can it be moved to the aggregation layer. This approach is the recommended and tested design. However, in some cases the default gateway in the new data center needs to be on another device. For instance, the brownfield data center may not have any firewall. In this case, symmetry of traffic is not required, and the traffic on the greenfield side can exit naturally through routing. LISP encapsulation is not needed on the return traffic, and the default gateway can be anywhere. If the old data center has a firewall that requires symmetry of traffic, the LISP xTR in the new data center must capture the return traffic. In this case, the default gateway has to force traffic across the LISP node in some way. A default route or PBR on the source subnet under migration could be used.

Deploying LISP on Cisco ASR 1000 Series for Data Center Migration

The reference topology (Figure 5) shows two data centers. The source data center is the existing data center in which the servers are currently located.
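When a non-LISP default gateway must be used, a PBR policy on that gateway is one way to force return traffic across the LISP node. A minimal sketch, with a hypothetical subnet (192.0.2.0/24), VLAN interface, and xTR next hop:

```
! On the non-LISP default gateway (all values hypothetical)
ip access-list extended SUBNET-UNDER-MIGRATION
 permit ip 192.0.2.0 0.0.0.255 any
!
route-map FORCE-TO-LISP-XTR permit 10
 match ip address SUBNET-UNDER-MIGRATION
 set ip next-hop 10.0.4.2
!
interface Vlan4000
 ip policy route-map FORCE-TO-LISP-XTR
```

The access list matches traffic sourced from the subnet under migration, and the route map steers it to the xTR so that it can be LISP-encapsulated toward the brownfield site.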
The destination data center is the new data center to which the servers will be migrated. This example shows two server subnets, /24 and /24, which are in VLANs 200 and 201, respectively, in the source data center. LISP mobility will be enabled on the ASR 1000 Series routers for these two subnets to allow servers to be migrated from the source to the destination data center. The destination data center will have a new VLAN allocation scheme, so the subnets /24 and /24 will be in VLANs 4000 and 4001, respectively. Note that this initial example doesn't include any stateful devices such as load balancers or firewalls. The inclusion of firewalls and load balancers in the topology is discussed in the section Support for Stateful Devices later in this document.

Figure 5. Reference Topology with Sample Subnets to Be Migrated

Implementation Details for the Brownfield (Source) Data Center

The source data center consists of a traditional three-tier topology with servers connected to the access layer. The default gateway for the servers is on the aggregation switches. If Virtual Switching System (VSS) is supported, the aggregation switches could use VSS; alternatively, the solution could run in standalone mode and use Hot Standby Router Protocol (HSRP) for first-hop redundancy for the servers. The aggregation switches will advertise the server subnets in a dynamic routing protocol and have routed Layer 3 interfaces facing the core devices. The core devices will then advertise the server subnets to the WAN to attract traffic from the remote sites to the source data center. The ASR 1000 Series routers in the source data center will be configured as LISP PxTRs. The aggregation switches will use IEEE 802.1Q trunks to connect to the PxTRs and trunk the server VLANs. The PxTRs will use routed IEEE 802.1Q subinterfaces on these trunks for each server subnet that needs to be migrated. The PxTRs will run HSRP for each server subnet to determine the active PxTR for egress traffic for that subnet. This HSRP group will be separate from the HSRP group used on the aggregation switches, and the servers will not use the PxTRs as their default gateway. In this example, PxTR-01 is the HSRP active router for both server subnets.
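The PxTR attachment described above might look as follows on PxTR-01; the interface number, VLAN ID, addresses, and HSRP group are hypothetical:

```
! Routed 802.1Q subinterface on PxTR-01 for a server VLAN
interface GigabitEthernet0/0/0.200
 encapsulation dot1Q 200
 ip address 192.0.2.252 255.255.255.0
 ! Separate HSRP group from the aggregation switches;
 ! the servers do not use this address as their default gateway
 standby 20 ip 192.0.2.250
 standby 20 priority 110
 standby 20 preempt
```

Because the subinterface is routed, no spanning-tree interaction occurs between the PxTR and the aggregation layer.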

LISP mobility will be enabled for the server subnets on the PxTRs. Hence, these subnets will be dynamic-EID subnets. A lower RLOC priority will be given to PxTR-01 for both dynamic-EID subnets. PxTR-02 will be assigned a higher RLOC priority for both dynamic-EID subnets. Lower priority is preferred; hence, inbound LISP-encapsulated traffic will traverse PxTR-01 under normal circumstances. The destination data center ASR 1000 Series routers will use the source data center ASR 1000 Series routers as proxy egress tunnel routers (PETRs). This approach allows servers that have been migrated to send traffic destined for non-LISP subnets or remote sites through the source data center. This capability enables traffic from migrated servers to flow symmetrically through any stateful devices in the source data center, such as load balancers and firewalls. A separate IEEE 802.1Q subinterface on both PxTRs, without LISP mobility enabled, will be used for default gateway routing. A /29 subnet will be used for this VLAN. A static default route will be used on the PxTRs with the next hop set to the aggregation switches' HSRP virtual IP address, or to the switch virtual interface (SVI) for the VLAN if VSS is used. This subinterface will be used only for outbound traffic to non-LISP subnets and remote sites. Because these subinterfaces will be used only for outbound traffic from the PxTRs, HSRP is not required on the PxTRs for this subnet.

Detection of Source Data Center EIDs with Cisco Embedded Event Manager Script

For LISP mobility to function, the xTR must receive traffic from the servers to determine their locations. In the source data center, the PxTRs are not the default gateway for the servers. Hence, for detection they depend on receipt of broadcast traffic, such as ARP requests, from the servers.
In most cases, this situation is not a problem because servers regularly send broadcast ARP requests, either to resolve the MAC addresses of other servers on the subnet or to resolve the MAC address of the default gateway. However, a problem may occur if a server has already resolved the MAC address of the default gateway before the PxTRs are added to the network. Subsequent ARP requests from the server for the default gateway may be unicast to refresh the ARP cache. Because the PxTRs are not the default gateway in the source data center, they may never learn about servers that are not sending broadcasts. To solve this problem, a Cisco Embedded Event Manager (EEM) script can be used on the primary PxTR to ping every IP address in the subnets enabled for LISP mobility. Before pinging each IP address, the PxTR needs to send an ARP request for the address of the server. Each server that responds with an ARP reply will be added to the PxTR's local LISP database as a dynamic EID. This solution works even if the server has a software firewall that blocks pings, because the PxTR can build its local LISP database from the ARP replies even without receiving an Internet Control Message Protocol (ICMP) reply. Examples of EEM scripts are shown later, in the configuration section of this document.

Implementation Details for the Greenfield (New) Data Center

The destination data center is based on a modern spine-and-leaf architecture. The ASR 1000 Series routers in the destination data center will be LISP xTRs as well as LISP map servers and map resolvers. The spine switches will use IEEE 802.1Q trunks to connect to the xTR map servers and map resolvers and trunk the server VLANs. The xTR map servers and map resolvers will use routed IEEE 802.1Q subinterfaces on these trunks for each server subnet that needs to be migrated.
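As a rough sketch of such a script (the subnet, timer, and applet name are hypothetical; the validated versions appear later in the configuration section), an EEM applet can sweep a /24 with pings, and the resulting ARP replies populate the dynamic-EID database:

```
! Hypothetical EEM applet: ping 192.0.2.1-254 every 5 minutes
event manager applet SWEEP-LISP-EIDS
 event timer watchdog time 300
 action 1.0 cli command "enable"
 action 2.0 set i "1"
 action 3.0 while $i le 254
 action 3.1  cli command "ping 192.0.2.$i repeat 1 timeout 1"
 action 3.2  increment i
 action 3.3 end
```

The ICMP replies themselves are irrelevant; the applet exists only to trigger ARP resolution for every host address in the subnet.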

The xTR map servers and map resolvers will run HSRP for each server subnet to determine the active ITR for egress traffic for each server subnet. This HSRP group will be the same as the HSRP group used on the aggregation switches in the source data center, and the migrated servers will use the xTR map servers and map resolvers as their default gateway for the duration of the migration. This approach allows the migrated servers to use the same IP address and MAC address for their default gateway after they have been migrated. In this example, xtr-msmr-01 will be the HSRP active router for both server subnets. LISP mobility will be enabled for the server subnets on the xTR map servers and map resolvers. Hence, these subnets will be dynamic-EID subnets. A lower RLOC priority will be given to xtr-msmr-01 for both dynamic-EID subnets. xtr-msmr-02 will be assigned a higher RLOC priority for both dynamic-EID subnets. Lower priority is preferred; hence, inbound LISP-encapsulated traffic will traverse xtr-msmr-01 under normal circumstances.

Connectivity between the Cisco ASR 1000 Series LISP Routers

Each LISP router needs only Layer 3 IP reachability to the RLOC addresses of the other LISP routers. Therefore, the solution is independent of Layer 1 and Layer 2. The connectivity between the ASR 1000 Series LISP routers can be over any physical medium, such as dark fiber, dense wavelength-division multiplexing (DWDM), SONET, or SDH. The connectivity could also be over Layer 2 metro Ethernet, MPLS Layer 3 VPN, or the Internet. You can even use the data centers' existing WAN links for IP connectivity between the LISP routers. Typically for a data center migration, dedicated links are used for the duration of the migration. Dark fiber between the data centers is more common for distances of less than 10 km, whereas DWDM or metro Ethernet is more common for distances greater than 10 km.
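The dynamic-EID and RLOC-priority arrangement described above might be configured on xtr-msmr-01 as in this sketch (the EID prefix, map name, interface, and RLOC addresses are hypothetical; the lower priority on the local RLOC makes it preferred for inbound traffic):

```
router lisp
 eid-table default instance-id 0
  dynamic-eid VLAN4000-EID
   ! Local RLOC (priority 1) preferred over the peer xTR (priority 2)
   database-mapping 192.0.2.0/24 10.255.255.3 priority 1 weight 50
   database-mapping 192.0.2.0/24 10.255.255.4 priority 2 weight 50
!
interface GigabitEthernet0/0/1.4000
 encapsulation dot1Q 4000
 ip address 192.0.2.253 255.255.255.0
 ! Attach the dynamic-EID map to the server-facing subinterface
 lisp mobility VLAN4000-EID
 ! Same virtual IP as the source data center default gateway
 standby 20 ip 192.0.2.1
```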
This example assumes dedicated Layer 1 or Layer 2 links between the ASR 1000 Series routers for the migration: that is, the interfaces between the ASRs are Layer 2 adjacent and are on the same subnet. A loopback interface will be used on each of the LISP routers for the RLOC. Open Shortest Path First (OSPF) will be used as the routing protocol between the LISP routers. Bidirectional Forwarding Detection (BFD) can be used to improve convergence if the ASRs are connected over a Layer 2 network such as metro Ethernet. OSPF should be enabled only on the point-to-point links between the ASRs and on the RLOC loopback interfaces. OSPF should not be enabled on the subinterfaces connected to the data center switches, and the data center server subnets should not be announced into OSPF.

Discovery of Servers and Registration with the Map Servers

The LISP PxTRs can be connected to the existing environments in the source and destination data centers nonintrusively. No routing changes are required in the source data center. Because routed subinterfaces are used on the PxTRs facing the aggregation switches, spanning tree isn't needed. You should use spanning-tree portfast trunk on the trunk ports on the aggregation switches facing the ASRs for faster convergence. LISP mobility will be enabled on each server-facing subinterface on the PxTRs. The PxTRs will not be the default gateway for the servers in the source data center. The HSRP active PxTR will update its local LISP database based on broadcast packets, such as the ARP requests it receives from servers on the subnets on which LISP mobility is enabled. Each server detected will appear as a dynamic EID in the local LISP database. The HSRP active PxTR will send multicast LISP map-notify messages back out the interface on which the server was detected. The HSRP standby PxTR will update its local LISP database based on the information in the map-notify messages received from the active PxTR.
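The RLOC reachability described above can be sketched as follows (the OSPF process ID, interfaces, and addresses are hypothetical):

```
! RLOC loopback advertised in OSPF
interface Loopback0
 ip address 10.255.255.1 255.255.255.255
 ip ospf 1 area 0
!
! Point-to-point inter-data-center link with BFD for fast failure detection
interface TenGigabitEthernet0/1/0
 ip address 10.254.0.1 255.255.255.252
 ip ospf network point-to-point
 ip ospf 1 area 0
 bfd interval 100 min_rx 100 multiplier 3
!
router ospf 1
 bfd all-interfaces
 ! Keep OSPF off every other interface, per the design guidance
 passive-interface default
 no passive-interface TenGigabitEthernet0/1/0
```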

Both PxTRs will send map-register messages to both map servers, which are the LISP xtr-msmr routers in the destination data center. The map servers will update the EID-to-RLOC mapping database based on the information in the map-register messages. This information includes the RLOCs and their priority and weight values for each EID. Initially, when no servers have been migrated, the RLOC addresses for all EIDs registered in the EID-to-RLOC mapping database will be the RLOC addresses of the two PxTRs. See Figure 6.

Figure 6. Discovery of Servers and Registration with Map Servers

As in the source data center, the destination data center xtr-msmr routers will be connected to the spine switches over an IEEE 802.1Q trunk with routed subinterfaces on the xtr-msmrs. Hence, there is no spanning-tree interaction. Similarly, you should use spanning-tree portfast trunk on the ports on the spine switches facing the ASRs for faster convergence. Each subinterface on the xtr-msmrs will be for one of the new server VLANs in the destination data center and will have LISP mobility enabled. When a server is migrated to the new data center, it will send ARP requests for its default gateway. In the destination data center, the default gateway for the servers will be the xtr-msmr routers. The xtr-msmr routers will update their local LISP database to include the server that sent the ARP request. They will then update the map-server EID-to-RLOC mapping database, which is stored locally because these routers are the map servers. They will then send a map-notify message to the PxTRs to indicate that the server is now located in the destination data center. The PxTRs will remove this server from their local LISP database and send a Gratuitous ARP (GARP) request out the subinterface for the VLAN in which the server previously resided.
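The registration relationships described above might be configured as in this sketch (the site name, EID prefix, authentication key, and map-server addresses are hypothetical):

```
! On the xtr-msmr routers: map server / map resolver function
router lisp
 site DC-MIGRATION
  authentication-key MIGRATION-KEY
  ! Accept registrations for hosts within the migrating subnet
  eid-prefix 192.0.2.0/24 accept-more-specifics
  exit
 ipv4 map-server
 ipv4 map-resolver

! On the PxTRs: register all detected EIDs with both map servers
router lisp
 ipv4 itr map-resolver 10.255.255.3
 ipv4 itr map-resolver 10.255.255.4
 ipv4 etr map-server 10.255.255.3 key MIGRATION-KEY
 ipv4 etr map-server 10.255.255.4 key MIGRATION-KEY
```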

Other devices in that VLAN should update their ARP caches to use the MAC address of the PxTRs' HSRP virtual IP address for the migrated server's IP address. See Figure 7.

Figure 7. Migration of a Server to the Destination Data Center

Traffic Flows

The LISP routers can be connected to the existing environments in the source and destination data centers nonintrusively. No routing changes are required in the source data center. Traffic between the WAN and servers in the source data center will still route in and out of the source data center. The LISP routers will not be in the traffic path for any flows between servers in the source data center and the WAN (Figure 8).

Figure 8. Traffic between a Server Still in the Source Data Center and the WAN

The LISP routers also will not be in the traffic path for any flows between servers within the same data center, whether the traffic is intra-VLAN or inter-VLAN. All traffic between servers in the same data center is kept local to that data center (Figure 9).

Figure 9. Inter-VLAN Routing Is Local between Servers in the Same Data Center

The LISP routers take part in traffic forwarding only for flows between the data centers: for example, traffic between migrated servers and servers still in the source data center. When a server that has been migrated to the destination data center needs to send traffic to a server in the same subnet that is still in the source data center, it will send an ARP request. Because ARP requests are broadcast packets, the xtr-msmr that is HSRP active will check its local LISP database to see whether the server (EID) is in its local data center. If it is not, the router will use proxy ARP for the destination EID and check its LISP map cache for a mapping for the destination EID. If the map cache doesn't contain an entry, it will query the map-server EID-to-RLOC database and update the map cache if an entry exists. The originating server will now have an ARP entry for the remote server with the MAC address of the xtr-msmrs' HSRP virtual IP address and can send packets to the remote server. Based on its map-cache entry for the destination EID, the HSRP active xtr-msmr will encapsulate these packets in LISP with the destination RLOC set to the PxTR in the source data center that has the lowest priority. When the PxTR receives the LISP-encapsulated packet, it will decapsulate the packet and forward the original packet to the server in the source data center. The same process occurs in the opposite direction (Figure 10).
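At each of these stages, the LISP forwarding state can be inspected with standard show commands (output omitted here; the verification section of this document walks through the expected output at each stage of the migration):

```
! On a PxTR or xTR: locally detected dynamic EIDs
show lisp dynamic-eid summary
show ip lisp database
! On an ITR: map-cache entries learned for remote EIDs
show ip lisp map-cache
! On a map server: registered sites and EID-to-RLOC mappings
show lisp site
```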


More information

Configuring IPS High Bandwidth Using EtherChannel Load Balancing

Configuring IPS High Bandwidth Using EtherChannel Load Balancing Configuring IPS High Bandwidth Using EtherChannel Load Balancing This guide helps you to understand and deploy the high bandwidth features available with IPS v5.1 when used in conjunction with the EtherChannel

More information

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems for Service Provider Data Center and IXP Francois Tallet, Cisco Systems 1 : Transparent Interconnection of Lots of Links overview How works designs Conclusion 2 IETF standard for Layer 2 multipathing Driven

More information

VPLS Technology White Paper HUAWEI TECHNOLOGIES CO., LTD. Issue 01. Date 2012-10-30

VPLS Technology White Paper HUAWEI TECHNOLOGIES CO., LTD. Issue 01. Date 2012-10-30 Issue 01 Date 2012-10-30 HUAWEI TECHNOLOGIES CO., LTD. 2012. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of

More information

Cisco AnyConnect Secure Mobility Solution Guide

Cisco AnyConnect Secure Mobility Solution Guide Cisco AnyConnect Secure Mobility Solution Guide This document contains the following information: Cisco AnyConnect Secure Mobility Overview, page 1 Understanding How AnyConnect Secure Mobility Works, page

More information

VMware vcloud Architecture Toolkit for Service Providers Architecting a Hybrid Mobility Strategy with VMware vcloud Air Network

VMware vcloud Architecture Toolkit for Service Providers Architecting a Hybrid Mobility Strategy with VMware vcloud Air Network VMware vcloud Architecture Toolkit for Service Providers Architecting a Hybrid Mobility Strategy with VMware vcloud Air Network Version 1.0 October 2015 This product is protected by U.S. and international

More information

LISP Proxy Tunnel Router Redundancy Deployment

LISP Proxy Tunnel Router Redundancy Deployment LISP Proxy Tunnel Router Redundancy Deployment July 23, 2012 ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH

More information

VMDC 3.0 Design Overview

VMDC 3.0 Design Overview CHAPTER 2 The Virtual Multiservice Data Center architecture is based on foundation principles of design in modularity, high availability, differentiated service support, secure multi-tenancy, and automated

More information

Redundancy and load balancing at L3 in Local Area Networks. Fulvio Risso Politecnico di Torino

Redundancy and load balancing at L3 in Local Area Networks. Fulvio Risso Politecnico di Torino Redundancy and load balancing at L3 in Local Area Networks Fulvio Risso Politecnico di Torino 1 Default gateway redundancy (1) H1 DG: R1 H2 DG: R1 H3 DG: R1 R1 R2 ISP1 ISP2 Internet 3 Default gateway redundancy

More information

Increase Simplicity and Improve Reliability with VPLS on the MX Series Routers

Increase Simplicity and Improve Reliability with VPLS on the MX Series Routers SOLUTION BRIEF Enterprise Data Center Interconnectivity Increase Simplicity and Improve Reliability with VPLS on the Routers Challenge As enterprises improve business continuity by enabling resource allocation

More information

TRILL Large Layer 2 Network Solution

TRILL Large Layer 2 Network Solution TRILL Large Layer 2 Network Solution Contents 1 Network Architecture Requirements of Data Centers in the Cloud Computing Era... 3 2 TRILL Characteristics... 5 3 Huawei TRILL-based Large Layer 2 Network

More information

MPLS Layer 2 VPNs Functional and Performance Testing Sample Test Plans

MPLS Layer 2 VPNs Functional and Performance Testing Sample Test Plans MPLS Layer 2 VPNs Functional and Performance Testing Sample Test Plans Contents Overview 1 1. L2 VPN Padding Verification Test 1 1.1 Objective 1 1.2 Setup 1 1.3 Input Parameters 2 1.4 Methodology 2 1.5

More information

CLOUD NETWORKING FOR ENTERPRISE CAMPUS APPLICATION NOTE

CLOUD NETWORKING FOR ENTERPRISE CAMPUS APPLICATION NOTE CLOUD NETWORKING FOR ENTERPRISE CAMPUS APPLICATION NOTE EXECUTIVE SUMMARY This application note proposes Virtual Extensible LAN (VXLAN) as a solution technology to deliver departmental segmentation, business

More information

- Multiprotocol Label Switching -

- Multiprotocol Label Switching - 1 - Multiprotocol Label Switching - Multiprotocol Label Switching Multiprotocol Label Switching (MPLS) is a Layer-2 switching technology. MPLS-enabled routers apply numerical labels to packets, and can

More information

Switching in an Enterprise Network

Switching in an Enterprise Network Switching in an Enterprise Network Introducing Routing and Switching in the Enterprise Chapter 3 Version 4.0 2006 Cisco Systems, Inc. All rights reserved. Cisco Public 1 Objectives Compare the types of

More information

hp ProLiant network adapter teaming

hp ProLiant network adapter teaming hp networking june 2003 hp ProLiant network adapter teaming technical white paper table of contents introduction 2 executive summary 2 overview of network addressing 2 layer 2 vs. layer 3 addressing 2

More information

100-101: Interconnecting Cisco Networking Devices Part 1 v2.0 (ICND1)

100-101: Interconnecting Cisco Networking Devices Part 1 v2.0 (ICND1) 100-101: Interconnecting Cisco Networking Devices Part 1 v2.0 (ICND1) Course Overview This course provides students with the knowledge and skills to implement and support a small switched and routed network.

More information

Redundancy and load balancing at L3 in Local Area Networks. Fulvio Risso Politecnico di Torino

Redundancy and load balancing at L3 in Local Area Networks. Fulvio Risso Politecnico di Torino Redundancy and load balancing at L3 in Local Area Networks Fulvio Risso Politecnico di Torino 1 Problem: the router is a single point of failure H1 H2 H3 VLAN4 H4 VLAN4 Corporate LAN Corporate LAN R1 R2

More information

TechBrief Introduction

TechBrief Introduction TechBrief Introduction Leveraging Redundancy to Build Fault-Tolerant Networks The high demands of e-commerce and Internet applications have required networks to exhibit the same reliability as the public

More information

Chapter 3. Enterprise Campus Network Design

Chapter 3. Enterprise Campus Network Design Chapter 3 Enterprise Campus Network Design 1 Overview The network foundation hosting these technologies for an emerging enterprise should be efficient, highly available, scalable, and manageable. This

More information

Configuring the Transparent or Routed Firewall

Configuring the Transparent or Routed Firewall 5 CHAPTER This chapter describes how to set the firewall mode to routed or transparent, as well as how the firewall works in each firewall mode. This chapter also includes information about customizing

More information

ProCurve Networking. LAN Aggregation Through Switch Meshing. Technical White paper

ProCurve Networking. LAN Aggregation Through Switch Meshing. Technical White paper ProCurve Networking LAN Aggregation Through Switch Meshing Technical White paper Introduction... 3 Understanding Switch Meshing... 3 Creating Meshing Domains... 5 Types of Meshing Domains... 6 Meshed and

More information

Disaster Recovery Design with Cisco Application Centric Infrastructure

Disaster Recovery Design with Cisco Application Centric Infrastructure White Paper Disaster Recovery Design with Cisco Application Centric Infrastructure 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 46 Contents

More information

WHITE PAPER. Network Virtualization: A Data Plane Perspective

WHITE PAPER. Network Virtualization: A Data Plane Perspective WHITE PAPER Network Virtualization: A Data Plane Perspective David Melman Uri Safrai Switching Architecture Marvell May 2015 Abstract Virtualization is the leading technology to provide agile and scalable

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring a Two-Tiered Virtualized Data Center for Large Enterprise Networks Published: 2014-01-10 Juniper Networks, Inc. 1194 North Mathilda Avenue Sunnyvale, California

More information

NETE-4635 Computer Network Analysis and Design. Designing a Network Topology. NETE4635 - Computer Network Analysis and Design Slide 1

NETE-4635 Computer Network Analysis and Design. Designing a Network Topology. NETE4635 - Computer Network Analysis and Design Slide 1 NETE-4635 Computer Network Analysis and Design Designing a Network Topology NETE4635 - Computer Network Analysis and Design Slide 1 Network Topology Design Themes Hierarchy Redundancy Modularity Well-defined

More information

Example: Advertised Distance (AD) Example: Feasible Distance (FD) Example: Successor and Feasible Successor Example: Successor and Feasible Successor

Example: Advertised Distance (AD) Example: Feasible Distance (FD) Example: Successor and Feasible Successor Example: Successor and Feasible Successor 642-902 Route: Implementing Cisco IP Routing Course Introduction Course Introduction Module 01 - Planning Routing Services Lesson: Assessing Complex Enterprise Network Requirements Cisco Enterprise Architectures

More information

NetFlow Subinterface Support

NetFlow Subinterface Support NetFlow Subinterface Support Feature History Release Modification 12.2(14)S This feature was introduced. 12.2(15)T This feature was integrated into Cisco IOS Release 12.2 T. This document describes the

More information

OSPF Routing Protocol

OSPF Routing Protocol OSPF Routing Protocol Contents Introduction Network Architecture Campus Design Architecture Building Block Design Server Farm Design Core Block Design WAN Design Architecture Protocol Design Campus Design

More information

Networking 4 Voice and Video over IP (VVoIP)

Networking 4 Voice and Video over IP (VVoIP) Networking 4 Voice and Video over IP (VVoIP) Course Objectives This course will give delegates a good understanding of LANs, WANs and VVoIP (Voice and Video over IP). It is aimed at those who want to move

More information

SSVP SIP School VoIP Professional Certification

SSVP SIP School VoIP Professional Certification SSVP SIP School VoIP Professional Certification Exam Objectives The SSVP exam is designed to test your skills and knowledge on the basics of Networking and Voice over IP. Everything that you need to cover

More information

Network Technologies for Next-generation Data Centers

Network Technologies for Next-generation Data Centers Network Technologies for Next-generation Data Centers SDN-VE: Software Defined Networking for Virtual Environment Rami Cohen, IBM Haifa Research Lab September 2013 Data Center Network Defining and deploying

More information

Data Networking and Architecture. Delegates should have some basic knowledge of Internet Protocol and Data Networking principles.

Data Networking and Architecture. Delegates should have some basic knowledge of Internet Protocol and Data Networking principles. Data Networking and Architecture The course focuses on theoretical principles and practical implementation of selected Data Networking protocols and standards. Physical network architecture is described

More information

TSHOOT: Troubleshooting and Maintaining Cisco IP Networks

TSHOOT: Troubleshooting and Maintaining Cisco IP Networks 642-832 TSHOOT: Troubleshooting and Maintaining Cisco IP Networks Course Introduction Course Introduction Module 01 - Planning Maintenance for Complex Networks Lesson: Applying Maintenance Methodologies

More information

Application Centric Infrastructure Overview: Implement a Robust Transport Network for Dynamic Workloads

Application Centric Infrastructure Overview: Implement a Robust Transport Network for Dynamic Workloads White Paper Application Centric Infrastructure Overview: Implement a Robust Transport Network for Dynamic Workloads What You Will Learn Application centric infrastructure (ACI) provides a robust transport

More information

Demonstrating the high performance and feature richness of the compact MX Series

Demonstrating the high performance and feature richness of the compact MX Series WHITE PAPER Midrange MX Series 3D Universal Edge Routers Evaluation Report Demonstrating the high performance and feature richness of the compact MX Series Copyright 2011, Juniper Networks, Inc. 1 Table

More information

Implementing Cisco Data Center Unified Fabric Course DCUFI v5.0; 5 Days, Instructor-led

Implementing Cisco Data Center Unified Fabric Course DCUFI v5.0; 5 Days, Instructor-led Implementing Cisco Data Center Unified Fabric Course DCUFI v5.0; 5 Days, Instructor-led Course Description The Implementing Cisco Data Center Unified Fabric (DCUFI) v5.0 is a five-day instructor-led training

More information

Virtual Private LAN Service on Cisco Catalyst 6500/6800 Supervisor Engine 2T

Virtual Private LAN Service on Cisco Catalyst 6500/6800 Supervisor Engine 2T White Paper Virtual Private LAN Service on Cisco Catalyst 6500/6800 Supervisor Engine 2T Introduction to Virtual Private LAN Service The Cisco Catalyst 6500/6800 Series Supervisor Engine 2T supports virtual

More information

Cisco Virtual Topology System: Data Center Automation for Next-Generation Cloud Architectures

Cisco Virtual Topology System: Data Center Automation for Next-Generation Cloud Architectures White Paper Cisco Virtual Topology System: Data Center Automation for Next-Generation Cloud Architectures 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

More information

Introduction about cisco company and its products (network devices) Tell about cisco offered courses and its salary benefits (ccna ccnp ccie )

Introduction about cisco company and its products (network devices) Tell about cisco offered courses and its salary benefits (ccna ccnp ccie ) CCNA Introduction about cisco company and its products (network devices) Tell about cisco offered courses and its salary benefits (ccna ccnp ccie ) Inform about ccna its basic course of networking Emergence

More information

ETHERNET VPN (EVPN) NEXT-GENERATION VPN FOR ETHERNET SERVICES

ETHERNET VPN (EVPN) NEXT-GENERATION VPN FOR ETHERNET SERVICES ETHERNET VPN (EVPN) NEXT-GENERATION VPN FOR ETHERNET SERVICES Alastair JOHNSON (AJ) February 2014 alastair.johnson@alcatel-lucent.com AGENDA 1. EVPN Background and Motivation 2. EVPN Operations 3. EVPN

More information

Configuring Switch Ports and VLAN Interfaces for the Cisco ASA 5505 Adaptive Security Appliance

Configuring Switch Ports and VLAN Interfaces for the Cisco ASA 5505 Adaptive Security Appliance CHAPTER 4 Configuring Switch Ports and VLAN Interfaces for the Cisco ASA 5505 Adaptive Security Appliance This chapter describes how to configure the switch ports and VLAN interfaces of the ASA 5505 adaptive

More information

Network Virtualization for Large-Scale Data Centers

Network Virtualization for Large-Scale Data Centers Network Virtualization for Large-Scale Data Centers Tatsuhiro Ando Osamu Shimokuni Katsuhito Asano The growing use of cloud technology by large enterprises to support their business continuity planning

More information

Interconnecting Cisco Network Devices 1 Course, Class Outline

Interconnecting Cisco Network Devices 1 Course, Class Outline www.etidaho.com (208) 327-0768 Interconnecting Cisco Network Devices 1 Course, Class Outline 5 Days Interconnecting Cisco Networking Devices, Part 1 (ICND1) v2.0 is a five-day, instructorled training course

More information

WAN Traffic Management with PowerLink Pro100

WAN Traffic Management with PowerLink Pro100 Whitepaper WAN Traffic Management with PowerLink Pro100 Overview In today s Internet marketplace, optimizing online presence is crucial for business success. Wan/ISP link failover and traffic management

More information

Expert Reference Series of White Papers. vcloud Director 5.1 Networking Concepts

Expert Reference Series of White Papers. vcloud Director 5.1 Networking Concepts Expert Reference Series of White Papers vcloud Director 5.1 Networking Concepts 1-800-COURSES www.globalknowledge.com vcloud Director 5.1 Networking Concepts Rebecca Fitzhugh, VMware Certified Instructor

More information

Expert Reference Series of White Papers. VMware vsphere Distributed Switches

Expert Reference Series of White Papers. VMware vsphere Distributed Switches Expert Reference Series of White Papers VMware vsphere Distributed Switches info@globalknowledge.net www.globalknowledge.net VMware vsphere Distributed Switches Rebecca Fitzhugh, VCAP-DCA, VCAP-DCD, VCAP-CIA,

More information

Building Secure Network Infrastructure For LANs

Building Secure Network Infrastructure For LANs Building Secure Network Infrastructure For LANs Yeung, K., Hau; and Leung, T., Chuen Abstract This paper discusses the building of secure network infrastructure for local area networks. It first gives

More information

IMPLEMENTING CISCO SWITCHED NETWORKS V2.0 (SWITCH)

IMPLEMENTING CISCO SWITCHED NETWORKS V2.0 (SWITCH) IMPLEMENTING CISCO SWITCHED NETWORKS V2.0 (SWITCH) COURSE OVERVIEW: Implementing Cisco Switched Networks (SWITCH) v2.0 is a five-day instructor-led training course developed to help students prepare for

More information

EVOLVING ENTERPRISE NETWORKS WITH SPB-M APPLICATION NOTE

EVOLVING ENTERPRISE NETWORKS WITH SPB-M APPLICATION NOTE EVOLVING ENTERPRISE NETWORKS WITH SPB-M APPLICATION NOTE EXECUTIVE SUMMARY Enterprise network managers are being forced to do more with less. Their networks are growing in size and complexity. They need

More information

Configuring Switch Ports and VLAN Interfaces for the Cisco ASA 5505 Adaptive Security Appliance

Configuring Switch Ports and VLAN Interfaces for the Cisco ASA 5505 Adaptive Security Appliance CHAPTER 5 Configuring Switch Ports and VLAN Interfaces for the Cisco ASA 5505 Adaptive Security Appliance This chapter describes how to configure the switch ports and VLAN interfaces of the ASA 5505 adaptive

More information