Avaya VENA Data Center Technical Solution Guide
VENA Data Center Engineering
Avaya VENA Data Center Technical Solution Guide
Avaya Networking
Document Date:
Document Number: NN
Document Version: 1.0
© 2012 Avaya Inc. All Rights Reserved.

Notices
While reasonable efforts have been made to ensure that the information in this document is complete and accurate at the time of printing, Avaya assumes no liability for any errors. Avaya reserves the right to make changes and corrections to the information in this document without the obligation to notify any person or organization of such changes.

Documentation disclaimer
Avaya shall not be responsible for any modifications, additions, or deletions to the original published version of this documentation unless such modifications, additions, or deletions were performed by Avaya. End Users agree to indemnify and hold harmless Avaya, Avaya's agents, servants and employees against all claims, lawsuits, demands and judgments arising out of, or in connection with, subsequent modifications, additions or deletions to this documentation, to the extent made by End User.

Link disclaimer
Avaya is not responsible for the contents or reliability of any linked Web sites referenced within this site or documentation(s) provided by Avaya. Avaya is not responsible for the accuracy of any information, statement or content provided on these sites and does not necessarily endorse the products, services, or information described or offered within them. Avaya does not guarantee that these links will work all the time and has no control over the availability of the linked pages.

Warranty
Avaya provides a limited warranty on this product. Refer to your sales agreement to establish the terms of the limited warranty. In addition, Avaya's standard warranty language, as well as information regarding support for this product while under warranty, is available to Avaya customers and other parties through the Avaya Support Web site. Please note that if you acquired the product from an authorized reseller, the warranty is provided to you by said reseller and not by Avaya.

Licenses
THE SOFTWARE LICENSE TERMS AVAILABLE ON THE AVAYA WEBSITE ARE APPLICABLE TO ANYONE WHO DOWNLOADS, USES AND/OR INSTALLS AVAYA SOFTWARE, PURCHASED FROM AVAYA INC., ANY AVAYA AFFILIATE, OR AN AUTHORIZED AVAYA RESELLER (AS APPLICABLE) UNDER A COMMERCIAL AGREEMENT WITH AVAYA OR AN AUTHORIZED AVAYA RESELLER. UNLESS OTHERWISE AGREED TO BY AVAYA IN WRITING, AVAYA DOES NOT EXTEND THIS LICENSE IF THE SOFTWARE WAS OBTAINED FROM ANYONE OTHER THAN AVAYA, AN AVAYA AFFILIATE OR AN AVAYA AUTHORIZED RESELLER, AND AVAYA RESERVES THE RIGHT TO TAKE LEGAL ACTION AGAINST YOU AND ANYONE ELSE USING OR SELLING THE SOFTWARE WITHOUT A LICENSE. BY INSTALLING, DOWNLOADING OR USING THE SOFTWARE, OR AUTHORIZING OTHERS TO DO SO, YOU, ON BEHALF OF YOURSELF AND THE ENTITY FOR WHOM YOU ARE INSTALLING, DOWNLOADING OR USING THE SOFTWARE (HEREINAFTER REFERRED TO INTERCHANGEABLY AS "YOU" AND "END USER"), AGREE TO THESE TERMS AND CONDITIONS AND CREATE A BINDING CONTRACT BETWEEN YOU AND AVAYA INC. OR THE APPLICABLE AVAYA AFFILIATE ("AVAYA").

Copyright
Except where expressly stated otherwise, no use should be made of the Documentation(s) and Product(s) provided by Avaya. All content in this documentation(s) and the product(s) provided by Avaya, including the selection, arrangement and design of the content, is owned either by Avaya or its licensors and is protected by copyright and other intellectual property laws including the sui generis rights relating to the protection of databases.
You may not modify, copy, reproduce, republish, upload, post, transmit or distribute in any way any content, in whole or in part, including any code and software. Unauthorized reproduction, transmission, dissemination, storage, and/or use without the express written consent of Avaya can be a criminal, as well as a civil, offense under the applicable law.

Third Party Components
Certain software programs or portions thereof included in the Product may contain software distributed under third party agreements ("Third Party Components"), which may contain terms that expand or limit rights to use certain portions of the Product ("Third Party Terms"). Information regarding distributed Linux OS source code (for those Products that have distributed the Linux OS source code), and identifying the copyright holders of the Third Party Components and the Third Party Terms that apply to them, is available on the Avaya Support Web site.

Trademarks
The trademarks, logos and service marks ("Marks") displayed in this site, the documentation(s) and product(s) provided by Avaya are the registered or unregistered Marks of Avaya, its affiliates, or other third parties. Users are not permitted to use such Marks without prior written consent from Avaya or such third party which may own the Mark. Nothing contained in this site, the documentation(s) and product(s) should be construed as granting, by implication, estoppel, or otherwise, any license or right in and to the Marks without the express written permission of Avaya or the applicable third party. Avaya is a registered trademark of Avaya Inc. All non-Avaya trademarks are the property of their respective owners.

Downloading documents
For the most current versions of documentation, see the Avaya Support Web site.

Contact Avaya Support
Avaya provides a telephone number for you to use to report problems or to ask questions about your product. For the support telephone number in the United States and additional support telephone numbers, see the Avaya Support Web site.
Abstract
This Technical Solution Guide describes how to design an Avaya VENA Data Center. The document provides an overview of the best design practices to implement in a virtualized data center capable of moving applications and services to where they are most needed. Information in this Technical Solution Guide has been obtained through Avaya Networking interoperability testing and additional technical discussions. Testing was conducted at the Avaya Networking Test Lab.

This Technical Solution Guide is intended for Avaya sales teams, partner sales teams, and end-user customers. All of these groups can benefit from understanding the common design practices and recommended components for an Avaya VENA Data Center.

For any comments, edits, corrections, or general feedback, please contact Dan DeBacker ([email protected]).

Acronym Key
Throughout this guide the following acronyms will be used:

AAA: Authentication, Authorization, and Accounting
ACE: Agile Communication Environment
ADAC: Auto Detect / Auto Configure
AES: Application Enablement Services
AIE: Application Integration Engine
BCB: Backbone Core Bridge
BEB: Backbone Edge Bridge
B-VLAN: Backbone VLAN
COM: Configuration and Orchestration Manager
CRI: Communication Resources, Inc.
C-VLAN: Customer VLAN
DAI: Dynamic ARP Inspection
DHCP: Dynamic Host Configuration Protocol
DMLT: Distributed MultiLink Trunking
ERS: Ethernet Routing Switch
FCoE: Fibre Channel over Ethernet
I-SID: Service Instance Identifier
IST: InterSwitch Trunk
LLDP: Link Layer Discovery Protocol
MLT: MultiLink Trunking
PoE: Power over Ethernet
SAN: Storage Area Network
SIP: Session Initiation Protocol
SLPP: Simple Loop Prevention Protocol
SMLT: Split MultiLink Trunking
SPB: Shortest Path Bridging
ToR: Top of Rack
VENA: Virtual Enterprise Network Architecture
VLACP: Virtual Link Aggregation Control Protocol
VLC: Video LAN Client
VPS: Virtualization Provisioning Service
VSN: Virtual Services Network
VSP: Virtual Services Platform
Table of Contents

Figures
Tables
1. Introducing VENA
  1.1 Introducing the VENA Data Center
    1.1.1 Traditional Data Center Architecture
    1.1.2 Avaya VENA Data Center Architecture
2. Avaya VENA Data Center Components
3. Access Layer
  3.1 Avaya IP Phones
  3.2 Desktop Computers
  3.3 Ethernet Switching
    3.3.1 MultiLink Trunking (MLT and DMLT)
    3.3.2 Link Aggregation (LACP and VLACP)
    3.3.3 Simple Loop Prevention Protocol (SLPP)
  3.4 Access Layer Configuration Details
    3.4.1 Virtual LANs and IP Subnets
    3.4.2 Connection Details
    3.4.3 Configuration Notes
4. Data Center Layer
  4.1 Top of Rack (ToR) Ethernet Switching
    4.1.1 Avaya Ethernet Routing Switch 5600
    4.1.2 Avaya Virtual Services Platform 7000
  4.2 Switch Clustering: Split MultiLink Trunking (SMLT)
  4.3 Troubleshooting and Monitoring
    4.3.1 Packet Capture (PCAP)
    4.3.2 Port Mirroring
    4.3.3 Remote Logging
    4.3.4 Stackables Tools
  4.4 Avaya Aura
  4.5 VMware Servers
  4.6 Storage Area Network
  4.7 Avaya Unified Communications Management
    4.7.1 Avaya Configuration and Orchestration Manager (COM)
    4.7.2 Avaya Virtualization Provisioning Service (VPS)
  4.8 Network Access Control
    4.8.1 Identity Engines
  4.9 Network Operations
  4.10 Data Center Layer Configuration Details
    4.10.1 Virtual LANs and IP Subnets
    4.10.2 Connection Details
    4.10.3 ToR Configuration Notes
    4.10.4 SAN Configuration Notes
5. Core Layer
  SPB Topology
  Avaya Ethernet Routing Switch 8800
  Avaya Virtual Services Platform 9000
  Lossless Ethernet
  Logical Topologies
    SPB Services
    SPB Logical Topology 1
    SPB Logical Topology 2
    SPB Logical Topology 3
  Core Layer Configuration Details
    Virtual LANs and IP Subnets
    Connection Details
    Configuration Notes
6. Test Results
  ERS 8800 core test results
  VSP 9000 core test results
  Lossless Ethernet test results
Conclusion
Figures

Figure 1.1 Avaya VENA Virtual Services Fabric
Figure 1.2 Traditional vs. Avaya VENA Data Center
Figure 1.3 Traditional Data Center before moving a VM
Figure 1.4 Traditional Data Center after moving a VM
Figure 1.5 Full Mesh Data Center
Figure 1.6 Avaya VENA Data Center before moving a VM
Figure 1.7 Avaya VENA Data Center after moving a VM
Figure 2.1 Avaya VENA Data Center Topology
Figure 3.1 Common Access Layer Topology
Figure 3.2 Access Layer Topologies
Figure 3.3 Access Layer Connection Details
Figure 4.1 ERS 5600
Figure 4.2 Avaya VSP 7000 Ethernet Switch
Figure 4.3 Common Data Center ToR Switching Topology
Figure 4.4 ERS 5600 Stacking
Figure 4.5 Common Data Center SAN Switching Topology
Figure 4.6 Avaya COM
Figure 4.7 Avaya COM Device Inventory Manager
Figure 4.8 Avaya COM VLAN Manager
Figure 4.9 Avaya COM VPS Manager
Figure 4.10 Identity Engines Portfolio Architecture
Figure 4.11 Data Center Layer Topology
Figure 4.12 SAN Virtual LANs and Subnets
Figure 4.13 Data Center Layer Avaya Aura VLAN Details
Figure 4.14 Data Center VMware VLAN Details
Figure 4.15 Dell R610 Server Connectivity Details
Figure 4.16 Data Center Connection Details
Figure 4.17 Data Center 1 Server / Appliance Connection Details
Figure 4.18 Data Center 2 Server / Appliance Connection Details
Figure 4.19 Data Center 1 Rack Layout
Figure 4.20 Data Center 2 Rack Layout
Figure 5.1 SPB Core Layer
Figure 5.2 SPB Topology
Figure 5.3 Avaya ERS 8800
Figure 5.4 Avaya VSP 9000 Ethernet Switch
Figure 5.5 Non Virtualized Data Center
Figure 5.6 Virtualization Ready Data Center
Figure 5.7 Virtualized Data Center
Figure 5.8 SPB Virtual LANs and Subnets
Figure 5.9 BCB Access BEB Connection Details
Figure 5.10 BCB DC1 BEB Connection Details
Figure 5.11 BCB DC2 BEB Connection Details
Tables

Table 3.1 IP Phone Hardware
Table 3.2 Desktop Computer Hardware
Table 3.3 Common Access Layer Hardware
Table 3.4 Access Layer Virtual LANs and Subnets
Table 3.5 Access Layer Configuration Notes
Table 4.1 Common Data Center ToR Switching Hardware
Table 4.2 Common Data Center Layer Hardware
Table 4.3 CRI Virtualized Server Hardware
Table 4.4 SAN Hardware
Table 4.5 Unified Communications Management
Table 4.6 Network Operations Services
Table 4.7 Data Center 1 Virtual LANs and Subnets
Table 4.8 Data Center 2 Virtual LANs and Subnets
Table 4.9 ToR Switch Configuration in the Data Center
Table 4.10 SAN Switch Configuration in the Data Center
Table 5.1 ERS 8800 SPB Core Layer
Table 5.2 VSP 9000 SPB Core Layer
Table 5.3 Data Center Layer VRRP Priorities
Table 5.4 Access Layer VRRP Priorities
Table 5.5 Data Center Layer VRRP Priorities
Table 5.6 Access Layer VRRP Priorities
Table 5.7 Data Center Layer VRRP Priorities
Table 5.8 Access Layer VRRP Priorities
Table 5.9 SPB BCB Virtual LANs and Subnets
Table 5.10 SPB BEB DC1 Virtual LANs and Subnets
Table 5.11 SPB BEB DC2 Virtual LANs and Subnets
Table 5.12 SPB BEB Access Virtual LANs and Subnets
Table 5.13 General Core Configuration Notes
Table 5.14 SPB Core Configuration Notes (Topology 1)
Table 5.15 SPB Core Configuration Notes (Topology 2)
Table 5.16 SPB Core Configuration Notes (Topology 3)
Table 6.1 ERS 8800 Core Test Results
Table 6.2 VSP 9000 Core Test Results
Table 6.3 Lossless Ethernet Test Results
Conventions
This section describes the text, image, and command conventions used in this document.

Symbols
Tip: Highlights a configuration or technical tip.
Note: Highlights important information to the reader.
Warning: Highlights important information about an action that may result in equipment damage, configuration or data loss.

Text
Bold text indicates emphasis. Italic text in a Courier New font indicates text the user must enter or select in a menu item, button or command:

ERS T# show running-config

Output examples from Avaya devices are displayed in a Lucida Console font:

ERS T# show sys-info
Operation Mode:       Switch
MAC Address:          B0-00
PoE Module FW:
Reset Count:          83
Last Reset Type:      Management Factory Reset
Power Status:         Primary Power
Autotopology:         Enabled
Pluggable Port 45:    None
Pluggable Port 46:    None
Pluggable Port 47:    None
Pluggable Port 48:    None
Base Unit Selection:  Non-base unit using rear-panel switch
sysDescr:             Ethernet Routing Switch T-PWR
                      HW:02  FW:  SW:v  Mfg Date:  HW Dev:H/W rev.02
1. Introducing VENA

Avaya's Virtual Enterprise Network Architecture (VENA) helps enterprises reap the benefits of virtualization in a simplified and cost-effective manner. Unlike other virtualized products on the market, Avaya offers a comprehensive architecture that optimizes the network for business applications and services through virtualization. This technology utilizes a new, end-to-end, enterprise-wide architecture designed to help CIOs and IT departments meet the surging demand for new content and business collaboration applications. You can implement Avaya VENA in phases that suit your environment.

The Avaya VENA architecture increases scalability by delivering an infrastructure that creates a private cloud to deliver always-on content and access to applications in a dramatically simplified model. This approach also protects enterprises' core networks from the costly failures and human-error issues that often result from the traditional, complicated process of provisioning, adding, deleting, or changing applications in a virtualized environment.

One of the most attractive features of Avaya VENA is its flexibility. The architecture features a Virtual Services Fabric that provides an end-to-end connection from the desktop all the way through to the data center. Avaya VENA also includes products from its industry-leading partners: VMware for virtualization; Coraid and Dell for converged Ethernet storage area networks (SAN); Communication Resources, Inc. (CRI) for virtualized servers; and Silver Peak Systems for data center WAN optimization.

Figure 1.1 Avaya VENA Virtual Services Fabric
1.1 Introducing the VENA Data Center

The Avaya VENA Data Center is built on enhanced IEEE 802.1aq Shortest Path Bridging (SPB), a next-generation virtualization technology that revolutionizes the design, deployment and operations of enterprise campus core networks and data centers. The benefits of the technology are clearly evident in its ability to provide massive scalability and resiliency while at the same time reducing the complexity of the network. SPB makes network virtualization a much easier paradigm to deploy within the enterprise environment than other technologies.

This Technical Solution Guide focuses on the Avaya VENA Data Center and how it is implemented on new modular and fixed data center platforms. In some networks, you can also bring these capabilities to existing Avaya data routers and switches through a simple upgrade. The intent of this Technical Solution Guide is to describe the operational simplicity and efficiency of SPB by comparing a traditional data center with an Avaya VENA Data Center. The following figure illustrates what the data path looks like after a VM moves between data centers in both configurations. Imagine what the figure on the left will look like after several VM moves!

Figure 1.2 Traditional vs. Avaya VENA Data Center
1.1.1 Traditional Data Center Architecture

In a traditional data center configuration, the traffic flows into the network to a VM and out of the network in almost a direct path. (The red device in the following figures represents the VM.) The figure below shows an example of a traditional data center with Virtual Router Redundancy Protocol (VRRP) configured. Because end stations are often configured with a static default gateway IP address, a loss of the default gateway router causes a loss of connectivity to the remote networks. VRRP eliminates the single point of failure that can occur when the single static default gateway router for an end station is lost.

Figure 1.3 Traditional Data Center before moving a VM
A VM is a virtual machine (in this case a server). When a VM is moved, the virtual server is moved as is. This means that the IP addresses of that server remain the same when the server is moved from one data center to the other, which in turn dictates that the same IP subnet (and hence VLAN) be present in both data centers.

In Figure 1.4, the VM (red device) moved from the data center on the left to the data center on the right. To ensure a seamless transition that is transparent to the user, the VM retains its network connections through the default gateway. This method works, but it adds more hops to all traffic. As you can see in the figure below, one VM move results in a convoluted traffic path. Multiply this by many moves and soon the network looks like a tangled mess that is very inefficient, difficult to maintain, and almost impossible to troubleshoot.

Figure 1.4 Traditional Data Center after moving a VM
1.1.2 Avaya VENA Data Center Architecture

Avaya offers a choice of data products that comprise the infrastructure for the Avaya VENA Data Center. These routers and switches are designed to deliver energy efficiencies that reduce enterprise operating costs while evolving current infrastructure investments to help eliminate the need for costly forklift upgrades. To manage this virtualized data center, Avaya provides several network management tools, which are described later in this document.

The core component of the VENA Data Center is SPB. In an SPB network, an edge switch is called a Backbone Edge Bridge (BEB) and a core switch is called a Backbone Core Bridge (BCB).

Note: Once you create the SPB infrastructure, you configure the SPB services only on the BEBs at the edge of the network. No provisioning is required on the core SPB switches. This provides a robust carrier-grade architecture where the configuration on the core switches never needs to be touched when adding new services.

The boundary between the core MAC-in-MAC SPB domain and the edge customer 802.1Q domain is handled by a service instance identifier (I-SID). You provision an I-SID on the BEB and associate it with a particular service instance. The I-SID is then included in the SPB B-MAC header to identify and transmit any virtualized traffic in an encapsulated SPB frame. I-SIDs virtualize traffic in a Layer 2 Virtual Services Network (L2 VSN) or a Layer 3 Virtual Services Network (L3 VSN). With an L2 VSN, the I-SID is associated with a customer VLAN (C-VLAN), which is then virtualized across the backbone. With an L3 VSN, the I-SID is associated with a customer VRF, which is also virtualized across the backbone. Another implementation option that SPB supports in the data center is IP Shortcuts, which forwards standard IP packets over IS-IS in the SPB core.

Note: I-SID configuration is required only for virtual services such as L2 VSN and L3 VSN. With IP Shortcuts, no I-SID is required because forwarding is done using the Global Routing Table (GRT).

If you are using vMotion, use L2 VSNs between data centers. With L2 VSNs, you can simply add an IP address to the VLAN in both data centers and run VRRP between them to route to the rest of the network.
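To make the BEB-only provisioning concrete, the following is a minimal L2 VSN sketch (legacy ERS 8800 CLI paraphrased from memory; the VLAN ID and I-SID value are hypothetical examples, so verify the exact syntax against the documentation for your platform and release):

# Hypothetical example: extend customer VLAN 100 across the SPB core as an L2 VSN
# (edge provisioning only; nothing is configured on the BCBs)
config vlan 100 create byport 1
config vlan 100 i-sid 10100          # bind C-VLAN 100 to I-SID 10100

Repeating the same VLAN-to-I-SID binding on the BEB in the second data center then bridges hosts in VLAN 100 transparently between the two sites.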
Figure 1.5 shows an SPB topology of a large data center. This figure represents a full-mesh Avaya VENA Data Center fabric using SPB. In this topology, traffic never travels more than two hops.

Tip: Avaya recommends a two-tier, full-mesh topology for large data centers.

Figure 1.5 Full Mesh Data Center
Figure 1.6 shows two features that optimize an Avaya VENA Data Center:
- VLAN Routers in the Layer 2 domain (green icons)
- VRRP BackupMaster

The VLAN Routers use lookup tables to determine the best path to route incoming traffic (red dots) to the destination VM.

VRRP BackupMaster solves the problem of traffic congestion on the InterSwitch Trunk (IST). Because there can be only one VRRP Master, all other interfaces are in backup mode. In that case, all traffic arriving at a VRRP backup interface is forwarded over the IST link towards the primary VRRP switch, and the IST link does not have enough bandwidth to carry all the aggregated riser traffic. VRRP BackupMaster overcomes this issue by ensuring that the IST trunk is not used for primary data forwarding in such a case. The VRRP BackupMaster acts as an IP router for packets destined for the logical VRRP IP address. All traffic is routed directly to the destined subnetwork rather than through Layer 2 switches to the VRRP Master, which avoids the potential bandwidth limitation on the IST. (A configuration sketch follows Figure 1.6.)

The Avaya VENA Data Center optimizes your network for bidirectional traffic flows. However, this solution turns two SPB BCB nodes into BEBs where MAC and ARP learning is enabled on the Inter-VSN routing interfaces. If you do not care about top-down traffic flows, you can omit the Inter-VSN routing interfaces on the SPB BCB nodes. This makes the top-down IP routed paths less optimal, but the BCBs remain pure BCBs, thus simplifying the core switch configurations.

Figure 1.6 Avaya VENA Data Center before moving a VM
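As an illustration, VRRP with BackupMaster on a switch-cluster VLAN can be sketched as follows (legacy ERS 8800 CLI paraphrased from memory; the VLAN, VRID, and addresses are hypothetical examples):

# Hypothetical values: VLAN 112, VRID 12, virtual gateway 10.30.112.1
config vlan 112 ip create 10.30.112.2/24
config vlan 112 ip vrrp 12 address 10.30.112.1
config vlan 112 ip vrrp 12 backup-master enable   # backup node also forwards routed traffic
config vlan 112 ip vrrp 12 enable

The IST peer is configured the same way with its own interface address (for example 10.30.112.3/24), so both cluster nodes route traffic locally instead of hairpinning it across the IST.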
In the traditional data center, we saw the chaos that resulted when a lot of VMs were moved. In an Avaya VENA Data Center, as shown below, the incoming traffic enters the Layer 2 domain where an edge switch uses Inter-VSN Routing to attach an I-SID to a VLAN. The I-SID bridges traffic directly to the destination. With VRRP BackupMaster, the traffic no longer goes through the default gateway; it takes the most direct route in and out of the network.

Figure 1.7 Avaya VENA Data Center after moving a VM
2. Avaya VENA Data Center Components

The following list describes the hardware components that are used in the different layers of the Avaya VENA Data Center. Figure 2.1 illustrates these components in their respective layers.

In the Access layer, Avaya VENA is supported by a range of Avaya Ethernet Routing Switches (ERS), including the ERS 2500, ERS 4500, and ERS 5000 stackable switches and the ERS 8300 modular switch.

In the Data Center layer, Avaya offers a choice of Top-of-Rack Ethernet switches, including the Avaya ERS 5000 and the new Avaya Virtual Services Platform 7000 (VSP 7000). The VSP 7000 supports high-density 10 Gigabit ports with an evolution to 40/100G and Fibre Channel over Ethernet (FCoE).

In the Core layer, Avaya VENA is supported by the VSP 9000 and the ERS 8800/8600.

Note: Avaya recommends configuring all components according to Avaya's best practices as outlined in the Small, Medium, Large, and Super Large Technical Solution Guides. These guides are available on the Avaya Technical Support Web site.
Figure 2.1 Avaya VENA Data Center Topology
3. Access Layer

The Avaya Networking Test Lab uses a common access layer for all of the Avaya VENA Data Center tests. The access layer uses Avaya Ethernet switches to connect Avaya IP Phones and desktop PCs to the network. This common access layer topology and configuration is the same regardless of the core or data center layer configuration.

3.1 Avaya IP Phones

Each access layer switch has a pool of 10/100 and 10/100/1000 Avaya IP Phones for testing access layer interoperability. The IP phones register using SIP or H.323 to the Avaya Aura servers and services distributed between the two data centers. Both H.323 and SIP endpoints were evaluated.

Hardware:
- Four or more Avaya IP Phone 1603SW-I (Black)
- Four or more Avaya IP Phone 1608-I (Black)
- Four or more Avaya IP Phone 1616-I (Black)
- Four or more Avaya IP Phone 9620C (Gray)
- Four or more Avaya IP Phone 9640 (Gray)
- Four or more Avaya IP Phone 9640G
- Four or more Avaya IP Phone 9650C
- Four or more Avaya IP Phone 9608
- Four or more Avaya IP Phone 9611G
- Four or more Avaya IP Phone 9621G

Notes: Both SIP and H.323 evaluated.

Table 3.1 IP Phone Hardware

3.2 Desktop Computers

Each access layer switch has a pool of 10/100/1000 desktop or notebook PCs that connect to the Avaya IP Phones. The desktop PCs are used to verify Avaya IP Phone interoperability, support performance testing, and run the Avaya one-X softphone client.

Hardware: HP, Dell or Lenovo based on Avaya standards
Notes: Avaya one-X client. Necessary performance testing software.

Table 3.2 Desktop Computer Hardware
3.3 Ethernet Switching

The common access layer consists of four Avaya Ethernet Routing Switch families that support 802.3af Power over Ethernet (PoE) and can be positioned as access layer switches in small, medium and large enterprise networks. The common access layer includes the stackable ERS 2500 series, ERS 4500 series, and ERS 5000 series, as well as the modular ERS 8300 series switches.

Hardware and uplink notes:
- Two or more ERS 2526T-PWR or ERS 2550T-PWR series PoE switches, with two 1000BASE-SX SFP transceivers (2 x 1GbE uplinks to Core layer)
- Two or more ERS 4526GTX-PWR series PoE switches, with two 10GBASE-SR XFP transceivers (2 x 10GbE uplinks to Core layer)
- Two or more ERS 5650TD-PWR series PoE switches, with two 10GBASE-SR XFP transceivers (2 x 10GbE uplinks to Core layer)
- One 6-slot or 10-slot ERS 8300 PoE chassis, with three 8301AC power supplies, two 8394SF switch fabrics, two 8348GTX-PWR I/O modules, and two 10GBASE-SR XFP transceivers (2 x 10GbE uplinks to Core layer)

Table 3.3 Common Access Layer Hardware

Each access layer switch family connects to the core layer using the following protocols to simulate a traditional customer environment:
- Distributed MultiLink Trunking (DMLT)
- Virtual Link Aggregation Control Protocol (VLACP)
- Simple Loop Prevention Protocol (SLPP)

The following sections provide a brief description of these protocols.
3.3.1 MultiLink Trunking (MLT and DMLT)

MultiLink Trunking (MLT) is a point-to-point connection that aggregates multiple ports so that they logically act like a single port. Grouping multiple ports into a logical link provides higher aggregate bandwidth in switch-to-switch or switch-to-server applications. Distributed MultiLink Trunking (DMLT) enhances MLT by providing module redundancy: DMLT allows you to aggregate similar ports from different modules.

Tip: Avaya recommends always using DMLT when possible.

3.3.2 Link Aggregation (LACP and VLACP)

Link Aggregation Control Protocol (LACP) works with MLT to manage switch ports and port memberships to form a link aggregation group (LAG). LACP allows you to gather one or more links to form a LAG, which a Media Access Control (MAC) client treats as a single link. LACP dynamically detects whether links can be aggregated into a link aggregation group and does so when links become available.

Virtual LACP (VLACP) is an Avaya modification to LACP that provides end-to-end failure detection. VLACP is not a link aggregation protocol; it implements a link status control protocol at the port level. It is a mechanism to periodically check the end-to-end health of a point-to-point or end-to-end connection. You can run VLACP on single ports or on ports that are part of an MLT.

Tip: Avaya recommends that you do not configure VLACP on LACP-enabled ports. VLACP does not operate properly with LACP.

3.3.3 Simple Loop Prevention Protocol (SLPP)

Simple Loop Prevention Protocol (SLPP) prevents loops in the network. SLPP provides active protection against network loops by sending a test packet to the VLAN. A loop is detected if the switch or the peer aggregation switch on the same VLAN receives the original packet. If a loop is detected, the switch disables the port; re-enabling the port requires manual intervention.

Tip: Avaya recommends using SLPP to protect the network against Layer 2 loops.
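As a consolidated sketch of these protocols, an access-stack uplink with DMLT, VLACP, and SLPP might be configured along the following lines (ERS stackable ACLI paraphrased from memory; the port and VLAN numbers are hypothetical, and exact command forms vary by platform and release, so verify against the product documentation):

! Hypothetical DMLT uplink: ports 1/48 and 2/48 (one per stack unit)
vlan ports 1/48,2/48 tagging tagAll     ! 802.1Q tag the uplinks
mlt 1 name Uplink member 1/48,2/48
mlt 1 enable
! VLACP end-to-end health check on the uplink ports
vlacp macaddress 180.c200.f             ! reserved VLACP multicast address
interface fastEthernet 1/48,2/48
vlacp port 1/48,2/48 enable
exit
vlacp enable
! SLPP test packets on the edge VLANs (assumed form)
slpp enable
slpp vid 200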
Figure 3.1 Common Access Layer Topology

To verify interoperability with Avaya IP Phones and desktop video devices, dedicated access layer ports on each series of switches support common customer-implemented authentication, auto-discovery, and provisioning methods. Each port is configured so you can connect an Avaya IP Phone with a desktop PC.
3.4 Access Layer Configuration Details

The following sections provide configuration information for the VENA Data Center solution access layer. This configuration remains the same for all data center layer and core configurations.

3.4.1 Virtual LANs and IP Subnets

Each access layer has a pool of VLANs to simulate a typical customer environment. Each switching family in the access layer simulates an individual wiring closet and is assigned to a common Management and Guest VLAN as well as unique User and Converged VLANs. Each simulated wiring closet has a pool of ten contiguous VLANs and IP subnets to permit additional VLANs to be added in the future. The following table provides an example of the VLAN IDs and IP subnet schemes that can be deployed:

VLAN ID / VLAN Name / Subnet / Description
10 / Management / /24 / Common Management VLAN
117 / Guest / /24 / Common Guest VLAN
200 / Converged 1 / /24 / Wiring Closet 1 Converged VLAN
201 / User 1 / /24 / Wiring Closet 1 User VLAN
210 / Converged 2 / /24 / Wiring Closet 2 Converged VLAN
211 / User 2 / /24 / Wiring Closet 2 User VLAN
220 / Converged 3 / /24 / Wiring Closet 3 Converged VLAN
221 / User 3 / /24 / Wiring Closet 3 User VLAN
230 / Converged 4 / /24 / Wiring Closet 4 Converged VLAN
231 / User 4 / /24 / Wiring Closet 4 User VLAN

Table 3.4 Access Layer Virtual LANs and Subnets
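A sketch of provisioning one wiring closet's VLANs on a stackable access switch follows (ERS ACLI paraphrased from memory; the VLAN IDs are taken from Table 3.4, while the port ranges are hypothetical):

! Hypothetical example for wiring closet 1
vlan create 10 name Management type port
vlan create 117 name Guest type port
vlan create 200 name Converged-1 type port
vlan create 201 name User-1 type port
vlan members add 200 1-40        ! phone-facing ports (tagged by ADAC)
vlan members add 201 1-40        ! PC-facing ports (untagged)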
Figure 3.2 Access Layer Topologies
3.4.2 Connection Details

The following diagram provides a physical view of how the access layer switches connect to the access distribution switches:

Figure 3.3 Access Layer Connection Details
3.4.3 Configuration Notes

Enable the following features on the switches in the access layer. Unless otherwise stated, each feature is implemented following Avaya's current best practices and recommendations. An ADAC example follows the table.

802.1Q Tagging: Enable on all MLT ports.

Auto Detect / Auto Configure (ADAC): Enable on all ports (except MLT). Globally configure ADAC to tag the respective Converged VLAN and untag the respective User VLAN. ADAC leverages LLDP for IP Phone detection and LLDP-MED policies for configuration. LLDP policies must supply Location, Call Server and File Server.

Broadcast / Multicast Rate Limiting: Enable on all ports (except MLT).

IP Source Guard / DHCP Snooping / Dynamic ARP Inspection (DAI): Enable on each switch. Configure edge ports as untrusted. Configure MLT ports as trusted.

Management: Assign a switch IP address on its respective management VLAN to each ERS 2500, ERS 4500, and ERS 5000 stackable switch. Assign a stack IP address on the respective management VLAN to each ERS 2500, ERS 4500, and ERS 5000 stack of switches. Assign a virtual IP address on its respective VLAN to the ERS 8300 modular switch. Enable SSHv2, SNMPv3 and HTTPS secure management services.

QoS: ADAC automatically provides QoS for the Converged VLAN.

SLPP: Enable on all edge VLANs. Enable SLPP Guard for each SLPP-enabled VLAN.

Spanning Tree Protocol / BPDU Filtering: Enable on all edge ports (except IST and SMLT).

VLACP: Enable on all MLT ports.

Table 3.5 Access Layer Configuration Notes
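For example, ADAC with LLDP-based phone detection might be enabled along the following lines (ERS stackable ACLI paraphrased from memory; the VLAN ID, uplink port, and port range are hypothetical, and per-release syntax should be verified):

! Hypothetical example for wiring closet 1 (Converged VLAN 200)
adac voice-vlan 200
adac op-mode tagged-frames       ! tag phone traffic, leave PC traffic untagged
adac uplink-port 47              ! assumed uplink toward the core
adac enable
interface fastEthernet 1-40
adac port 1-40 enable            ! detect phones on user-facing ports
exit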
4. Data Center Layer

The Avaya Networking Test Lab uses a common data center access layer for all of the Avaya VENA Data Center tests. The data center access layer connects the Avaya Aura servers, Avaya Aura media gateways, storage arrays, and CRI virtualized servers to the network. For availability and failover testing, the common data center access layer has two data centers. The common data center access layer topology and configuration remains the same regardless of the core and access layer configuration.

4.1 Top of Rack (ToR) Ethernet Switching

The common data center access layer consists of two data centers, each with a specific configuration of top-of-rack (ToR) switches that can be positioned as premium data center switching solutions for small, medium, and large networks. The Avaya Networking Test Lab used ERS 5600 series switches for the configuration information in this Technical Solution Guide. However, the new Virtual Services Platform (VSP) 7000 ToR switch also passed the tests for this solution. The ToR switches are used primarily to connect Coraid's EtherDrive SRX Storage Area Network (SAN) device to the SPB core using a 10 GbE interface.

The following sections describe the main features of the ERS 5600 and the VSP 7000 to help you decide which platform to use in the data center. Note that you can configure Lossless Buffering on the ERS 5600 only; it is not currently supported on the VSP 7000.

Hardware and uplink notes:
- Data Center 1: Six ERS 5650TFD series switches with redundant power supplies, and six 10GBASE-SR XFP transceivers (4 x 10GbE IST; 2 x 10GbE uplinks to Core layer)
- Data Center 2: Six ERS 5650TFD series switches with redundant power supplies, and six 10GBASE-SR XFP transceivers (4 x 10GbE IST; 2 x 10GbE uplinks to Core layer)

Table 4.1 Common Data Center ToR Switching Hardware

ToR switches can operate as standalone units. However, as your network needs grow, you can horizontally stack up to eight switches in each stack. Stacking not only provides resiliency; it also provides management efficiency. You can manage a stack with a single IP address, as a single virtual switch, or as a single image across all models in the stack. The ToR data center switching solution consists of two horizontal stacks of either ERS 5600 series switches or VSP 7000 switches configured as a resilient horizontal stack cluster.

Important: All units in a stack must be from the same product family and use the same software version.
4.1.1 Avaya Ethernet Routing Switch 5600

The Avaya Ethernet Routing Switch 5600 (ERS 5600) is a Layer 2/3 routing switch providing direct end station connectivity and aggregation for closet connectivity, as well as for servers, network appliances, and other devices. The ERS 5600 provides flexibility in many network designs, as it can be utilized as a closet switch, an aggregation switch, or a small core switch. The ERS 5600 supports Switch Clustering by using Split MultiLink Trunking (SMLT) for active/active uplink connectivity without the use of any form of spanning tree. However, the ERS 5600 also supports the IEEE 802.1w Rapid Spanning Tree Protocol (RSTP) for those environments where spanning tree is desired.

The ERS 5600 also supports Lossless Buffering, which is critical in data center applications where reliable data transfer is more important than enhanced throughput. In lossless mode, when a port receives traffic volume greater than the port bandwidth, the port sends flow control (pause) frames to the sender. The flow control frames notify the sender to stop packet transmission for a specified amount of time. All end stations connected to the stack must be capable of symmetric flow control, and all switch ports must auto-negotiate to symmetric flow control. Flow control for 10G ports is symmetric by default when lossless buffering mode is enabled. (A configuration sketch appears at the end of this section.)

Figure 4.1 ERS 5600

4.1.2 Avaya Virtual Services Platform 7000

The Avaya Virtual Services Platform 7000 (VSP 7000) is a new family of 1/10 Gigabit Top-of-Rack Ethernet switches. These high-density, high-capacity switches provide a high-performance forwarding engine for data center aggregation and small to medium core switching. The following is a list of some of the Avaya VSP 7000 features:
- 1RU stackable switch with class-leading switching performance of over 1.2Tbps
- Data center grade hardware that supports front-to-back or back-to-front cooling
- 5th generation ASIC technology for future-proof feature requirements
- 24 ports of SFP+ supporting either 1 or 10 GbE
- Media Dependent Adapter (MDA) for a range of high-speed expansion options
- SFP+ connectivity to connect at 1 Gigabit or 10 Gigabit speeds
- Future-ready with flexible support for 40Gbps and 100Gbps Ethernet and Fibre Channel
- Support for network-wide fabric-based Virtualized Services and Lossless environments
- Dual, hot-swappable AC or DC power supplies and fan trays for always-on high performance

The Avaya VSP 7000 is designed for Enterprise customers requiring high-density, high-performance 10 Gigabit connectivity. In a high-performance data center, the Avaya VSP 7000 can serve as a Top-of-Rack switch. In a network with an existing core switch deployment, it can provide a cost-effective 10 Gigabit Ethernet fan-out capability. In a campus distribution layer, it can deliver flexible connectivity and consolidation options.
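Where lossless operation is required (for example, for AoE or iSCSI storage traffic), enabling the ERS 5600 lossless buffer mode can be sketched as follows (a single hedged ACLI command recalled from ERS 5600 releases; availability and exact syntax should be verified for your software version):

! Assumed ACLI form; requires symmetric flow control end-to-end as described above
qos agent buffer lossless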
Figure 4.2 Avaya VSP 7000 Ethernet Switch

4.2 Switch Clustering: Split MultiLink Trunking (SMLT)

Switch Clustering using Split MultiLink Trunking (SMLT) provides industry-leading resiliency. Providing redundant active-active links without using Spanning Tree allows the ultimate design in a converged environment. Sub-second failover and the simplicity of a network without Spanning Tree reduce TCO and ensure converged applications will function flawlessly. A vital feature of Switch Clustering is its ability to work with any end device (third-party switch, server, and so on) that supports a form of link aggregation.

Switch Clustering also provides the ability to perform virtually hitless upgrades of the core switches (cluster). With all connections to the cluster dually attached, a single core switch can be taken out of service without interrupting end user traffic. This switch can then be upgraded and brought back into service. By performing the same function on the other switch after the upgraded switch is back online, the entire cluster is upgraded without taking a service outage and with minimal interruption to traffic flows on the network.

Each horizontal stack cluster connects to the core layer using SMLT, which improves Layer 2 (bridged) resiliency by adding switch failure redundancy with sub-second failover. SMLT allows you to connect any switch or server that supports some form of link aggregation to two distinct, separate SMLT endpoints or switches. These SMLT switches form a Switch Cluster and are referred to as an IST Core Switch pair. Both data center horizontal stack clusters use VLACP and SLPP to simulate a customer environment and are configured following Avaya's best practices.
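A minimal IST/SMLT sketch for one member of an ERS 5600 switch cluster follows (ACLI paraphrased from memory; the peer address, VLAN, SMLT ID, and ports are hypothetical, and the same configuration must be mirrored on the IST peer; verify syntax per release):

! Hypothetical IST over MLT 1 using VLAN 2
vlan create 2 name IST type port
vlan ports 1/47-1/48 tagging tagAll
mlt 1 name IST member 1/47-1/48
mlt 1 enable
interface mlt 1
ist peer-ip 10.20.2.2 vlan 2     ! hypothetical peer address on the IST VLAN
ist enable
exit
! SMLT toward a dual-homed server or switch
mlt 2 name Server-1 member 1/10,2/10
interface mlt 2
smlt 2                           ! SMLT ID must match on both cluster peers
exit
mlt 2 enable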
Figure 4.3 Common Data Center ToR Switching Topology

Each data center access layer includes an Avaya Aura Server VLAN, VMware VLANs, and various application VLANs. Each VLAN (except the Avaya Aura VLAN) extends between data centers.

The ERS 5600 series switches support a resilient stacking architecture. The stack is created by using the stacking cables and stacking ports on the ERS switches. Switches are cabled together in the manner shown below so that every switch has two stacking connections for utmost resiliency. The shortest path algorithm used for stacking allows for the most efficient use of bandwidth across the stack. Up to eight switches can be stacked, and they can be any model (mix and match) within the same product family. A failure in any unit of the stack will not adversely affect the operation of the remaining units in the stack.

Replacement of a failed switch is easy with the Auto-Unit Replacement (AUR) feature, which allows a new switch to be put into the stack and automatically receive the correct software image and configuration without user intervention. The replacement switch must be the exact model of the failed switch, and the software image must be correct for AUR to work properly. Refer to the product documentation for more information on AUR.

Note: ERS 5600 series switches support 144Gbps of stacking bandwidth per switch, and up to 1.1Tbps per stack.

Figure 4.4 ERS 5600 Stacking

4.3 Troubleshooting and Monitoring

Understanding what is happening during the normal course of operations and knowing what to look for during abnormal times can help to maintain connectivity or restore operations quickly. This section highlights a few critical and often used troubleshooting tools. For details on all the options available, refer to the Troubleshooting documentation for each Avaya product.

4.3.1 Packet Capture (PCAP)

The ERS 8800/8600 supports a Packet Capture (PCAP) tool that captures ingress and egress packets on selected I/O ports. With this feature, you can capture, save, and download one or more traffic flows through the switch. The captured packets can then be analyzed offline for troubleshooting purposes. This feature is based on the mirroring capabilities of the I/O ports. To use PCAP, you must have the Advanced Software License. All captured packets are stored on the Secondary CPU, which is used as the PCAP engine. The Master CPU maintains its protocol handling and is not affected by any capture activity.

4.3.2 Port Mirroring

The VSP 7000 and the ERS 5000 series offer a port mirroring feature that helps you monitor and analyze network traffic. Port mirroring supports both ingress (incoming traffic) and egress (outgoing traffic) mirroring. When you enable port mirroring, ingress or egress packets are forwarded normally from the mirrored (source) port, and a copy of the packets is sent to the mirroring (destination) port. Port mirroring capabilities and scalability vary between platforms.
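For instance, mirroring both directions of a server port to an analyzer port might look like this (ERS 5000 ACLI paraphrased from memory; the mode keyword and port numbers are hypothetical and differ by platform, so check the documentation for the exact forms):

! Assumed ACLI form: copy both directions of port 1/5 to an analyzer on port 1/24
port-mirroring mode XrxOrXtx monitor-port 1/24 mirror-port-X 1/5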
4.3.3 Remote Logging

All ERS platforms support a remote logging feature. This provides an enhanced level of logging by replicating system messages on a syslog server. System log messages from several switches can be collected at a central location, alleviating the network manager from querying each switch individually to interrogate the log files. It also ensures that information is not lost when a switch becomes inoperable. The level of logging and the details provided differ between ERS platforms; refer to the System Monitoring or Troubleshooting documentation for each product to obtain more details. A logging sketch follows at the end of this section.

4.3.4 Stackables Tools

The ERS stackables have built-in tools that offer information on the health and well-being of the stack:

Stack Health Check: The stack health check feature provides a view into the overall operation of the stack, with information on the cascade connections, whether the stack is resilient (return cable connected) or non-resilient, the number of units in the stack, and the model of each unit along with its unit number. This feature shows you a quick snapshot of the stack configuration and operation.

Stack Monitor: The stack monitor feature analyzes the health of a stack by monitoring the number of active units in the stack. With the stack monitor feature, when a stack is broken, the stack and any disconnected units from the stack send SNMP traps. If the stack or the disconnected units are still connected to the network, they generate log events and send trap messages to notify the administrator of the event. After the problem is detected, the stack and disconnected units continue to generate log events and send traps at a user-configurable interval until the situation is remedied (or the feature is disabled).

Stack Loopback Test: The stack loopback test feature allows the customer to quickly test the switch stack ports and the stack cables on the ERS units. While you are experiencing stack problems, this feature helps you determine whether the root cause is a bad stack cable or a damaged stack port, and prevents potentially good switches from being returned for service. You can achieve this by using two types of loopback tests: internal and external.

Stack Port Counters: The stack port counters show statistics of the traffic traversing the stacking connectors, including the size of packets, FCS errors, filtered frames, and so on.

Environmental Information: This feature displays environmental information about the operation of the switch or units within a stack. It reports the power supply status, fan status and switch system temperature.

CPU and Memory Utilization: The CPU and memory utilization feature provides data for the past 10 seconds, 1 minute, 1 hour, 24 hours, or since system boot-up. You can use CPU utilization information to see how the CPU is used during a specific time interval. Memory utilization provides information on what percentage of the dynamic memory is currently used by the system. The switch displays memory utilization in terms of megabytes available since system boot-up.
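A remote logging sketch on an ERS stackable might look like this (ACLI paraphrased from memory; the server address is hypothetical):

! Assumed ACLI form: replicate system messages to a central syslog server
logging remote address 10.10.10.50
logging remote level informational
logging remote enable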
4.4 Avaya Aura

Each data center has Avaya Aura servers, services and media gateways that connect to the ToR switching solutions in the data center. The Avaya Aura communications system includes Communication Manager, Session Manager for H.323 / SIP processing, and System Manager. Additional servers and services, such as messaging, conferencing and call center applications, can be added in the future as required. In addition, there are media gateways for analog / digital phone communications and PSTN access.

To test and validate core Avaya Aura operation and geo-redundancy failover scenarios on an Avaya data network, the following Avaya Aura services are deployed in data center 1 and data center 2 using S8800 or common server hardware:

Data Center 1:
- Communication Manager (Duplex)
- Session Manager
- System Manager
- G450 Media Gateway

Data Center 2:
- Communication Manager (ESS)
- Session Manager
- G450 Media Gateway

Notes: S8800 or Common Servers (or existing hardware)

Table 4.2 Common Data Center Layer Hardware
4.5 VMware Servers

Avaya VENA uses products from its industry-leading partners: VMware for virtualization, Coraid and Dell for the converged Ethernet storage area network (SAN), and Communication Resources, Inc. (CRI) for virtualized servers. Each data center in our testing includes eight Dell R610 servers that connect to the ToR and SAN switching solutions. In the Avaya lab, the Dell R610 servers were used to test and validate the following:
- Avaya Aura applications virtualized by CRI under VMware
- vMotion operation over switch clustering and SPB
- VMware Fault Tolerance operation over switch clustering and SPB

For a fully geo-redundant deployment, CRI requires eight Dell R610 servers in each data center (16 total) as well as a SAN that is extended between the data centers. The Dell R610 servers host Avaya Aura applications, verify application availability, and validate the migration of Avaya Aura virtual servers and applications between physical hosts in each data center.

Hardware (Data Center 1 and Data Center 2, each):
- Eight Dell R610 servers with 48GB RAM (6 x 8GB), two Intel Xeon X-series processors, and 4 x Gigabit Ethernet NICs

Notes: CRI testing and validation

Table 4.3 CRI Virtualized Server Hardware
4.6 Storage Area Network

The CRI VMware ESXi servers use two SAN solutions extended over a Layer 2 VSN to provide file system sharing for vMotion. The Avaya Networking Test Lab includes SAN arrays from Coraid and Dell. You can deploy either one SAN array shared between the CRI servers or one dedicated SAN array for each data center.

Hardware:
- Data Center 1: Two ERS 4526GTX or ERS 5650TD series switches, two 10GBASE-SR XFP transceivers, and one Dell EqualLogic 6000 series storage appliance and drives (iSCSI)
- Data Center 2: Two ERS 4526GTX or ERS 5650TD series switches, two 10GBASE-SR XFP transceivers, and one Coraid EtherDrive SRX series storage appliance and drives (AoE)

Note: Both the Dell EqualLogic and the Coraid SAN arrays were tested and passed in the Avaya Networking Test Lab.

Table 4.4 SAN Hardware

The SAN array and VMware servers connect to a stack of two dedicated SAN switches in each data center. Each Dell R610 uses a dedicated Gigabit Ethernet connection to a port on its respective SAN switch.

Figure 4.5 Common Data Center SAN Switching Topology
4.7 Avaya Unified Communications Management

Avaya Unified Communications Management (UCM) is a centralized and integrated set of management tools and applications. The UCM network management applications provide the following services:
- Real-time configuration
- Discovery
- Provisioning
- Monitoring and troubleshooting of the Avaya data infrastructure
- Provisioning, monitoring and management of the CRI virtualized servers

The Avaya UCM network management services can be deployed on standalone servers or can optionally be virtualized. The VMware vCenter application must be deployed on a standalone device.

Services:
- Configuration and Orchestration Manager (COM) 2.3
- Virtualization Provisioning Service (VPS) 1.0
- VMware vCenter

Note: Avaya management applications can be deployed on physical servers or can optionally be virtualized.

Table 4.5 Unified Communications Management

4.7.1 Avaya Configuration and Orchestration Manager (COM)

Avaya Configuration and Orchestration Manager (COM) is a UCM management system that manages multiple network devices. Avaya COM provides management and configuration services for the different elements in the Avaya Enterprise family of devices. Avaya COM has the following features:
- Web-based, platform-independent application with Internet Explorer and Firefox browser support.
- Supports saving the error log, preferences, and communities.
- Built on dynamic HTML (DHTML), a combination of HTML, JavaScript, and Cascading Style Sheets (CSS). To use DHTML, JavaScript and CSS must be enabled in the browser.
- Supports wizards and templates to simplify complex multi-device configuration management.
- Supports device configuration management.
- Supported on Windows and Linux platforms.
- Provides a consistent graphical user interface (GUI) across COM and its sub-managers, and provides a single point of access to the sub-managers.
- Provides access control and security using community strings, SNMPv3 USM, and SSH.
Avaya COM has an intuitive interface that helps you configure, manage, and provision multiple devices such as Avaya Ethernet Routing Switches and WLAN devices. The following figure shows how a sample network topology appears in COM.

Figure 4.6 Avaya COM
The following figure shows how the Avaya COM Device Inventory Manager displays information on all the devices in the network.

Figure 4.7 Avaya COM Device Inventory Manager
COM has several other managers that help you manage various features in the network. For example, the following figure shows how the Avaya COM VLAN Manager displays information on all the VLANs in the network.

Figure 4.8 Avaya COM VLAN Manager
4.7.2 Avaya Virtualization Provisioning Service (VPS)

Avaya Virtualization Provisioning Service (VPS) is a plug-in component to Avaya COM. Avaya VPS uses Avaya COM for network device inventory, topology, and configuration.

Network management tools enable network operators to view the network topology; however, that view does not include virtualized servers. Avaya VPS solves this problem by providing an end-to-end view of the virtualized data center, from servers, to VMs, to networking devices. This view streamlines the troubleshooting process and ensures that server and network operations teams work together more effectively. Without this view of all devices in the network, troubleshooting application performance and network connectivity issues can be a lengthy, inefficient process.

Figure 4.9 Avaya COM VPS Manager

Avaya VPS also audits and tracks VMs throughout their lifecycle to provide relevant reporting information. Because of this ability to track VMs, VPS can automate network device provisioning by following VMs as they migrate through the network. As VMs move from one server to the next, the appropriate port profiles (VLAN, QoS, ACLs) are added to and deleted from the edge devices that are connected to the physical servers, helping ensure consistent application performance as VMs migrate between servers. In addition to saving time, this mechanism also eliminates human error.

Avaya VPS also provides a relay mechanism to VMware vCenter, which is the management system that CRI uses to oversee the virtualized UC environment. Avaya VPS transports information between vCenter and Avaya COM to manage and view both the virtual server and network environments.
4.8 Network Access Control

Avaya's Identity Engines is the framework for role-based network access control. Within this framework there are several options to best accommodate the needs of the Enterprise customer, from simple MAC authentication to full 802.1X authentication and posture assessment (end station compliance with corporate security policies) of the end user's workstation. With all the methods available, the end result is to ensure users are allowed on the network and permitted access to resources based on identity and credentials. This section describes the backend infrastructure required (Identity Engines) along with the options available for end user authentication.

4.8.1 Identity Engines

The Avaya Identity Engines portfolio integrates with any current network infrastructure to provide the central policy decision needed to enforce role-based Network Access Control (NAC). This is accomplished by combining the best elements of a next-generation RADIUS/AAA server, the deep directory integration found in application identity offerings, and one of the industry's most advanced standards-based policy engines. All this is done out-of-band for maximum scalability and cost effectiveness.

The centralized policy engine sits in the data center to provide centralized authentication and authorization for wired, wireless, and VPN network devices. It is closely aligned with Avaya and third-party Ethernet switching, WLAN and VPN products as it provides centralized, integrated security services for these network devices. Coupled with the centralized policy engine is a suite of complementary products that enable 802.1X rollouts for wired and wireless networks, while unifying those policies with existing VPN rules to achieve audit and compliance goals. These products offer a holistic network identity management solution covering all aspects of managing how users access networks. Benefits include admission control, temporary user provisioning, policy decisions and directory integration.

Identity Engines Ignition Server: A state-of-the-art network identity management solution with a powerful policy engine to centralize, streamline and secure access across the network. The Identity Engines Ignition Server offers a new level of accuracy, with identity- and policy-based control over who accesses the network, where, when, how, and with what type of device. Easy to deploy and use, it is a powerful, scalable foundation for network access control, guest access, secure wireless, compliance, and more.

Identity Engines Ignition Posture: Identity Engines Ignition Posture provides endpoint health and posture checking that works in the real world. Most posture checking products today are inflexible add-on layers that are expensive to support and frustrating for network users. In contrast, this product provides policy flexibility and integration with the Identity Engines Ignition Server to ensure that it is easier to support and less frustrating for users.

Identity Engines Ignition Guest Manager: Because guests and visitors often have legitimate reasons to access networks, Identity Engines Ignition Guest Manager makes it easy and safe for organizations to let front-desk staff create guest user accounts. Simple delegation rules ensure front-desk personnel can give guests access to only specified network resources, and each guest account expires automatically after a designated period.
Identity Engines Ignition Analytics: Identity Engines Ignition Analytics is a powerful reporting application that allows organizations to perform in-depth analysis of network activity, including ingress and usage. With over 25 preconfigured audit, compliance and usage reports, organizations can easily produce multiple custom reports to fulfill their specific reporting requirements.

Figure 4.10 Identity Engines Portfolio Architecture

4.9 Network Operations

A Windows or Linux server supports devices connected to the access layer by providing AAA, DHCP, DNS, TFTP, and HTTP/HTTPS services. DHCP is required by all devices connected to the access layer. TFTP and HTTP/HTTPS are used by Avaya IP Phones for firmware upgrades and device configuration. DNS is an optional service that can be leveraged by all devices. These services can be deployed on a standalone server or can optionally be virtualized.

Services:
- AAA (Ignition Server)
- Directory / DHCP / DNS (Windows Server)
- HTTP / HTTPS / TFTP (Linux or Windows)

Note: Services can be deployed on a physical server or can optionally be virtualized.

Table 4.6 Network Operations Services
4.10 Data Center Layer Configuration Details

The following sections provide configuration information for the VENA Data Center solution data center layer. For availability and failover, two data centers will be simulated, where both data centers use ERS 5600 top of rack (ToR) switching. The common data center access layer topology and configuration are the same regardless of the core and access layer configuration.

Virtual LANs and IP Subnets

Each data center will be allocated a pool of VLANs to simulate a typical customer environment. Each ERS 5000 horizontal stack cluster simulates a top of rack (ToR) data center switching solution and will be assigned a Management VLAN, an Avaya Aura Services VLAN, a Guest VLAN, four Application VLANs, two VMware VLANs and a Communication Manager Fault Tolerant VLAN. Each data center will be allocated a pool of 10 contiguous VLANs and IP subnets to permit additional VLANs to be added in the future. The following tables provide an example of the VLAN IDs and IP subnet schemes which can be deployed:

Data Center 1

VLAN ID | VLAN Name | Subnet | Description
2 | IST | /30 | IST VLAN
10 | Management | /24 | Common Management VLAN
100 | Aura1 | /24 | Avaya Aura 1 VLAN
110 | vMotion | /24 | Common vMotion VLAN
111 | VMFT | /24 | Common VMware Fault Tolerance VLAN
112 | App1 | /24 | Common Application VLAN 1
113 | App2 | /24 | Common Application VLAN 2
114 | App3 | /24 | Common Application VLAN 3
115 | App4 | /24 | Common Application VLAN 4
116 | CMFT | /24 | Common CM Fault Tolerance VLAN
117 | Guest | /24 | Common Guest VLAN
120 | SAN | /24 | Common SAN VLAN

Table 4.7 Data Center 1 Virtual LANs and Subnets
Data Center 2

VLAN ID | VLAN Name | Subnet | Description
2 | IST | /30 | IST VLAN
10 | Management | /24 | Common Management VLAN
101 | Aura2 | /24 | Avaya Aura 2 VLAN
110 | vMotion | /24 | Common vMotion VLAN
111 | VMFT | /24 | Common VMware Fault Tolerance VLAN
112 | App1 | /24 | Common Application VLAN 1
113 | App2 | /24 | Common Application VLAN 2
114 | App3 | /24 | Common Application VLAN 3
115 | App4 | /24 | Common Application VLAN 4
116 | CMFT | /24 | Common CM Fault Tolerance VLAN
117 | Guest | /24 | Common Guest VLAN
120 | SAN | /24 | Common SAN VLAN

Table 4.8 Data Center 2 Virtual LANs and Subnets
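To ground the tables above, the sketch below shows how a few of these VLANs might be created on an ERS 5600 ToR stack. The port assignments are placeholders and only a subset of the VLANs is shown; the remaining VLANs would be created the same way.

    ! Create a subset of the Data Center 1 VLANs from Table 4.7
    vlan create 10 name Mgmt type port
    vlan create 100 name Aura1 type port
    vlan create 110 name vMotion type port
    vlan create 111 name VMFT type port
    vlan create 112 name App1 type port
    vlan create 120 name SAN type port
    ! Tag the ESXi-facing ports (example range) and add them to the VMware VLANs
    vlan ports 1-4 tagging tagAll
    vlan members add 110 1-4
    vlan members add 111 1-4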
Figure 4.11 Data Center Layer Topology
Figure 4.12 SAN Virtual LANs and Subnets

Figure 4.13 Data Center Layer Avaya Aura VLAN Details

Figure 4.14 Data Center VMware VLAN Details
Figure 4.15 Dell R610 Server Connectivity Details
Connection Details

The following diagram provides a physical view of how the data center ToR switch connects to the data center distribution switches:

Figure 4.16 Data Center Connection Details
Figure 4.17 Data Center 1 Server / Appliance Connection Details
Figure 4.18 Data Center 2 Server / Appliance Connection Details
Figure 4.19 Data Center 1 Rack Layout
Figure 4.20 Data Center 2 Rack Layout
ToR Configuration Notes

Enable the following features on the ToR switches in the data center access layer. Unless otherwise stated, each feature is implemented following Avaya's current best practices and recommendations.

802.1Q Tagging: Enable on all IST and SMLT ports.

Auto Detect / Auto Config (ADAC): Enable on all ports (except MLT). Globally configure ADAC to tag the respective Converged VLAN and untag the respective User VLAN. ADAC leverages LLDP for IP Phone detection and LLDP-MED policies for configuration. LLDP policies must supply Location, Call Server and File Server.

Link Aggregation: Configure dual connections from a select number of ESXi servers in Data Center 1 to switches in each stack using each supported link aggregation method. Configure dual connections from the Avaya Aura Servers and Media Gateways to switches in each stack using a supported link aggregation method.

Management: Assign a switch IP address on its respective management VLAN to each ERS 5000 stackable switch. Assign a stack IP address on the respective management VLAN to each ERS 5000 horizontal stack of switches. Enable SSHv2, SNMPv3 and HTTPS secure management services.

QoS: Configure all IST, SMLT, Avaya Aura Server, Media Gateway and VMware server ports as trusted. Configure all remaining ports as untrusted.

SLPP: Enable on all VLANs and ports (except IST). Enable SLPP Guard for each SLPP-enabled VLAN.

Spanning Tree Protocol / BPDU Filtering: Enable on all edge ports (except IST and SMLT).

VLACP: Enable on all IST and SMLT ports.

Table 4.9 ToR Switch Configuration in the Data Center
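As an illustration of the loop prevention rows above, the following sketch shows SLPP, SLPP Guard, BPDU filtering, and VLACP being enabled on an ERS 5600 ToR switch. The VLAN IDs and port ranges are placeholders, and the exact ACLI forms vary between releases, so treat this as indicative rather than literal.

    ! Loop prevention: enable SLPP globally and register the VLANs to be checked
    slpp enable
    slpp vid 112
    slpp vid 117
    ! On edge ports, arm SLPP Guard and shut the port if a BPDU is received
    interface fastEthernet 13-24
    slpp-guard enable
    spanning-tree bpdu-filtering enable
    exit
    ! Run VLACP globally and on the uplinks for end-to-end link health
    vlacp enable
    interface fastEthernet 1-2
    vlacp enable
    exit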
SAN Configuration Notes

Enable the following features on the SAN switches in the data center access layer. Unless otherwise stated, each feature is implemented following Avaya's current best practices and recommendations.

802.1Q Tagging: Enable on all MLT ports.

Management: Assign a switch IP address on its respective management VLAN to each ERS 5000 stackable switch. Enable SSHv2, SNMPv3 and HTTPS secure management services.

QoS: Configure all DMLT and server ports as trusted.

SLPP: Enable on all VLANs and ports. Enable SLPP Guard for each SLPP-enabled VLAN.

Spanning Tree Protocol / BPDU Filtering: Enable on all edge ports.

VLACP: Enable on all MLT ports.

Table 4.10 SAN Switch Configuration in the Data Center
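For the MLT-related rows above, a SAN switch uplink bundle might look like the following sketch. The MLT ID, name, member ports, and VLAN membership are examples only, and the exact syntax varies by release.

    ! Build the tagged uplink MLT from the SAN switch toward the data center BEBs
    mlt 1 name SAN-Uplink
    mlt 1 member 49-50
    vlan ports 49-50 tagging tagAll
    vlan members add 120 49-50
    mlt 1 enable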
5. Core Layer

The core layer uses the IEEE 802.1aq standard based Shortest Path Bridging (SPB). The SPB core layer consists of the following:

- Two Avaya Virtual Services Platform 9000 (VSP 9000) or two Avaya Ethernet Routing Switch 8800 (ERS 8800) series switches operating as backbone core bridges (BCBs).
- Three clusters of Avaya Ethernet Routing Switches operating as backbone edge bridges (BEBs) that interconnect the data center and access layers.

Note: I-SID configuration is required only for virtual services such as L2 VSN and L3 VSN. With IP Shortcuts, no I-SID is required as forwarding is done using the Global Routing Table (GRT).

The ERS 8800 core layer includes:

Hardware:
- Eight 6-slot or 10-slot Chassis
- Sixteen 8005AC Power Supplies
- Sixteen 8895SF or 8692SF with Mezzanine Switch Fabrics
- Eight 8612XLRS I/O Modules
- Two 8634XGRS I/O Modules
- Two 1000BASE-SX SFP Transceivers
- Forty-eight 10GBASE-SR XFP Transceivers

Notes: 2 x 10GbE uplinks from each BCB to BCB

Table 5.1 ERS 8800 SPB Core Layer

The VSP 9000 core layer includes:

Hardware:
- Two 12-slot Chassis
- Twelve 9090SF Switch Fabrics
- Twelve 9006AC Power Supplies
- Four 9080CP Modules
- Four 9024XL Modules
- Four 9048GB Modules
- Four 9048GT Modules
- Twenty 10GBASE-SR/SW SFP+ Transceivers

Notes: 2 x 10GbE uplinks from each BCB to BCB

Table 5.2 VSP 9000 SPB Core Layer
The Avaya Networking Test Lab used the SPB core to validate various common SPB deployment scenarios, including a simplified data center that uses IP Shortcuts, a virtualization-ready data center that uses L3 VSNs, and a virtualized data center that uses L3 VSNs interconnected using a firewall. Default gateway redundancy will be provided using Avaya's VRRP with BackupMaster extensions, except for the last scenario, where standard VRRP will be enabled between the data center firewalls.

Figure 5.1 SPB Core Layer
5.1 SPB Topology

The SPB topology uses the common access and data center layers, which are inter-connected using three clusters of Avaya ERS 8800 series switches operating as backbone edge bridges (BEBs). Each BEB cluster connects to a pair of Avaya ERS 8800 or VSP 9000 series switches operating as backbone core bridges (BCBs).

The following sections describe the main features of the ERS 8800 and the VSP 9000 to help you decide which platform to use in the core. Note that you can configure Lossless Ethernet on the VSP 9000 only; it is not currently supported on the ERS 8800.

Figure 5.2 SPB Topology
5.1.1 Avaya Ethernet Routing Switch 8800

The ERS 8800 systems are typically deployed in Switch Clusters to deliver true end-to-end reliability and always-on application access. Available in a wide range of models, these systems are specifically designed to address the critical enterprise requirements of reliability, efficiency, and scalability. The ERS 8800 is also a key component of the Avaya Virtual Enterprise Network Architecture, supporting full-featured network virtualization capabilities for campus cores and data center applications.

As a Layer 2/3 routing switch, the ERS 8800 provides flexibility in many network designs, as it can be used as a closet switch, aggregation switch, or core switch. The ERS 8800 supports Switch Clustering by using SMLT for active/active uplink connectivity without using any form of spanning tree. However, the ERS 8800 also supports the IEEE 802.1w Rapid Spanning Tree Protocol (RSTP) for those environments where spanning tree is desired.

Figure 5.3 Avaya ERS 8800

5.1.2 Avaya Virtual Services Platform 9000

The VSP 9000 is a new Ethernet Switching platform for Enterprise Campus environments and Enterprise Data Centers. This platform offers an unmatched switching architecture that scales from an initial 8.4 Terabits per second to an industry-leading 27 Terabits per second. The VSP 9000 delivers substantial performance and scalability, with immediate support for very high-density 1 and 10 GbE, in addition to being future-ready for the emerging 40 and 100 GbE standards. The fully scalable architecture helps ensure that network capacity seamlessly scales in line with performance requirements, without complex or expensive re-engineering.

The Avaya VSP 9000 architecture is ultra-reliable and has the following features that help ensure uninterrupted business operations with no single point of failure:

- Fully redundant hardware, including the control processor and switch fabric modules, so that Switch Clustering delivers deterministic millisecond failover resiliency for instantaneous recovery from any individual failure, or during maintenance, without impacting user applications
- Layer 2 and Layer 3 network virtualization services providing support for multiple customers and user groups on the same platform
- Network failover in less than 20 milliseconds with instantaneous re-route across all ports to minimize packet loss
- In-service control plane integrity check and rapid failure detection and recovery of the data path for system-level health check and self-healing capabilities
- Hitless patching, eliminating the requirement to reload the complete system image and thereby minimizing maintenance down time
- Flight Recorder style logging capability to help with continuous real-time monitoring of internal control message flows
- Key Health Indicators to provide system operators with a view of system health on all levels: OS, system applications/protocols, I/O modules, ports and the forwarding path
- Ability to remotely update flash images
- Avaya Virtual Services support using IEEE 802.1aq Shortest Path Bridging, which de-couples the physical infrastructure from logical provisioning and ensures predictability for all network services

Figure 5.4 Avaya VSP 9000 Ethernet Switch

5.1.3 Lossless Ethernet

Lossless Ethernet is a VSP 9000 feature that guarantees that the switch does not drop certain traffic types from configured 10 GbE ports. The VSP 7000 also supports Lossless environments. You can configure all the unicast traffic on the port to be Lossless, or you can configure only the unicast traffic with a specific Lossless 802.1p value to be Lossless. Multicast traffic is not supported on 10GbE ports configured as Lossless. For other limitations and configuration guidelines, see Avaya Virtual Services Platform 9000 Network Design (NN ).

Note: Lossless Ethernet requires a Premier license.
5.2 Logical Topologies

The Avaya Networking Test Lab tested three different SPB core topologies, as described in this section. In all three topologies, VRRP with BackupMaster extensions provides default gateway redundancy for each routed VLAN in the data center, where one IP interface is master and the three remaining are backup interfaces. Each VRRP interface is prioritized differently (highest preferred), and master interfaces for each data center VLAN are evenly distributed between each of the four ERS 8800 nodes. VRRP with BackupMaster extensions also provides default gateway redundancy for access layer VLANs.

5.3 SPB Services

For testing and validation, the core is configured with the following SPB implementation options:

- IP Shortcuts
- Layer 2 Virtual Services Networks (L2 VSNs)
- Layer 3 Virtual Services Networks (L3 VSNs)

IP Shortcuts forward standard IP packets over IS-IS in the SPB core. Unlike with L2 VSNs, no I-SID configuration is required with SPB IP Shortcuts. Instead, SPB nodes propagate Layer 3 reachability as leaf information in the IS-IS LSPs using Extended IP reachability TLVs (TLV 135), which contain routing information such as neighbors and locally configured subnets. SPB nodes receiving the reachability information can use it to populate the routes to the announcing nodes. All TLVs announced in the IS-IS LSPs are grafted onto the shortest path tree (SPT) as leaf nodes. With IP Shortcuts, there is only one IP routing hop, as the SPB backbone acts as a virtualized switching backplane.

L2 VSNs bridge customer VLANs (C-VLANs) over the SPB core infrastructure. At the BEBs, C-VLANs are mapped to I-SIDs based on the local service provisioning. Outgoing frames are encapsulated in a MAC-in-MAC header, and then forwarded across the core to the far-end BEB, which strips off the encapsulation and forwards the frame to the destination network based on the I-SID-to-C-VLAN provisioning. In the backbone VLAN (B-VLAN), Backbone Core Bridges (BCBs) forward the encapsulated traffic based on the BMAC-DA, using the shortest path topology learned using IS-IS.

L3 VSNs provide IP connectivity over SPB for VRFs, using IS-IS to exchange the routing information for each VRF. With an SPB L3 VSN, packet forwarding works in a similar fashion to IP Shortcuts on the Network Routing Engine (NRE), with the difference that the encapsulation includes the I-SID to identify the VRF that the packet belongs to.

5.3.1 SPB Logical Topology 1

Topology 1 simulates a simple SPB data center design with no virtual services networks. In this topology, the Application, Avaya Aura, User, and Converged VLANs are assigned to IP interfaces on their respective BEBs, which inject routes into the Global Routing Table. All of these routed VLANs are then forwarded over the core using IP Shortcuts. The VLANs are assigned as follows:

- Application, VMware, and SAN VLANs in the data center are assigned to unique L2 VSNs.
- Management and Guest VLANs are assigned to unique L2 VSNs.
- Application, Avaya Aura, User, and Converged VLANs are routed by propagating Layer 3 reachability into IS-IS LSPs using Extended IP reachability TLVs (TLV 135).

From a logical perspective, traffic is forwarded between the VLANs as in a traditional routed data center design.
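To make the L2 VSN service type described above concrete, the following one-line binding is essentially all that distinguishes a bridged C-VLAN from an L2 VSN on a BEB. The I-SID value is an example; the same I-SID must be provisioned on every BEB that terminates the service, and the command form varies slightly by platform and release.

    ! Bind existing C-VLAN 112 (App1) to an example I-SID, forming an L2 VSN
    vlan i-sid 112 10112
    ! Repeat with the same I-SID on the far-end BEB; BCBs need no per-service state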
Figure 5.5 Non-Virtualized Data Center

VLAN ID | DC1 BEB 1 | DC1 BEB 2 | DC2 BEB 1 | DC2 BEB 2
100 (Aura 1) | 250 (Master) | 200 (Backup) | N/A | N/A
101 (Aura 2) | N/A | N/A | 250 (Master) | 200 (Backup)
112 (App1) | 250 (Master) | | |
113 (App2) | | 250 (Master) | |
114 (App3) | | | 250 (Master) |
115 (App4) | | | | 250 (Master)
116 (App5) | 250 (Master) | | |

Table 5.3 Data Center Layer VRRP Priorities
VLAN ID | Access BEB Switch 1 | Access BEB Switch 2
200 (Converged 1) | 250 (Master) | 200 (Backup)
201 (User 1) | 200 (Backup) | 250 (Master)
210 (Converged 2) | 250 (Master) | 200 (Backup)
211 (User 2) | 200 (Backup) | 250 (Master)
220 (Converged 3) | 250 (Master) | 200 (Backup)
221 (User 3) | 200 (Backup) | 250 (Master)
230 (Converged 4) | 250 (Master) | 200 (Backup)
231 (User 4) | 200 (Backup) | 250 (Master)

Table 5.4 Access Layer VRRP Priorities

5.3.2 SPB Logical Topology 2

Topology 2 simulates a simple SPB data center design with future virtualization needs. In this topology, the Application, Avaya Aura, User, and Converged VLANs are assigned to virtual router forwarders (VRFs) on their respective BEBs, which are mapped to a single L3 VSN. All of these routed VLANs are then forwarded over the core using an L3 VSN. The VLANs are assigned as follows:

- Application, VMware and SAN VLANs in the data center are assigned to unique L2 VSNs.
- Management and Guest VLANs are assigned to unique L2 VSNs.
- VRFs provide IP inter-connectivity for Application, Avaya Aura, User, and Converged VLANs assigned to a common VSN.
- VRFs assigned to the same VSN use IS-IS to exchange routing information about the subnets connected to each VRF. VRFs can only forward traffic between VRFs assigned to the same VSN.
- Virtualization of additional services is provided by creating additional VSNs and assigning new VRFs.
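The priority scheme in Tables 5.3 and 5.4 might be realized along the following lines for one VLAN on one cluster switch. The IP addresses and VRID are invented for illustration (the source tables omit the subnets), and the ACLI forms vary by platform and release.

    ! VLAN 200 (Converged 1) on Access BEB Switch 1: VRRP master, priority 250
    interface vlan 200
    ip address 10.20.0.1 255.255.255.0
    ip vrrp address 20 10.20.0.254
    ip vrrp 20 priority 250
    ip vrrp 20 backup-master enable
    ip vrrp 20 enable
    exit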
Figure 5.6 Virtualization Ready Data Center
VLAN ID | DC1 BEB 1 | DC1 BEB 2 | DC2 BEB 1 | DC2 BEB 2
100 (Aura 1) | 250 (Master) | 200 (Backup) | N/A | N/A
101 (Aura 2) | N/A | N/A | 250 (Master) | 200 (Backup)
112 (App1) | 250 (Master) | | |
113 (App2) | | 250 (Master) | |
114 (App3) | | | 250 (Master) |
115 (App4) | | | | 250 (Master)
116 (App5) | 250 (Master) | | |

Table 5.5 Data Center Layer VRRP Priorities

VLAN ID | Access BEB Switch 1 | Access BEB Switch 2
200 (Converged 1) | 250 (Master) | 200 (Backup)
201 (User 1) | 200 (Backup) | 250 (Master)
210 (Converged 2) | 250 (Master) | 200 (Backup)
211 (User 2) | 200 (Backup) | 250 (Master)
220 (Converged 3) | 250 (Master) | 200 (Backup)
221 (User 3) | 200 (Backup) | 250 (Master)
230 (Converged 4) | 250 (Master) | 200 (Backup)
231 (User 4) | 200 (Backup) | 250 (Master)

Table 5.6 Access Layer VRRP Priorities

5.3.3 SPB Logical Topology 3

Topology 3 simulates an SPB data center design with virtualization. In this topology, the Application and User VLANs are assigned to VRFs on their respective BEBs and mapped to one L3 VSN, while the Avaya Aura and Converged VLANs are assigned to VRFs mapped to a second L3 VSN. Communication between the VSNs for IP Softphones is provided using firewalls. The VLANs are assigned as follows:

- Application, VMware and SAN VLANs in the data center are assigned to unique L2 VSNs.
- Avaya Aura and Converged VLANs are assigned to VRFs on their respective BEBs, which are mapped to one L3 VSN.
- Application and User VLANs are assigned to VRFs on their respective BEBs, which are mapped to a second L3 VSN.
- Management and Guest VLANs are assigned to unique L2 VSNs.
- VRFs assigned to one VSN provide IP connectivity for Avaya Aura and Converged VLANs, while VRFs assigned to the second VSN provide IP connectivity for Application and User VLANs.
- A firewall connected to each data center BEB provides communications between the User and Avaya Aura VLANs.

Figure 5.7 Virtualized Data Center
VRF | VLAN ID | DC1 BEB 1 | DC1 BEB 2 | DC2 BEB 1 | DC2 BEB 2
Green | 100 (Aura 1) | 250 (Master) | 200 (Backup) | N/A | N/A
Green | 101 (Aura 2) | N/A | N/A | 250 (Master) | 200 (Backup)
Purple | 112 (App1) | 250 (Master) | | |
Purple | 113 (App2) | | 250 (Master) | |
Purple | 114 (App3) | | | 250 (Master) | 200
Purple | 115 (App4) | | | | 250 (Master)
Purple | 116 (App5) | 250 (Master) | | |

Table 5.7 Data Center Layer VRRP Priorities

VRF | VLAN ID | Access BEB Switch 1 | Access BEB Switch 2
Green | 200 (Converged 1) | 250 (Master) | 200 (Backup)
Green | 210 (Converged 2) | 200 (Backup) | 250 (Master)
Green | 220 (Converged 3) | 250 (Master) | 200 (Backup)
Green | 230 (Converged 4) | 200 (Backup) | 250 (Master)
Purple | 201 (User 1) | 250 (Master) | 200 (Backup)
Purple | 211 (User 2) | 200 (Backup) | 250 (Master)
Purple | 221 (User 3) | 250 (Master) | 200 (Backup)
Purple | 231 (User 4) | 200 (Backup) | 250 (Master)

Table 5.8 Access Layer VRRP Priorities
5.4 Core Layer Configuration Details

The following sections provide configuration information for the VENA Data Center solution core layer. The core layer will be used to verify data center operations over an IEEE 802.1aq Shortest Path Bridging (SPB) core. One series of tests will have an SPB core using the Avaya VSP 9000 running Release 3.3 software. Another series of tests will have an SPB core using the ERS 8800 series switches running Release 7.1.3. The IEEE ratified the 802.1aq standard that defines SPB and the Type-Length-Value (TLV) encoding that IS-IS uses to support SPB services. With Release 7.1.3, Avaya is in full compliance with the IEEE 802.1aq standard.

5.4.1 Virtual LANs and IP Subnets

The SPB core terminates the data center and access layer VLANs on the BEB switch cluster switches. Depending on the SPB topology being evaluated, the VLANs are either terminated and routed by the BEB switches or transported through the core using MAC-in-MAC encapsulation. To create the SPB core, two backbone VLANs (B-VLANs) will be created. The following tables provide an example of the VLAN IDs and IP subnet schemes which can be deployed:

BCB

VLAN ID | VLAN Name | Subnet | Description
5 | B-VLAN 1 | /24 | SPB B-VLAN 1
6 | B-VLAN 2 | /24 | SPB B-VLAN 2

Table 5.9 SPB BCB Virtual LANs and Subnets

BEB (Data Center 1)

VLAN ID | VLAN Name | Subnet | Description
2 | IST | /30 | IST VLAN
5 | B-VLAN 1 | /24 | SPB B-VLAN 1
6 | B-VLAN 2 | /24 | SPB B-VLAN 2
10 | Management | /24 | Common Management VLAN
100 | Aura1 | /24 | Avaya Aura VLAN 1
110 | vMotion | /24 | Common vMotion VLAN
111 | VMFT | /24 | Common VM Fault Tolerance VLAN
112 | App1 | /24 | Common Application 1 VLAN
113 | App2 | /24 | Common Application 2 VLAN
114 | App3 | /24 | Common Application 3 VLAN
115 | App4 | /24 | Common Application 4 VLAN
116 | CMFT | /24 | Common CM Fault Tolerance VLAN
117 | Guest | /24 | Common Guest VLAN
120 | SAN | /24 | Common SAN VLAN

Table 5.10 SPB BEB DC1 Virtual LANs and Subnets

BEB (Data Center 2)

VLAN ID | VLAN Name | Subnet | Description
2 | IST | /30 | IST VLAN
5 | B-VLAN 1 | /24 | SPB B-VLAN 1
6 | B-VLAN 2 | /24 | SPB B-VLAN 2
10 | Management | /24 | Common Management VLAN
101 | Aura2 | /24 | Avaya Aura VLAN 2
110 | vMotion | /24 | Common vMotion VLAN
111 | VMFT | /24 | Common VM Fault Tolerance VLAN
112 | App1 | /24 | Common Application 1 VLAN
113 | App2 | /24 | Common Application 2 VLAN
114 | App3 | /24 | Common Application 3 VLAN
115 | App4 | /24 | Common Application 4 VLAN
116 | CMFT | /24 | Common CM Fault Tolerance VLAN
117 | Guest | /24 | Common Guest VLAN
120 | SAN | /24 | Common SAN VLAN

Table 5.11 SPB BEB DC2 Virtual LANs and Subnets
BEB (Access)

VLAN ID | VLAN Name | Subnet | Description
2 | IST | /30 | IST VLAN
5 | B-VLAN 1 | /24 | SPB B-VLAN 1
6 | B-VLAN 2 | /24 | SPB B-VLAN 2
10 | Management | /24 | Common Management VLAN
117 | Guest | /24 | Common Guest VLAN
200 | Converged1 | /24 | Wiring Closet 1 Converged VLAN 1
201 | User1 | /24 | Wiring Closet 1 User VLAN 1
210 | Converged2 | /24 | Wiring Closet 2 Converged VLAN 2
211 | User2 | /24 | Wiring Closet 2 User VLAN 2
220 | Converged3 | /24 | Wiring Closet 3 Converged VLAN 3
221 | User3 | /24 | Wiring Closet 3 User VLAN 3
230 | Converged4 | /24 | Wiring Closet 4 Converged VLAN 4
231 | User4 | /24 | Wiring Closet 4 User VLAN 4

Table 5.12 SPB BEB Access Virtual LANs and Subnets
Figure 5.8 SPB Virtual LANs and Subnets
5.4.2 Connection Details

The following diagrams show how the Access Layer and Data Center BEBs connect to the core BCBs:

Figure 5.9 BCB Access BEB Connection Details
Figure 5.10 BCB DC1 BEB Connection Details
Figure 5.11 BCB DC2 BEB Connection Details
5.4.3 Configuration Notes

Enable the following features on the SPB core switches to evaluate Avaya Aura and VMware operation.

Note: Unless otherwise stated, each feature is implemented following Avaya's current best practices and recommendations.

DHCP Relay: Enable on the virtual IP interfaces on the BEBs for access layer VLANs only.

IP Routing Services: Assign a virtual IP interface to each routed data center layer VLAN. For VRRP with BackupMaster IP addresses:
- Assign .1 through .4 to the BEB switches.
- Assign .254 as the virtual IP address.
- Assign four separate VRRP priorities (250, 200, 150 and 100).
- Distribute the VRRP master for each data center VLAN between the data center BEB switches.
Assign a virtual IP interface to each access layer VLAN. For VRRP with BackupMaster IP addresses:
- Assign .1 and .2 to each VLAN as real IP addresses.
- Assign .254 as the virtual IP address.

SMLT: Configure each BEB switch pair as a switch cluster. Configure an SMLT virtual B-MAC on both cluster switches. Configure each SMLT cluster switch as a peer with its neighbor's System-ID (B-MAC).

SPB: Define primary and secondary B-VLANs. Configure each ERS 8800 switch with a unique Node Name and System-ID. Configure each ERS 8800 switch with a B-MAC that is easily recognizable for troubleshooting purposes. Assign a loopback interface with a unique IP address to each VSP 9000 or ERS 8800 switch. Enable IS-IS using a common manual area. Enable CFM on each ERS 8800 switch.

Table 5.13 General Core Configuration Notes
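For orientation, the SPB rows of Table 5.13 might translate into something like the sketch below on one core switch. The B-VLAN IDs follow Table 5.9, while the nick-name, system ID, IS-IS area, and NNI ports are placeholder values; the exact command set differs between ERS 8800 and VSP 9000 releases, so verify each line against the SPB configuration guide for your platform.

    ! IS-IS/SPBM instance, identifiers, and area (identifier values are examples)
    router isis
    spbm 1
    spbm 1 nick-name 0.00.01
    spbm 1 b-vid 5-6 primary 5
    system-id 00bb.0000.0001
    manual-area 49.0000
    exit
    ! Backbone VLANs 5 and 6 from Table 5.9
    vlan create 5 type spbm-bvlan
    vlan create 6 type spbm-bvlan
    ! Enable IS-IS on the NNI ports toward the neighboring SPB nodes
    interface gigabitEthernet 1/1,2/1
    isis
    isis spbm 1
    isis enable
    exit
    router isis enable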
SPB Topology 1

Enable the following features on the SPB core switches for Topology 1. Unless otherwise stated, each feature is implemented following Avaya's current best practices and recommendations.

Virtual Router Forwarders (VRFs): Assign Application and Avaya Aura VLANs to the global VRF 0 on the data center BEBs. Assign User and Converged VLANs to the global VRF 0 on the access BEB. Enable IS-IS route distribution on the access and data center BEBs, permitting IP forwarding of User, Converged, Application, and Avaya Aura VLANs over the virtual switching fabric (IP Shortcuts).

Virtual Service Networks (VSNs): Extend SAN, CM Fault Tolerant, Application, VMware Fault Tolerant, and vMotion VLANs between data centers using individual VSNs.

Table 5.14 SPB Core Configuration Notes (Topology 1)

SPB Topology 2

Enable the following features on the SPB core switches for Topology 2. Unless otherwise stated, each feature is implemented following Avaya's current best practices and recommendations.

Virtual Router Forwarders (VRFs): Assign Application and Avaya Aura VLANs to a VRF named Purple on the data center BEBs. Assign all User and Converged VLANs to a VRF named Purple on the access BEB.

Virtual Service Networks (VSNs): Assign the Purple VRFs to a VSN over the virtual switching fabric (Layer 3 VSN). Extend SAN, CM Fault Tolerant, Application, VMware Fault Tolerant, and vMotion VLANs between data centers using individual VSNs.

Table 5.15 SPB Core Configuration Notes (Topology 2)
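The practical difference between the two topologies on a BEB is sketched below: Topology 1 redistributes local routes into IS-IS for IP Shortcuts, while Topology 2 binds a VRF to an I-SID to form the L3 VSN. The VRF name follows the tables above; the I-SID value is an example, and the apply step is shown as it appears on recent releases.

    ! Topology 1: IP Shortcuts - announce locally connected routes into IS-IS (GRT)
    router isis
    redistribute direct
    redistribute direct enable
    exit
    isis apply redistribute direct
    ! Topology 2: L3 VSN - create the Purple VRF and bind it to an example I-SID
    ip vrf purple
    router vrf purple
    ipvpn
    i-sid 30001
    ipvpn enable
    isis redistribute direct
    isis redistribute direct enable
    exit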
SPB Topology 3

Enable the following features on the SPB core switches for Topology 3. Unless otherwise stated, each feature is implemented following Avaya's current best practices and recommendations.

Virtual Router Forwarders (VRFs): Assign Application VLANs to a VRF named Purple on the data center BEBs. Assign Avaya Aura VLANs to a VRF named Green on the data center BEBs. Assign all User VLANs to a VRF named Purple on the access BEB. Assign all Converged VLANs to a VRF named Green on the access BEB.

Virtual Service Networks (VSNs): Assign the Purple VRFs to a VSN over the virtual switching fabric (Layer 3 VSN). Assign the Green VRFs to a second VSN over the virtual switching fabric (Layer 3 VSN). Extend SAN, CM Fault Tolerant, Application, VMware Fault Tolerant, and vMotion VLANs between data centers using individual VSNs.

Table 5.16 SPB Core Configuration Notes (Topology 3)
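Topology 3 repeats the same VRF-to-I-SID binding for a second VRF, giving two fully separated L3 VSNs; both I-SIDs below are examples. No routes are leaked between the VRFs, so the data center firewalls remain the only path between the Green and Purple services.

    ! Two isolated L3 VSNs: Purple (Application/User) and Green (Aura/Converged)
    ip vrf purple
    router vrf purple
    ipvpn
    i-sid 30001
    ipvpn enable
    exit
    ip vrf green
    router vrf green
    ipvpn
    i-sid 30002
    ipvpn enable
    exit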
6. Test Results

This section lists the test results in three sections:

- ERS 8800 core test results
- VSP 9000 core test results
- Lossless Ethernet test results

6.1 ERS 8800 core test results

The following table shows the test results with ERS 8800s in the core and ERS 5600s in the data center:

Connectivity Tests
- Configure VLACP on the MLT ports to verify that it functions correctly.
- Configure SLPP for the SMLTs and simulate a loop to verify that SLPP identifies the loop and disables the port.
- Configure VRRP over the data center SMLT to verify that it functions correctly.
- Configure IP Source Guard, DHCP-snooping, and Dynamic ARP Inspection (DAI). Simulate an illegal message to the VLAN to verify that the message is identified and discarded.
- Configure CP-limit on a port and simulate a high volume of broadcast and multicast traffic to verify that CP-limit disables the port.
- Configure BPDU filtering on the edge switches and simulate loops to verify that BPDU filtering disables the port when it receives a BPDU packet.
- Configure Avaya phones with MAC authentication and ADAC to verify that they are authenticated and registered correctly to the Aura servers.
- Configure Avaya phones with EAP authentication and ADAC to verify that they are authenticated and registered correctly to the Aura servers.
- Connect the SAN to the data center to verify that it functions correctly.
- Install virtualized applications in the data center and verify that all the data center servers function correctly.
- Configure L2 VSNs and L3 VSNs and verify that traffic flows correctly.
Access Failover Tests
- Simulate link failures between the access BEBs and BCBs to verify that calls, video streams, and data traffic fail over with no noticeable impact.
- Recover the failed links between the access BEBs and BCBs to verify that calls, video streams, and data traffic fail over with no noticeable impact.
- Simulate SMLT link failures between access switches (ERS 2500, 4500, 5600, and 8300) and access BEBs to verify that calls, video streams, and data traffic fail over with no noticeable impact.
- Recover the failed SMLT links between the access switches and access BEBs to verify that calls, video streams, and data traffic fail over with no noticeable impact.

Data Center Link Failover Tests
- Simulate link failures between data center BEBs and BCBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Recover the failed links between data center BEBs and BCBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Simulate link failures between ToR switches and data center BEBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Recover the failed links between the ToR switches and the data center BEBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Simulate link failures between the SAN and data center BEBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Recover the failed links between the SAN and data center BEBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.

Data Center Switch Failover Tests
- Upgrade, downgrade, power down, and power up the data center BEBs to verify that the connections stay intact and that all traffic fails over with no noticeable impact.
- Power down and power up the ToR base unit to verify that the connections stay intact and that all traffic fails over with no noticeable impact.
- Upgrade, downgrade, power down and power up the whole stack to verify that the connections stay intact and that all traffic fails over with no noticeable impact.
Core Switch Failover Tests
- Upgrade, downgrade, power down, and power up the BCBs to verify that the connections stay intact and that all traffic fails over with no noticeable impact.

vMotion Tests
- Use vMotion to move virtual machines within a data center and between data centers. Verify that there is no noticeable impact on the real-time video streams and voice calls from those virtual machines.
- Verify that vMotion learned all the MAC addresses and that the Aura application virtual machines were moved successfully from one data center to the other.

VMware Fault Tolerance (FT) Tests
- Verify that Fault Tolerance works correctly on another host, both within the same data center and in a different data center.
- Verify that Fault Tolerance works correctly on virtual machines with real-time applications.
- With Fault Tolerance enabled, verify that when the primary server fails the backup server makes the applications available with no data loss or impact on service.

VMware NIC Teaming Tests
- Verify that routes based on the virtual machine's original port ID are routed out from the uplink port on the virtual port where traffic entered the virtual machine.
- Verify that routes based on source MAC hashing are routed out from the uplink port on the hash of the source Ethernet MAC address.
- Verify that routes based on IP hashing are routed out from the uplink port on the hash of the source and the destination IP address of each packet.

Lossless Tests
- Verify that Lossless Ethernet works with no data loss during massive transmissions between the ERS 5600 and the Coraid network adapter.
- Verify that Lossless Ethernet works both within the same data center and between data centers.

Aura Redundancy Tests
- Verify that Avaya SIP phones can register correctly with the Session Manager in the data centers.
- Verify that the backup Communication Manager takes over when the primary fails.
- Verify that the backup ESS Communication Manager in one data center takes over when the primary in the other data center fails.

Table 6.1 ERS 8800 Core Test Results
6.2 VSP 9000 core test results

The following table shows the test results with VSP 9000s in the core and VSP 7000s in the data center:

Data Center Link Failover Tests
- Simulate link failures between the data center BEBs and BCBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Recover the failed links between the data center BEBs and BCBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Simulate link failures between the ToR switches and the data center BEBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Recover the failed links between the ToR switches and the data center BEBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Simulate link failures between the SAN and the data center BEBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.
- Recover the failed links between the SAN and the data center BEBs to verify that the VMware, vMotion, SAN, and applications connections stay intact and that calls, video streams, and data traffic fail over with no noticeable impact.

Data Center Switch Failover Tests
- Upgrade, downgrade, power down, and power up the data center BEBs to verify that the connections stay intact and that all traffic fails over with no noticeable impact.
- Power down and power up the ToR base unit to verify that the connections stay intact and that all traffic fails over with no noticeable impact.
- Upgrade, downgrade, power down and power up the whole stack to verify that the connections stay intact and that all traffic fails over with no noticeable impact.

Core Switch Failover Tests
- Upgrade, downgrade, power down, and power up the BCBs to verify that the connections stay intact and that all traffic fails over with no noticeable impact.
VSP 7000 Stacking Tests
- Replace the base unit (BU) in the stack to verify that the Auto Unit Replacement (AUR) feature works correctly with the configuration automatically restored to the new BU. Confirm that there is no traffic interruption on the other stack members when you replace the BU.
- Replace the temporary base unit (Temp-BU) in the stack to verify that the AUR feature works correctly with the configuration automatically restored to the new Temp-BU. Confirm that there is no traffic interruption on the other stack members when you replace the Temp-BU.
- Replace one of the non-base units (NBU) in the stack to verify that the AUR feature works correctly with the configuration automatically restored to the new NBU. Confirm that there is no traffic interruption on the other stack members when you replace the NBU.
- Simulate a stacking cable failure to verify that the intra-stack switch and inter-switch (SMLT) uplinks work correctly.
- Upgrade the stack image to verify reconvergence.

Three of these stacking test cases were recorded as Fail.

Table 6.2 VSP 9000 Core Test Results
6.3 Lossless Ethernet test results

The following table shows the Lossless Ethernet test results, which were obtained with VSP 9000s in the core and VSP 7000s in the data center:

- Configure Lossless Ethernet on various sets of VSP 7000 ports to verify that there is no packet loss.
- Configure Lossless Ethernet with some ports oversubscribed and others not oversubscribed to verify that there is no packet loss.
- Configure oversubscription on the same stack link and separate stack links to verify that there is no packet loss.
- Generate pause frames on an ERS 5600 stack and send them to a VSP 7000 stack to verify interoperability between the two stack types and to verify that there is no packet loss.
- Generate pause frames on a VSP 7000 stack and send them to an ERS 5600 stack to verify interoperability between the two stack types and to verify that there is no packet loss.
- Generate pause frames on a VSP 9000 and send them to a VSP 7000 stack to verify interoperability and to verify that there is no packet loss.
- Generate pause frames on a VSP 7000 stack and send them to a VSP 9000 to verify interoperability and to verify that there is no packet loss.
- Generate pause frames on an ERS 5600 stack and on a VSP 9000 and send the frames to each other through a VSP 7000 stack to verify interoperability and to verify that there is no packet loss.
- Generate pause frames on ERS 5600 10GbE Fiber MLT links and send them to a VSP 7000 stack to verify interoperability between the two stack types and to verify that there is no packet loss.
- Generate pause frames on VSP 7000 10GbE Fiber MLT links and send them to an ERS 5600 stack to verify interoperability between the two stack types and to verify that there is no packet loss.
- Generate pause frames on ERS 5600 10GbE Fiber LACP links and send them to a VSP 7000 stack to verify interoperability between the two stack types and to verify that there is no packet loss.
- Generate pause frames on VSP 7000 10GbE Fiber LACP links and send them to an ERS 5600 stack to verify interoperability between the two stack types and to verify that there is no packet loss.
- Configure STP for multiple VLANs and STGs with oversubscription to verify the STP operation and that there is no packet loss.
- Configure MSTP for multiple VLANs and MSTIs with oversubscription to verify the MSTP operation and that there is no packet loss.
- Configure LLDP for multiple VLANs and STGs with oversubscription to verify the LLDP operation and that there is no packet loss.
- Configure VLACP for multiple VLANs and STGs with oversubscription to verify the VLACP operation and that there is no packet loss.
- Power down and power up each unit in the stack one by one to verify that the connections stay intact and that all traffic fails over with no packet loss.
- Simulate a stacking cable failure to verify that pause frames are still sent and there is no packet loss.
- Renumber the units in the stack to verify that pause frames are still sent and there is no packet loss.
- Verify that Agent Auto Unit Replacement (AAUR) and Auto Unit Replacement (AUR) function properly in a stack in lossless mode running traffic.
- Change the Layer 2 path to verify that traffic recovers with no packet loss.
- Increase CPU utilization to 75-80% to verify that pause frames are still sent with no packet loss.
- Configure a mix of unicast, multicast, and broadcast traffic to the same traffic points to verify that there is no packet loss.
- Configure a mix of unicast L2 traffic and routed L3 traffic to the same traffic points to verify that there is no packet loss.
- Verify that traffic (L2 only and a mix of L2 and L3) runs between a VSP 7000 connected by SMLT to an ERS 5600 aggregation switch with no packet loss.
- Verify that traffic oversubscription does not generate memory leaks.
- Change the mode from lossless to non-lossless and then back to lossless to verify that the configuration is preserved and pause frames are still sent.
- Connect a Dell server to a VSP 7000 and send pause frames to verify interoperability.
- Simulate a connection link failure and recovery between the Dell server and the VSP 7000 to verify that traffic recovers.
- Recover the failed links between the access BEBs and BCBs to verify that calls, video streams, and data traffic fail over with no noticeable impact.

Table 6.3 Lossless Ethernet Test Results
7. Conclusion

The configurations described in this document were thoroughly tested in the Avaya Networking Test Lab and all of the test cases passed. There was a particular focus on virtualization, such as adding a new application that is virtualized across two data centers. Testing also proved how virtualization improves maintenance: by moving virtual machines from one data center to the other, you can perform core and edge maintenance without having to take an application out of service. In addition to virtualization, many tests also demonstrated the resiliency of the network and that the Avaya VENA Data Center is fully interoperable with its industry-leading partners.

But don't just take our word for it! Avaya commissioned Miercom Independent Testing Labs to run their own suite of tests on the VENA Data Center. They scientifically measured how long it took to accomplish the most common data center tasks in a traditional data center and compared that to an SPB-enabled data center. Miercom also tested the resiliency of SPB and its ability to handle VM migrations. To review the complete Miercom test results, go to
2012 Avaya Inc. All Rights Reserved.

Avaya and the Avaya Logo are trademarks of Avaya Inc. and are registered in the United States and other countries. All trademarks identified by ®, TM or SM are registered marks, trademarks, and service marks, respectively, of Avaya Inc. All other trademarks are the property of their respective owners. Avaya may also have trademark rights in other terms used herein. References to Avaya include the Nortel Enterprise business, which was acquired as of December 18, 2009.
Administering Avaya Video Conferencing Solution Advanced Topics
Administering Avaya Video Conferencing Solution Advanced Topics 04-603308 Issue 1 Release 6.1 April 2012 Contents Chapter 1: Overview of Avaya Video Conferencing Solution....... 9 Components......................................
Deploying Avaya Contact Center Select Software Appliance
Deploying Avaya Contact Center Select Software Appliance Release 6.4 Issue 01.02 December 2014 2014 Avaya Inc. All Rights Reserved. Notice While reasonable efforts have been made to ensure that the information
Using Avaya Aura Messaging
Using Avaya Aura Messaging 6.0 November 2011 2010 Avaya Inc. All Rights Reserved. Notice While reasonable efforts have been made to ensure that the information in this document is complete and accurate
Chapter 1 Reading Organizer
Chapter 1 Reading Organizer After completion of this chapter, you should be able to: Describe convergence of data, voice and video in the context of switched networks Describe a switched network in a small
Smart Tips. Enabling WAN Load Balancing. Key Features. Network Diagram. Overview. Featured Products. WAN Failover. Enabling WAN Load Balancing Page 1
Smart Tips Enabling WAN Load Balancing Overview Many small businesses today use broadband links such as DSL or Cable, favoring them over the traditional link such as T1/E1 or leased lines because of the
Network Virtualization
. White Paper Network Services Virtualization What Is Network Virtualization? Business and IT leaders require a more responsive IT infrastructure that can help accelerate business initiatives and remove
Feature Comparison. Windows Server 2008 R2 Hyper-V and Windows Server 2012 Hyper-V
Comparison and Contents Introduction... 4 More Secure Multitenancy... 5 Flexible Infrastructure... 9 Scale, Performance, and Density... 13 High Availability... 18 Processor and Memory Support... 24 Network...
Virtualizing the SAN with Software Defined Storage Networks
Software Defined Storage Networks Virtualizing the SAN with Software Defined Storage Networks Introduction Data Center architects continue to face many challenges as they respond to increasing demands
Chapter 7 Configuring Trunk Groups and Dynamic Link Aggregation
Chapter 7 Configuring Trunk Groups and Dynamic Link Aggregation This chapter describes how to configure trunk groups and 802.3ad link aggregation. Trunk groups are manually-configured aggregate links containing
Network Configuration Example
Network Configuration Example Configuring Link Aggregation Between EX Series Switches and Ruckus Wireless Access Points Modified: 2015-10-01 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California
Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results. May 1, 2009
Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results May 1, 2009 Executive Summary Juniper Networks commissioned Network Test to assess interoperability between its EX4200 and EX8208
VMware Virtual SAN 6.2 Network Design Guide
VMware Virtual SAN 6.2 Network Design Guide TECHNICAL WHITE PAPER APRIL 2016 Contents Intended Audience... 2 Overview... 2 Virtual SAN Network... 2 Physical network infrastructure... 3 Data center network...
Multi-Chassis Trunking for Resilient and High-Performance Network Architectures
WHITE PAPER www.brocade.com IP Network Multi-Chassis Trunking for Resilient and High-Performance Network Architectures Multi-Chassis Trunking is a key Brocade technology in the Brocade One architecture
Course. Contact us at: Information 1/8. Introducing Cisco Data Center Networking No. Days: 4. Course Code
Information Price Course Code Free Course Introducing Cisco Data Center Networking No. Days: 4 No. Courses: 2 Introducing Cisco Data Center Technologies No. Days: 5 Contact us at: Telephone: 888-305-1251
Management Software. User s Guide AT-S84. For the AT-9000/24 Layer 2 Gigabit Ethernet Switch. Version 1.1. 613-000368 Rev. B
Management Software AT-S84 User s Guide For the AT-9000/24 Layer 2 Gigabit Ethernet Switch Version 1.1 613-000368 Rev. B Copyright 2006 Allied Telesyn, Inc. All rights reserved. No part of this publication
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall Semester 2013 1 Walmart s Data Center 2 Amadeus Data Center 3 Google s Data Center 4 Data Center
TP-LINK. JetStream 28-Port Gigabit Stackable L3 Managed Switch. Overview. Datasheet T3700G-28TQ. www.tp-link.com
TP-LINK JetStream 28-Port Gigabit Stackable L3 Managed Switch Overview TP-LINK s is an L3 managed switch designed to build a highly accessible, scalable, and robust network. The switch is equipped with
EVOLVING ENTERPRISE NETWORKS WITH SPB-M APPLICATION NOTE
EVOLVING ENTERPRISE NETWORKS WITH SPB-M APPLICATION NOTE EXECUTIVE SUMMARY Enterprise network managers are being forced to do more with less. Their networks are growing in size and complexity. They need
Position Paper The new data center edge Horizontal Stacking and Switch Clustering
Position Paper The new data center edge Horizontal Stacking and Switch Clustering Data center requirements The enterprise data center is one of the most critical areas of the network. This is especially
Virtual PortChannels: Building Networks without Spanning Tree Protocol
. White Paper Virtual PortChannels: Building Networks without Spanning Tree Protocol What You Will Learn This document provides an in-depth look at Cisco's virtual PortChannel (vpc) technology, as developed
HUAWEI Tecal E6000 Blade Server
HUAWEI Tecal E6000 Blade Server Professional Trusted Future-oriented HUAWEI TECHNOLOGIES CO., LTD. The HUAWEI Tecal E6000 is a new-generation server platform that guarantees comprehensive and powerful
Ethernet Fabrics: An Architecture for Cloud Networking
WHITE PAPER www.brocade.com Data Center Ethernet Fabrics: An Architecture for Cloud Networking As data centers evolve to a world where information and applications can move anywhere in the cloud, classic
TP-LINK. 24-Port Gigabit L2 Managed Switch with 4 SFP Slots. Overview. Datasheet TL-SG5428. www.tp-link.com
TP-LINK TM 24-Port Gigabit L2 Managed Switch with 4 SFP Slots Overview Designed for workgroups and departments, from TP-LINK provides full set of layer 2 management features. It delivers maximum throughput
Top of Rack: An Analysis of a Cabling Architecture in the Data Center
SYSTIMAX Solutions Top of Rack: An Analysis of a Cabling Architecture in the Data Center White paper Matthew Baldassano, Data Center Business Unit CommScope, Inc, June 2010 www.commscope.com Contents I.
TechBrief Introduction
TechBrief Introduction Leveraging Redundancy to Build Fault-Tolerant Networks The high demands of e-commerce and Internet applications have required networks to exhibit the same reliability as the public
AT-GS950/8. AT-GS950/8 Web Users Guide AT-S107 [1.00.043] Gigabit Ethernet Smart Switch. 613-001484 Rev A
AT-GS950/8 Gigabit Ethernet Smart Switch AT-GS950/8 Web Users Guide AT-S107 [1.00.043] 613-001484 Rev A Copyright 2011 Allied Telesis, Inc. All rights reserved. No part of this publication may be reproduced
Avaya Contact Center Select Business Continuity
Avaya Contact Center Select Business Continuity Release 6.4 Issue 01.01 December 2014 2014 Avaya Inc. All Rights Reserved. Notice While reasonable efforts have been made to ensure that the information
48 GE PoE-Plus + 2 GE SFP L2 Managed Switch, 375W
GEP-5070 Version: 1 48 GE PoE-Plus + 2 GE SFP L2 Managed Switch, 375W The LevelOne GEP-5070 is an intelligent L2 Managed Switch with 48 x 1000Base-T PoE-Plus ports and 2 x 100/1000BASE-X SFP (Small Form
VXLAN: Scaling Data Center Capacity. White Paper
VXLAN: Scaling Data Center Capacity White Paper Virtual Extensible LAN (VXLAN) Overview This document provides an overview of how VXLAN works. It also provides criteria to help determine when and where
Procedure: You can find the problem sheet on Drive D: of the lab PCs. Part 1: Router & Switch
University of Jordan Faculty of Engineering & Technology Computer Engineering Department Computer Networks Laboratory 907528 Lab. 2 Network Devices & Packet Tracer Objectives 1. To become familiar with
