CERTIFICATION STUDY GUIDE
VCE CERTIFIED PROFESSIONAL
VCE VBLOCK SYSTEMS DEPLOYMENT AND IMPLEMENTATION: NETWORK EXAM 210-025
Document revision 1.2, December 2014
© 2014 VCE Company, LLC. All rights reserved.
Revision History

Date            Revision  Author  Description of Changes
May 2014        1.0       VCE     Initial draft
July 2014       1.1       VCE     Final
December 2014   1.2       VCE     Formatting; updated links
Table of Contents

Obtaining the VCE-CIIEn Certification Credential
  VCE Vblock Systems Deployment and Implementation: Network Exam
  Recommended Prerequisites
  VCE Exam Prep Resources
  VCE Certification Website
  Accessing Related VCE Documentation
About This Study Guide
Vblock Systems Overview
Vblock Systems Network Components
  Vblock System 200
  Vblock System 300 Family
  Vblock System 700 Family
Vblock Systems Network Architecture
Network Management On The AMP
Vblock Systems Network Configuration
  Configuring the Switch
  Network Switches
  MDS SAN Switches
  Routing Architecture
  Spanning Tree
  VLANs
  Creating VLANs
  VLAN Trunking Protocol (VTP)
  VSANs
  Port Channels
  Configuring Port Channels
  Virtual Port Channels (vPCs)
  Deploying Virtual Port Channels
  Quality of Service (QoS)
  Hot Standby Router Protocol (HSRP)
  Virtual Networking
  Cisco Nexus 1000V
  Configuring Nexus 1000V Switches
  Virtual Route Forwarding (VRF) and Virtual Device Contexts (VDC)
Validate Networking and Storage Configurations
Network Upgrades
Security
  Additional Network Security Features
Troubleshooting
Conclusion
Obtaining the VCE-CIIEn Certification Credential

The VCE Certified Professional program validates that qualified IT professionals can design, manage, configure, and implement Vblock Systems. The VCE Certified Converged Infrastructure Implementation Engineer (VCE-CIIE) credential verifies proficiency with the deployment methodology and management concepts of the VCE Converged Infrastructure. VCE-CIIE credentials assure customers that a qualified implementer with a thorough understanding of Vblock Systems is deploying their systems.

The VCE-CIIE track includes a core qualification and four specialty qualifications: Virtualization, Network, Compute, and Storage. Each specialty requires a passing score on the VCE Vblock Systems Deployment and Implementation: Core Exam and the corresponding specialty exam. To obtain the Certified Converged Infrastructure Implementation Engineer Network (CIIEn) certification, you must pass both the VCE Vblock Systems Deployment and Implementation: Core Exam and the VCE Vblock Systems Deployment and Implementation: Network Exam.

VCE Vblock Systems Deployment and Implementation: Network Exam

The VCE Vblock Systems Deployment and Implementation: Network Exam confirms that candidates have met all entrance, integration, and interoperability criteria and are technically qualified to install, configure, and secure the IP and storage network infrastructure on Vblock Systems. The exam covers network technology available at the time the exam was developed.

Recommended Prerequisites

There are no required prerequisites for taking the VCE Vblock Systems Deployment and Implementation: Network Exam; however, exam candidates should have working knowledge of Cisco network directors and switches in a production data center. This knowledge and experience should be obtained through formal instructor-led training and a minimum of one year of experience.
It is also highly recommended that exam candidates have training, knowledge, and/or working experience with industry-standard x86 servers and operating systems.

VCE Exam Prep Resources

VCE strongly recommends that exam candidates carefully review this study guide; however, it is not the only recommended preparation resource for the VCE Vblock Systems Deployment and Implementation: Network Exam, and reviewing this study guide alone does not guarantee passing the exam. VCE certification credentials require a high level of expertise, and you are expected to review the related Cisco, EMC, and VMware resources listed in the References document (available from the VCE Certification website). You are also expected to draw from real-world experience to answer the questions on the VCE certification exams. The certification exam also tests deployment and implementation concepts covered in the instructor-led training (ILT) course VCE Vblock Systems Deployment and Implementation, which is a recommended reference for the exam.

VCE Certification Website

Please refer to https://www.vce.com/services/training/certified/exams for more information on the VCE Certified Professional program and exam prep resources.

Accessing Related VCE Documentation

The descriptions of the various hardware and software configurations in this study guide apply generically to Vblock Systems. The Vblock System 200, Vblock System 300 family, and Vblock System 700 family Physical Build, Logical Build, Architecture, and Administration Guides contain more specific configuration details.
The related VCE documentation is available via the links listed below. Use the link relevant to your role.

Role                              Web Link
Customer                          http://support.vce.com
VCE Partner                       www.vcepartnerportal.com
VCE employee                      www.vceview.com/solutions/products/
Cisco, EMC, or VMware employee    https://portal.vce.com/solutions

Note: The websites listed above require some form of authentication using a username/badge and password.
About This Study Guide

The content in this study guide is relevant to the VCE Vblock Systems Deployment and Implementation: Network Exam. It provides information about Cisco network devices and how they integrate into VCE Vblock Systems. Specifically, it addresses installation, administration, and troubleshooting of network resources within the Vblock Systems environment.

This study guide focuses on deploying Cisco networking solutions in a VCE Vblock Systems converged infrastructure. Vblock Systems come preconfigured with specific customer-defined server, storage, and networking hardware. These components are already VCE qualified. The bulk of this guide concentrates on how to configure and manage the network infrastructure on Vblock Systems.

The following topics are covered in this study guide:

System Overview. An overview of Vblock Systems and the networking environment in particular, including an architectural review of Cisco network topology and an introduction to the Cisco network components specific to Vblock Systems.

AMP. A brief section on the Vblock Systems Advanced Management Pod (AMP) focuses on network connectivity and management of Vblock Systems as single systems. It addresses specific component deployment considerations when launching individual element managers.

Configuration and Optimization. Detailed configuration and implementation procedures address both segregated and unified network architectures, including optimizing the environment for virtual network switches using Cisco Nexus 1000V Series Switches.

Networking Concepts. Configuration concepts essential to successful Vblock Systems implementation.

High Availability. Configuring the network environment for high availability, including various options that maximize hardware redundancy.

Troubleshooting. Troubleshooting complex systems is a multifaceted problem. The focus here is mostly diagnostic.
Cisco has a number of tools to discover issues that interfere with network functionality and performance.
Vblock Systems Overview

VCE Vblock Systems combine industry-leading hardware components to create a robust, extensible platform for hosting VMware vSphere in an optimized, scalable environment. Vblock Systems use redundant hardware and power connections, which, when combined with clustering and replication technologies, create a highly available virtual infrastructure.

This study guide concentrates on the networking aspect of the converged infrastructure of the Vblock System 200, the Vblock System 300 family, and the Vblock System 700 family. These systems are made up of several components: UCS rack-mount servers and blade servers, Cisco Nexus unified and IP-only network switches, Cisco Catalyst management switches, Cisco MDS SAN switches, the Cisco Nexus 1000V virtual switch, VMware vSphere Hypervisor (ESXi) and VMware vCenter Server software, and EMC VNX (Vblock System 200 and Vblock System 300 family) or VMAX (Vblock System 700 family) storage systems.

Because Vblock Systems come largely preconfigured, this document discusses advanced configuration, concepts, and upgrades of networking components. It also explores the network management applications installed on the Advanced Management Pod (AMP).

Vblock Systems Architecture

Vblock Systems are complete, enterprise-class data center infrastructure platforms. They have a scaled-out architecture built for consolidation and efficiency. System resources are scalable through common and fully redundant components. The architecture allows for deployments involving large numbers of virtual machines and users. The specific hardware varies depending on the particular model and configuration of Vblock Systems.
The compute, storage, and network components include:

- Cisco Unified Computing System (UCS) environment components:
  - UCS rack-mount servers (Vblock System 200) and blade servers (Vblock System 300 family and Vblock System 700 family)
  - UCS chassis
  - UCS Fabric Interconnects
  - UCS I/O modules
- Redundant Cisco Catalyst and/or Nexus LAN switches
- Redundant Cisco MDS SAN switches, installed in pairs
- EMC VNX or VMAX enterprise storage arrays

The base configuration software, including VMware vSphere, comes preinstalled.

The Vblock Systems management infrastructure has two significant management components:

- The AMP resides on a designated server made up of management virtual machines. It functions as a centralized repository for Vblock Systems software management tools, including vCenter Server.
- The VCE Vision Intelligent Operations application resides on the AMP, providing a single source for Vblock Systems resource monitoring and management. VCE Vision software, the industry's first converged-architecture manager, has been designed with a consistent interface that interacts with all Vblock Systems components. VCE Vision software integrates tightly with vCenter Operations Manager, the management platform for the vSphere environment.
The diagram below provides a sample view of the Vblock Systems architecture. The Vblock System 340 is shown in this example.
Vblock Systems Network Components

Generally speaking, the network environment consists mainly of Cisco Nexus LAN switches and Cisco MDS dedicated SAN switches for customer TCP/IP and storage connectivity. That said, Vblock Systems support multiple network directors and switch types, with network component support specific to each Vblock Systems model. The list below introduces the individual Cisco switches used in Vblock Systems:

Nexus 5500 Series. Cisco Nexus 5500 series switches in the network layer provide 10 Gb IP connectivity or 2/4/8 Gb FC connectivity. There are two versions: the 5548 offers 960 Gbps of throughput with 32 base ports and one 16-port expansion module, and the 5596 offers 1920 Gbps of throughput with 48 base ports and three 16-port expansion modules.

Nexus 7010. Cisco developed the Nexus 7000 series as a data center-class LAN switch. The Nexus 7010 has a 10-slot chassis and supports up to 384 GbE ports and 7 Tbps of throughput. Like the Nexus 5500 series switches, it is based on NX-OS and can provide IP and FC connectivity.

MDS 9148. The MDS 9148 Multilayer Fabric Switch is a dedicated SAN switch used only for segregated storage networks. It has 48 ports operating at up to 8 Gbps. It uses the same NX-OS firmware that the Nexus switches are based on.

MDS 9513. The MDS 9513 Multilayer Director has been designed for large-scale storage networks and virtualized storage environments, with 32-port line cards for up to 256 Gbps.

AMP Connectivity. AMP (AMP-2 for the Vblock System 340) connectivity is not part of the network architecture per se; however, it does depend on Cisco switches. A Cisco Catalyst 3560 switch maintains IP connectivity for the AMP server in the Vblock System 700 platform, and a Cisco Nexus 3048 switch is used for AMP server connectivity in Vblock System 300 platforms. (Vblock System 200 platforms implement a logical AMP with no physical server.)

Switch features and topologies are unique, depending on the Vblock Systems platform.
It might be helpful to consider the switches within the context of their implementation.

Vblock System 200

Cisco Nexus 5548UP switches in the network layer provide both storage access and IP network access in a unified network scheme; Vblock System 200 platforms have no SAN-only configuration. The 5548UP switch supports 10 GbE IP connectivity and 8 Gb FC connectivity with up to 32 ports, plus an expansion module for an extra 16 ports.

Vblock System 300 Family

The latest iteration of the Vblock System 300 family, the Vblock System 340, supports both the Nexus 5548 and the 5596. Be aware that the Vblock System 340 has five different models; only the lowest-end model is limited to 5548 support. In unified networks, these Nexus switches handle both IP and FC connectivity. Segregated networks use the Nexus switches for IP and the MDS 9148 switch for storage connectivity.
Vblock System 700 Family

Network Switches. Like the Vblock System 300 family, base configurations of the Vblock System 700 platform rely on pairs of Nexus 5500 series switches for all network connections, and for FC connections in unified configurations. However, Vblock System 700 platforms can be upgraded with the Nexus 7010 switch. High-demand network configurations may require a modular, more scalable data center-class switch, and the Nexus 7010 meets that requirement.

Storage Area Network (SAN) Switches. Likewise, Vblock System 700 platforms offer an upgrade option for the SAN switches: MDS 9513 Multilayer Directors. The MDS 9148 is the default, but VCE customers can leverage the MDS 9513's advanced 32-port line cards with 8 Gbps on all ports. Additional 32-port line cards can be added as necessary.
Vblock Systems Network Architecture

This study guide is not an investigation into the intricate details of the Vblock Systems network architecture. Rather, it examines the Vblock Systems network in terms of implementation. Still, the architecture is significant.

The topologies for segregated and unified Vblock Systems networks are based on the Cisco model, represented in the diagram below. Clearly, these are very simple designs. The segregated network uses dedicated MDS switches for storage access, whereas the unified network uses just the Nexus switches for both LAN and SAN. Notice the redundancy built into the system in both paradigms: SAN switches, LAN switches, and UCS Fabric Interconnects are all installed in pairs with multiple connections.

If we expand this Cisco model to incorporate the entire Vblock Systems infrastructure, the models are somewhat more complex. The diagram below depicts the unified architecture of the Vblock System 340. The unified configuration includes no MDS switch: the Nexus switch maintains an FC connection to the storage processor as well as to the Fabric Interconnect in the compute environment. In addition, the Nexus switch maintains Ethernet connections. The segregated network, on the other hand, implements MDS switches for all FC connectivity, and the network switches are configured only for IP connections.
Network Management On The AMP

The Advanced Management Pod (AMP) is the primary management point for the Vblock Systems environment. The AMP is a specific set of hardware in the Vblock Systems, typically in a high-availability configuration, that contains all the virtual machines and virtual applications (vApps) necessary to manage the Vblock Systems infrastructure. The Cisco Data Center Network Manager (DCNM) and Cisco NX-OS are installed and run on the AMP.

The AMP is a Vblock Systems management server used to monitor and manage Vblock Systems health, performance, and capacity. Since it resides on a server separate from the production system, it supplies fault isolation for management without a drain on system resources, while providing a clear point of demarcation for administrative operations.

Architecturally, the AMP is a vCenter cluster made up of management virtual machines (VMs), including a VM for element managers, where DCNM resides. A VM for the Nexus 1000V virtual switch is also one of the core AMP VMs. In the Vblock System 300 and Vblock System 700, the AMP is contained in C220 server(s); the Vblock System 200 implements a logical AMP.

The diagram below depicts VLAN network connectivity from the AMP. Again, the AMP is isolated. The connections in red on the right belong to the management system with separate management switches. On the left are the AMP connections to the customer network.
Virtual port channels (vPCs) between the AMP network switches and the upstream switches allow for redundancy. If one of the uplinks in the port channel is disabled, the remaining uplinks continue to pass traffic. The diagram below shows the vPC connections in blue. (We discuss VLANs and virtual port channels in detail later in this document.)
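As a rough illustration of this uplink redundancy, a minimal vPC setup on one of a pair of Nexus switches looks something like the sketch below. The domain ID, peer-keepalive addresses, and interface numbers are illustrative assumptions, not values from any Vblock Systems build; actual values come from the customer's Logical Configuration Survey.

```
! Minimal vPC sketch (NX-OS) -- all numbers and addresses are hypothetical
feature vpc
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
! Peer link between the two Nexus switches in the pair
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! Uplink port channel toward the upstream customer switch
interface port-channel 20
  switchport mode trunk
  vpc 20
```

The same vPC number (20 here) is configured on both switches in the pair, so the upstream device sees a single logical port channel even though its member links land on two physical switches.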
Vblock Systems Network Configuration

Vblock Systems network configuration is driven by application requirements, but sizing a Vblock Systems network is fairly simple. As compute and storage requirements scale up, the network scales at the same time. Port densities and oversubscription ratios are architected into the platform, so network capacity increases with the number of chassis and blades in the specific Vblock Systems model.

High availability and redundancy are built into the Vblock Systems network. Failure of a component at the aggregation, distribution, or access layer cannot separate a chassis or array from the network. This is essential to the success of the Vblock Systems unified network and network convergence. Furthermore, policies can be applied end-to-end across the network infrastructure using NX-OS. This structure lets VCE customers design the network's physical and logical topology, and then lay policies on top of the infrastructure.

Configuring the Switch

Vblock Systems are shipped with network components initially configured and installed during manufacturing. Still, VCE expects CIIEn professionals to be familiar with the base configuration process and to be well prepared for deployment challenges and upgrades. Each Vblock Systems deployment is unique, and VCE customer resource documentation is essential, notably the Logical Configuration Survey (LCS), which contains all the customer site-specific network information necessary for initial Vblock Systems deployment. Additional VCE documentation is also available, including the VCE Vblock Systems Physical and Logical Build Guides, as well as the VCE Vblock Systems Release Certification Matrix (RCM).

The following information provides a general overview of the base configuration procedure. For details, refer to VCE Deployment and Implementation (D&I) documentation as well as Cisco configuration documentation.
Network Switches

Nexus switches (5000 series and 7000 series) function either strictly as IP network switches or as consolidation switches carrying IP, SAN, and IPC communication. Ethernet features (including VLAN membership) are configured on the physical interface. FCoE provides the consolidation functionality (including VSAN membership), and Nexus switches use a virtual interface to represent the FC connections. Virtual FC interfaces are implemented as Layer 2 (L2) subinterfaces of the Ethernet interface.

Switches are installed in pairs for redundancy, with firmware loaded using the Cisco kickstart image. It is important to verify the NX-OS firmware as well as the physical connections and port parameters before configuring the switch. Base configuration includes setting an IP address for the management interface, the default gateway, and the NTP server. It also establishes network services, basic FC configurations, port trunking, and zone traffic flow.

VCE scripts define the advanced configuration. Each switch model, and each configuration of each switch, has a different script: the 5548UP switch has a different script set than the 5596UP switch, and the 5548UP in a segregated environment has a different script set than the 5548UP in a unified environment. The scripts follow customer requirements to integrate with the customer LAN and SAN. This study guide examines these script assignments and the concepts behind them more closely, but generally speaking they assign switch hostnames, VLANs, and port assignments for each switch. Separate Quality of Service (QoS) command sets exist for setting up block-only and unified networks.

Port profiles are used to configure port interfaces, and they ensure consistency across port configurations. Profile templates contain interface configuration details; ports and port groups inherit information from the template.
Changes to the port profile are automatically propagated to its assigned ports. In Vblock Systems, vCenter Server represents port profiles as port groups. Cisco switches have three types of port profiles: Ethernet, Interface-VLAN (SVI), and Port Channel.
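The base settings and port-profile behavior described above can be sketched as follows. All addresses, names, and interface ranges here are hypothetical placeholders; real values come from the LCS.

```
! Base management settings (illustrative values only)
interface mgmt0
  ip address 10.1.1.10/24
vrf context management
  ip route 0.0.0.0/0 10.1.1.1
ntp server 10.1.1.100

! An Ethernet port profile; member interfaces inherit its settings,
! and later changes to the profile propagate to every assigned port
port-profile type ethernet ESX-UPLINK
  switchport mode trunk
  switchport trunk allowed vlan 101,105,106
  state enabled
interface ethernet 1/5-8
  inherit port-profile ESX-UPLINK
```

Editing the ESX-UPLINK profile later (for example, adding a VLAN to the allowed list) updates interfaces 1/5 through 1/8 in one operation, which is the consistency benefit the profile mechanism provides.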
There are additional configuration operations, namely making a connection to VCE Vision software, configuring additional syslog destinations, and setting up Call Home capabilities.

MDS SAN Switches

The Cisco MDS Multilayer Directors and Switches provide connectivity in Vblock Systems with large segregated storage requirements. They are director-class SAN switches designed for deployment in scalable enterprise environments. The MDS directors and switches address the critical requirements of large, virtualized data center storage environments, such as high availability, security, scalability, and ease of management, with simple integration for extremely flexible data center SAN solutions. Sharing the same operating system and management interface with other Cisco data center switches, the MDS directors provide high-performance Fibre Channel connectivity for Vblock Systems requiring dedicated storage access.

Vblock Systems are shipped with pairs of SAN switches installed with firmware and base configuration intact, including IP addresses for the host, the management interface, the default gateway, and NTP; default services; port trunking; and so on. As with the Nexus switches, the base setup and physical connections should be checked before continuing with a deployment.

For advanced configuration, the first step is to set up FC port-channel interfaces to the fabric interconnects in the Vblock Systems UCS environment. The next step is to create the VSANs, assign ports, and configure FC interfaces for them. VSANs use zones to group ports, and the zones need to be configured and activated. Finally, the Cisco Data Center Network Manager (DCNM) needs to be installed for SAN management, and the final configuration needs to be verified.

Routing Architecture

This study guide refers to various features and technologies functioning at the data plane or the control plane. To clarify, routing has three basic architectural components: the data plane, the control plane, and the management plane.
Data-plane traffic runs through the router or switch, not to or from it. The data plane transfers data to and from clients, handles traffic in multiple protocols, manages communication among remote peers, and enables hardware-accelerated features.

The control plane and management plane service the data plane. The control plane is responsible for programming the data plane. It supports a number of crucial software processes, including the routing information base and the various L2 and L3 protocols. All of these processes are important to the switch's interaction with other network nodes. The management plane handles administrative traffic: interfaces, IP subnets, and monitoring via SNMP, for example.

All three planes are carried in the firmware of network routers and switches. Often software takes over at the control plane for improvements in flexibility and administration. These configurations also determine security to some extent, because their parameters are so well defined. The chart below illustrates the security configurations at the management plane, the control plane, and the data plane.
Spanning Tree

The first important decision in the design of a data center network is the Spanning Tree design. The Spanning Tree Protocol (STP) ensures a loop-free topology across redundant paths throughout a switched network. The Spanning Tree design impacts system behavior during a failure. It requires careful consideration prior to actual network deployment, because it affects the function of features like virtual port channels.

Describing Spanning Tree algorithms in detail is beyond the scope of this document. Suffice it to say that current Spanning Tree topologies are based on one of two well-known industry algorithms: Rapid Per-VLAN Spanning Tree Plus (Rapid PVST+) and Multiple Spanning Tree (MST). (For more information about Rapid PVST+, go to http://www.cisco.com/en/us/partner/docs/switches/datacenter/sw/4_1/nxos/layer2/configuration/guide/l2_pvrstconfig.html. For MST, go to http://www.cisco.com/en/us/partner/docs/switches/datacenter/sw/4_1/nxos/layer2/configuration/guide/l2_mstpconfig.html.)

MST-based implementations are recommended for Vblock Systems, even though Rapid PVST+ is the default Spanning Tree mode on Cisco switches. Typically, Spanning Tree VLAN load balancing is easier in Rapid PVST+ than in MST. However, virtual port channel (vPC) deployments achieve VLAN load balancing automatically, with no need to change Spanning Tree priorities. So, with vPC topologies, any Rapid PVST+ advantage disappears.

Furthermore, MST scales better than Rapid PVST+. In MST, several VLANs can be mapped to a single Spanning Tree instance as a region, and the switch generates only one Bridge Protocol Data Unit (BPDU) that summarizes all the necessary information for the specific instance. As Layer 2 (L2) networks grow, scalability becomes more significant, and so does the ability of MST to maintain a regional topology.

Bridge Assurance is used to protect against a unidirectional link failure. It must be enabled on both ends of the link between two switches.
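A minimal sketch of the recommended MST mode follows. The region name, revision, and VLAN-to-instance mapping are illustrative assumptions; the network port type on the inter-switch link is what enables Bridge Assurance on that link.

```
! Enable MST and define the region (name, revision, and mapping are hypothetical)
spanning-tree mode mst
spanning-tree mst configuration
  name VBLOCK-REGION
  revision 1
  instance 1 vlan 101,105,106
! Bridge Assurance runs on network-type ports; configure both ends of the link
interface port-channel 1
  spanning-tree port type network
```

Because VLANs 101, 105, and 106 map to a single MST instance here, the switch emits one BPDU for that instance rather than one per VLAN, which is the scaling advantage described above.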
NX-OS supports three STP port types: edge, normal, and network. An edge port connects to a host and can be either an access port or a trunk port. Older Cisco switches configured edge ports with a feature called PortFast; without it, a port is a normal port. A network port connects only to a Layer 2 switch or bridge.

VLANs

In its simplest terms, a virtual LAN (VLAN) is a network partition with a distinct broadcast domain that communicates as if it were a single, isolated LAN. The idea is to limit traffic on large networks: only the ports belonging to a specific VLAN share broadcasts. Cisco switches create the broadcast domain, and the domain can include ports from different switches, meaning that Switch A can share a VLAN with ports on Switch B. Any port can belong to a VLAN; unicast, broadcast, and multicast packets are forwarded and flooded only to end stations in that VLAN. Communication among devices that do not share a VLAN requires a trunk port configured to forward packets to a router.

VLANs are identified numerically. They have a variety of configurable parameters, including name, packet size, type, and state.
Creating VLANs

Specific VLAN configuration commands differ, depending on the switch. Regardless, the tasks are the same. The first step is to create a new VLAN. The process begins by simply numbering it. Two ranges of ID numbers exist: a standard range from 1 through 1000 and an extended range from 1025 through 4096. A standard-range VLAN can be modified by changing its parameters: name (up to 32 characters) and state (active or suspended). If the VLAN is deleted, its ports are shut down with no traffic flow. Extended-range VLANs cannot be shut down; they are always enabled and active. Keep in mind also that several extended-range VLANs are reserved for specific purposes (e.g., multicast and diagnostics) and are unavailable. VLANs 3968 through 4047 and 4094 are internally allocated and cannot be modified or deleted.

Defined Vblock Systems VLANs:

- VLAN 101 (Vblock_Management)
- VLAN 105 (Vblock_ESX_Management)
- VLAN 106 (Vblock_ESX_VMotion)
- VLAN 116 (Vblock_N1K_L3_Control)
- VLAN 300 (Vblock_DataVlan)

Next, specify the VLAN ports. On a Cisco switch, every port is assigned to a single VLAN. By default, all ports automatically belong to VLAN 1 until designated otherwise. Most VLAN ports are static; once a network administrator assigns a port, it cannot be modified. But ports can also be assigned dynamically, based on the MAC address of the device.

A VLAN can be implemented as a switch virtual interface (SVI) for a Layer 3 (L3) routing or bridging system, which is a useful configuration for separating data traffic from management traffic. The port profiles for these L3 VLANs are system VLANs. NX-OS has specific support for management SVIs. Refer to the Cisco Nexus 5000 Series NX-OS System Management Configuration Guide for details.

VLAN Trunking Protocol (VTP)

Conceptually, VLANs seem uncomplicated. Controlling them over a large, complex network environment, less so.
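The numbering, naming, and port-assignment steps above look roughly like this in NX-OS. The VLAN ID and name follow the defined Vblock Systems VLAN list; the interface number is a hypothetical example.

```
! Create and name a standard-range VLAN, then assign an access port
vlan 101
  name Vblock_Management
  state active
interface ethernet 1/10
  switchport mode access
  switchport access vlan 101
```

Until the `switchport access vlan` command runs, interface 1/10 would remain in the default VLAN 1, per the default-membership behavior described above.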
For VLAN consistency, Cisco implements the VLAN Trunking Protocol (VTP), which defines the VLANs associated with a switch and propagates them across the whole network. Trunking is essentially a method of link sharing, where switch ports carry (or trunk) traffic for multiple VLANs. Interconnected ports can transmit and receive frames in more than one VLAN over the same physical link.

VTP keeps track of all the switches, the VLANs, and the trunk ports. It maintains a centralized database that contains the management domain, the configuration revision number, and the known VLANs associated with each trunk port. VTP mitigates many of the administrative complexities involved with VLANs. For instance, once a VTP domain is set up, there is no need to configure the same VLAN information on each switch; the VLAN is distributed to all the switches in the domain. VTP also prevents duplicate VLAN IDs and their accompanying security issues.

In terms of setup, VTP domains can be created on NX-OS devices. A series of commands enables VTP on the device (the default is disabled), establishes the domain name, and sets the VTP mode and password. A Cisco Nexus switch can operate in one of three VTP modes:

Server mode, for creating, modifying, and deleting VLANs and specifying additional configuration parameters. VTP servers advertise their VLAN configuration to other switches in the same domain and synchronize their VLAN configuration based on advertisements received over trunk links.

Client mode, which advertises configuration the same way VTP servers do. However, VTP clients cannot create, change, or delete VLANs.
Transparent mode, which does not directly participate in VTP. Rather, VTP advertisements are forwarded through the trunk links.

VSANs

In terms of configuration, VSANs are not unlike VLANs. Numbering the VSAN is the first step. The default VSAN ID is VSAN 1, and user-defined VSAN IDs range from 2 through 4093. A unique name is added afterwards. The VSAN can be either active or suspended. As soon as it is enabled, the services for the VSAN are active. A suspended VSAN is configured but not enabled.

Adding ports is the next step. In segregated Vblock Systems infrastructures, Host Bus Adapters (HBAs) connect the SAN host to the ports on the MDS fabric switch, and HBA drivers enable communication. An HBA port can be identified by its World Wide Node Name (WWNN), World Wide Port Name (WWPN), or Port_ID. The FC logins take place after a link is operational. The FLOGI database on the switch verifies the host HBA and its connected ports. Any FC port is automatically part of VSAN 1 until it is specified otherwise. Ports can be assigned dynamically based on the device WWN.

Groups of ports, or zones, provide access control for devices within a SAN. NX-OS supports a number of zone types: N-port zones based on the end-device port, Fx-port zones based on the switch port, domain ID and port number zones, iSCSI zones, LUN zones, read-only zones, and broadcast zones. Zone sets are often implemented in a VSAN. Typically, Vblock Systems administrators use the UIM provisioning tool (located in the AMP) to create zones and zone sets and activate them. It is easier to use FC aliases for the port members, and UIM also configures device aliases. Zones can be changed and configured without disrupting network traffic. Each VSAN can have only one active zone set. MDS switches support up to 8,000 zones and 20,000 port members in a physical fabric.

Port Channels

NX-OS uses a port-channel architecture for fault-tolerant, high-speed links between switches, routers, and servers.
It allows physical Ethernet links to be bundled together into a port channel with a single logical interface, which works like an individual port. Any configuration commands or changes applied to the port channel are automatically applied to each member interface of that port channel. For example, if you set Spanning Tree Protocol parameters on the port channel, Cisco NX-OS applies those parameters to each interface in the port channel. The port channel interface is a virtual link that represents the traffic path of the bundle toward a specific destination. Port channels have three functions. They increase aggregate bandwidth by distributing traffic among all functional links in the channel. They also perform load balancing across multiple interfaces while maintaining optimum bandwidth; predetermined algorithms determine which physical link in the bundle carries each frame. Lastly, they provide high availability. If one link fails, traffic switches to the remaining links. Upper-layer protocols are never aware of link failures within port channels. Rather, the link still appears to exist, albeit at a diminished bandwidth. MAC address tables are not affected by link failure. A port channel can bundle up to eight Ethernet ports. The port channel stays operational as long as at least one physical interface within the port channel is operational.

Configuring Port Channels

Typically, Vblock Systems port channels are based on the IEEE 802.3ad Link Aggregation Control Protocol (LACP). These rely on a series of ID parameters, which are set during configuration. LACP links pass protocol packets to peer (i.e., LACP-supported) devices: switch-to-switch, switch-to-server, switch-to-firewall, etc. When an LACP channel is deleted, NX-OS automatically deletes the associated channel group, and member interfaces revert to their previous configuration.
Simple, static port channels with no protocol association are also an option. The NX-OS default is actually for static port channels; you need to enable LACP. Either way, NX-OS uses a port-channel interface configuration mode with a CLI for creating port channels, and the general configuration procedures are similar:

Create a new port channel. Creating the port channel comes first, and then ports and channel groups are added to it. A port can be a member of only one port channel.

Adjust the load balancing. NX-OS performs default load balancing, determined by combinations of source and destination MAC addresses, IP addresses, and port numbers; but it is configurable. Use the combination that offers the greatest flexibility. For example, consider a port channel with a single destination MAC address. If the destination MAC address is the only configured criterion for load balancing, the traffic will always use the same link regardless of the conditions. On the other hand, including source addresses or IP addresses may result in a better balance.

Set the mode. The mode of an LACP channel needs to be set as either passive or active, and individual ports need to be prioritized. The mode for static port channels is always on.

Set LACP ID parameters. LACP channels also need an ID, which includes the MAC address as well as the system priority value, which is used to calculate priority among devices. The higher the system priority value, the lower the priority. LACP channels actually require a number of parameters. Sometimes channel ports need to be put into standby mode. The LACP port priority value determines which ports should remain active and which can switch to standby. Again, a higher value means a lower priority. An LACP administrative key determines whether or not a port is even a candidate for a port channel. It may be precluded by restrictions, including the data rate, the duplex capability, or the shared medium state.
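The general sequence above can be sketched in NX-OS configuration commands. This is an illustrative example only: the interface numbers are assumptions, and the exact load-balance keyword varies by Nexus platform.

```
! Enable LACP (static port channels are the default)
switch(config)# feature lacp
! Add physical ports to channel group 10 in active LACP mode
switch(config)# interface ethernet 1/1-2
switch(config-if-range)# channel-group 10 mode active
! Configure the logical port-channel interface; settings apply to all members
switch(config)# interface port-channel 10
switch(config-if)# switchport mode trunk
! Adjust the load-balancing criteria (platform-dependent syntax)
switch(config)# port-channel load-balance ethernet source-dest-ip
```

A port configured with `channel-group 10 mode passive` would wait for LACP negotiation from the peer instead of initiating it.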
Virtual Port Channels (vpcs)

A virtual port channel (vpc) is a port channel enhancement. Each physical link that makes up one end of a port channel must terminate on the same switch. In other words, a server cannot use a port channel to link to two switches. But it can use a virtual port channel. With vpc, links that are connected to different network devices appear as a single port channel to a third device. It provides L2 multipathing, creating redundancy by increasing bandwidth, enabling parallel paths between nodes, and load balancing. Ultimately, it allows for a simpler, more scalable network design. vpc provides the following benefits:

- Eliminates Spanning Tree Protocol (STP) blocked ports
  - Uses all available uplink bandwidth
- Prevents loops at the data plane layer
  - Logic implemented directly in the hardware without CPU dependency
  - Traffic is forwarded locally when possible
  - Loop avoidance mechanism if the final destination is behind another vpc
- Allows dual-homed servers to operate in active-active mode
  - Active-active default gateways for servers
- Provides fast convergence upon link or device failure
- Allows for resilient Layer 2 port channeling
- Contributes to high availability clusters and VM mobility
Deploying Virtual Port Channels

A Cisco vpc is configured as a domain, with two identical Nexus switches and a downstream device. The device can be any network component that supports EtherChannel: an L2 switch, server, firewall, NAS storage device, etc. The pair of Nexus switches function as vpc peer devices, connected by an L2, 10 GbE link carrying a vpc VLAN. The link is referred to as a vpc peer-link. An underlying protocol, Cisco Fabric Services (CFS), maintains reliable synchronization and consistency between the two peer-switches. vpc deployment often occurs at the access layer of the data center for connectivity between a network endpoint device and a vpc domain. This is referred to as single-sided vpc deployment. Access devices are directly attached to each of the peer-switches in the vpc domain. The diagram below has a single-sided topology. Another implementation is a double-sided deployment, which creates a default gateway between L2 and L3 network schemes. Some organizations also use a multilayer vpc topology to interconnect two separate data centers (Data Center Interconnect, or DCI) at L2, extending the VLANs across two sites. In the end, Vblock Systems network deployments are quite varied, and their configuration details are exacting. The diagram below, for instance, illustrates a Vblock Systems disjoint L2 configuration with uplinks to the customer network. In this implementation, traffic is routed to different networks at the fabric interconnect to support two or more discrete Ethernet clouds, connected by the Cisco UCS servers. Upstream disjoint L2 networks allow two or more Ethernet clouds that never connect to be accessed by servers or virtual machines located in the same Cisco UCS domain. vpc 101 and 102 are production uplinks that connect to Cisco Nexus 5548P Switches or Cisco Nexus 7000 Series Switches. vpc 105 and 106 are customer uplinks that connect to customer-owned switches.
If using Ethernet performance port channels (PC 103 and 104 by default), port channels 101 through 104 should have exactly the same assigned VLANs.
Deploying and configuring a large vpc environment can be an extended project, and the information provided here should be considered an overview. Cisco publishes documentation with thorough details and best practices for implementing virtual port channels on Nexus switches. See Design and Configuration Guide: Best Practices for Virtual Port Channels on Cisco Nexus Switches. Generally, building a vpc domain is fairly straightforward, using domain commands to define global vpc parameters and subcommands to define options. vpc and LACP need to be enabled first. The following operations describe the basic process and the proper sequence.

1. Create a domain ID. The vpc peer devices share a single domain and a single domain ID. The domain ID on both switches has to match. Once configured, both peer devices use the ID to automatically assign a unique vpc system MAC address. Port types cannot be mixed in the same vpc; both sides of the vpc member ports must be the same port type.

2. After configuring the identifier on each switch, establish a vpc peer-keepalive link on both peer devices. The peer-keepalive link acts as a heartbeat between vpc peer-switches, and it needs to be operational on both devices. It has configurable timers regulating heartbeat intervals and timeouts. Source and destination IP addresses for the peer-keepalive link must be unique, and it should have a dedicated L3 connection. The mgmt0 interface should be used with 10 GbE ports.

3. Set the system priority value. This step is the recommended option to make sure that the vpc peer devices are the primary devices on LACP. The value can range from 1 to 65535. (The default is 32667.) Make sure that the priority value is the same on both peer devices. They must match, or the vpc will not be activated.

4. Set the role priority value. Configuring the role priority value establishes a primary and secondary switch. The value can range from 1 to 65535. (Again, the default is 32667.)
The switch with the lower priority is primary, and remains functional for unicast traffic in the event of a peer-link failure. The secondary device shuts down its vpc ports and its vpc VLAN interface to prevent looping. Note that the system also determines an operational role for each vpc switch (again, primary or secondary) based on actual system usage, and it may override the originally configured vpc roles.
5. Enable the peer-gateway feature on both the primary and secondary vpc peers, if appropriate. In some applications, NAS devices or load balancers bypass the usual routing-table lookup and reply to the switch MAC address instead of the Hot Standby Router Protocol (HSRP) gateway. The problem is that packets may be dropped by the vpc built-in loop avoidance mechanism. The peer-gateway feature lets the vpc peer-switch act as the gateway for packets addressed to the MAC address, forwarding them (not dropping them) without crossing the vpc peer-link.

6. Create the vpc peer-link. An L2 trunk port channel needs to exist as a vpc peer-link. Configure the vpc peer-link (VLAN) on both peer-switches and verify that it is operable. Again, ports are 10 GbE. Cisco recommends using dedicated peer-to-peer ports (at least two ports and two line cards) for high availability. The range command <vlan-id range> works best for configuring a large number of VLANs instead of configuring them individually.

7. Lastly, configure the peer-link port channel to connect to the downstream device.

Again, this is not a comprehensive list of procedures. Each step can include a number of very specific guidelines, options, and restrictions that still need to be addressed. The primary vpc peer-switch, for instance, should be installed on the left. Load balancing should be optimized by using source-destination IP, L4 port, and VLAN as fields for the port-channel load-balancing hashing algorithm. The Bridge Protocol Data Unit (BPDU) filter should be enabled for multilayer vpc. Bear in mind also that in mixed-chassis mode, an M1-series module must be used for L3 internal proxy routing and uplink connectivity, while F1 modules are used for L2 domain bridging. Double-sided topologies and multi-sided DCI topologies require that the domain IDs be different. (If not, they wind up generating continuous flaps on the vpc that connects the network layers.)
These are just a few examples; the precise operations are site specific. Cisco publishes a number of best practices for vpc environments, including building vpc domains, configuring vpc components, mixed-chassis vpcs, DCI and encryption, the role of STP, L3 connectivity, and configuring network services. We should also briefly look at redundancy protocols. Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP) make the network more highly available. HSRP (discussed later in this document) establishes a default, fault-tolerant failover gateway. VRRP automatically assigns available IP routers to hosts, making routing paths more accessible and reliable. Both behave similarly on vpc peer devices. A vpc domain functions as an L2/L3 boundary with a VLAN configured on each peer device. HSRP/VRRP runs on top of the interface. Both protocols operate in default active-active mode at the data plane, and no additional configuration is necessary after the protocol is enabled. The control plane is different, though, and the primary peer device should be defined as active and the secondary as standby to make operations easier.
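As a hedged sketch of the vpc deployment steps described above (domain ID, peer-keepalive, priorities, peer-gateway, peer-link, and a downstream vpc), the configuration on each peer-switch might look like the following. All addresses, interface numbers, and priority values here are illustrative assumptions.

```
switch(config)# feature vpc
! Step 1: domain ID (must match on both peers)
switch(config)# vpc domain 100
! Step 2: peer-keepalive heartbeat, typically over the management VRF
switch(config-vpc-domain)# peer-keepalive destination 172.16.1.2 source 172.16.1.1 vrf management
! Steps 3-4: system and role priorities
switch(config-vpc-domain)# system-priority 4000
switch(config-vpc-domain)# role priority 1000
! Step 5: peer-gateway, if appropriate
switch(config-vpc-domain)# peer-gateway
! Step 6: L2 trunk port channel as the peer-link
switch(config)# interface port-channel 10
switch(config-if)# switchport mode trunk
switch(config-if)# vpc peer-link
! Step 7: port channel to the downstream device, tagged with a vpc number
switch(config)# interface port-channel 20
switch(config-if)# switchport mode trunk
switch(config-if)# vpc 20
```

The same configuration (with the keepalive source and destination addresses swapped) would be applied on the second peer-switch.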
Quality of Service (QoS)

QoS features let customers define the most desirable flow of network traffic by prioritizing and policing network traffic and avoiding network congestion. QoS makes network traffic more predictable and efficient. Traffic control is based on the fields in the packets that flow through the system. Cisco implements a Modular QoS CLI (MQC) to define traffic controls. The process has three steps:

1. Define classes to categorize packets. Nexus 5000 series switches support six classes, two of which are predefined as defaults. The limitations for Nexus 7000 series switches are defined by classification criteria in the chart below. A class map can contain up to 1024 match criteria, for a maximum of 4096 classes.

2. Create policies that specify actions to take on the traffic classes. There are three policy types, each with QoS parameters. Network-qos type policies apply system-wide parameters, including pause behavior (or flow control, described below), MTU specification, multicast optimization, queue limit, and CoS value. Queuing type policies define the queuing and scheduling. QoS type policies are based on Layer 2 and Layer 3 protocols.

3. Apply policies to a port, port channel, VLAN, or subinterface.

For FCoE traffic, the Ethernet interface must provide a lossless service, and Nexus switches have two options here. The first is link-level flow control (LFC), which pauses data transmissions during congestion. When the congestion clears, the transmission is restarted. A buffer threshold directs the traffic. When congestion exceeds the threshold, a pause frame is generated; when traffic comes under the configured threshold, a resume frame is generated. Priority flow control (PFC; IEEE 802.1Qbb), sometimes referred to as Class-Based Flow Control (CBFC) or Per Priority Pause (PPP), is similar to LFC. However, PFC is based on CoS. It allows customers to apply pause functionality to specific classes of traffic on the link instead of all the traffic on the link.
PFC settings override LFC settings. If PFC is enabled, LFC is disabled, regardless of its configuration settings.

Hot Standby Router Protocol (HSRP)

Cisco's HSRP is actually a redundancy protocol for IP networks, not a routing protocol. It creates a default failover framework for routers based on priority. Two routers share the same IP address and MAC (L2) address and act as one virtual router. Like most failover designs, one router is designated as primary, with a secondary taking over if a failure occurs. HSRP groups can be configured in which a whole set of routers shares an IP and MAC address and appears as a single virtual router to the hosts on a LAN. These HSRP groups are based on priority, with a defined Active router functioning as the primary and a defined Standby as the secondary. The Standby takes over as Active when necessary, and the next router assumes the Standby position. The Active and Standby routers send periodic HSRP messages (hello packets), and the remaining routers remain in a listen state. Priority and timers need to be configured before the HSRP group is enabled. Data integrity is a priority in HSRP environments. Existing HSRP groups are potentially vulnerable to HSRP-spoofing software and even to unintentional data corruption, where connections to private networks impact existing HSRP groups. HSRP employs key-based authentication, using the Message-Digest 5 (MD5) algorithm. All the HSRP group member routers should be configured with the same authentication method and keys prior to being activated.
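An HSRP group with MD5 authentication, as described above, might be configured on NX-OS along the following lines. The VLAN, addresses, group number, and key string are assumptions for illustration; the virtual IP (10.1.100.1) is what hosts would use as their default gateway.

```
switch(config)# feature hsrp
switch(config)# interface vlan 100
switch(config-if)# ip address 10.1.100.2/24
! HSRP group 1: priority 110 makes this router Active if its peer uses the default 100
switch(config-if)# hsrp 1
switch(config-if-hsrp)# authentication md5 key-string MyHsrpKey
switch(config-if-hsrp)# priority 110
switch(config-if-hsrp)# preempt
! Shared virtual gateway address
switch(config-if-hsrp)# ip 10.1.100.1
```

The peer router would carry the same group number, key string, and virtual IP, with its own interface address and a lower priority.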
Virtual Networking

This section details network connectivity and management for virtual machines. To review, Vblock Systems support a number of different network/storage paradigms: segregated networks with block-only storage, with unified storage, or with SAN boot storage; and unified networks with block-only storage, SAN boot storage, or unified storage. Segregated network connections use separate pairs of LAN (Catalyst and Nexus) and SAN (MDS) switches, while unified network connections consolidate both LAN and SAN connectivity to a single pair of Nexus network switches. Vblock Systems virtual servers are managed and connected differently than physical servers and have different requirements for fabric connectivity and management. They use a virtual network switch, a software application that interacts with VMware's ESXi hypervisor to virtualize the network environment. Virtual switches have the same capabilities as a physical switch, supporting multiple VLANs per virtual interface, L3 options, security features, etc. Vblock Systems customers have two options here: the VMware virtual switch (vswitch) and the Cisco Nexus 1000V virtual switch. The VMware virtual switch (vswitch) runs on the ESXi kernel and connects to the Vblock Systems LAN through the UCS Fabric Interconnect. It directs network traffic to one of two distinct traffic locations: the VMkernel and the VM network. VMkernel traffic controls Fault Tolerance, vmotion, and NFS. The VM network allows hosted VMs to connect to the virtual and physical network. Standard vswitches exist at each ESXi server and can be configured either from within the vcenter Server or directly on the host. Distributed vswitches exist at the vcenter Server level, where they are managed and configured. The Nexus 1000V virtual switch from Cisco resides on each Vblock Systems server and is licensed on a per-server basis.
It is equipped with better virtual network management and scalability than the VMware virtual switch, and VCE considers it a best practice.

Cisco Nexus 1000V

The Cisco Nexus 1000V is a combined hardware and software switch solution, consisting of a Virtual Ethernet Module (VEM) and a Virtual Supervisor Module (VSM). Each Vblock Systems cluster has one VM running the VSM as a virtual appliance. Each node runs the client VEM. The following diagram depicts the 1000V distributed-switching architecture:
The VEM runs as part of the ESXi kernel and uses the VMware vnetwork Distributed Switch (vds) API, which was developed jointly by Cisco and VMware for virtual machine networking. The integration with vsphere is tight, ensuring that the Nexus 1000V is fully aware of all server virtualization events, such as vmotion and Distributed Resource Scheduler (DRS). The VEM takes configuration information from the VSM and performs switching and advanced networking functions, including LACP link aggregation. Each instance of the Nexus 1000V is made up of two VSMs and one or more VEMs (64 maximum). The VSM functions at the control plane, managing multiple VEMs as one logical switch module. Instead of multiple physical line-card modules, the VSM supports multiple VEMs that run inside the physical servers. If the communication between the VSM and the VEM is interrupted, the VEM has Nonstop Forwarding (NSF) capability to continue to switch traffic based on the last known configuration. Each Vblock Systems virtual machine has a virtual network interface card (vnic) that connects to the 1000V to send and receive network traffic. Several factors govern the choice of adapter, generally either host compatibility requirements or application requirements. Virtual network adapters install into ESXi and emulate a variety of physical Ethernet and Fibre Channel NICs. (Refer to the Vblock Systems Architecture Guides for network hardware details and supported topologies.)

Configuring Nexus 1000V Switches

Like the Vblock Systems physical switches, the VSM on the Nexus 1000V is installed in redundant pairs for high availability, incorporating a stateful switchover from the primary VSM to the secondary in case of failure. Configuration occurs in the VSM, which runs on a virtual machine, and automatically propagates to the VEMs. The VSM uses the same NX-OS CLI that physical Nexus switches use.
Instead of configuring software switches inside the hypervisor on a host-by-host basis, administrators can use a single interface to define configurations for immediate use on all VEMs managed by the VSM. Other configuration interfaces exist: standard SNMP and XML as well as the Cisco LAN Management Solution (LMS). The Nexus 1000V is compatible with all the vswitch management tools, and the VSM also integrates with VMware vcenter Server so that the virtualization administrator can manage the network configuration in the Cisco Nexus 1000V switch. Base configuration usually takes place during Vblock Systems manufacturing. Advanced configuration is an ongoing process and is up to the administrator. Structuring the Nexus 1000V environment is the first step. SVS domain IDs identify the VEMs controlled by the VSM. The SVS connection includes the name of the vcenter data center and the vcenter Server IP address or DNS name. The next step is to create VLANs for switch traffic. In typical Vblock Systems deployments, Nexus 1000V links use three separate VLANs: Control VLANs for VSM/VEM connectivity, Management VLANs for VSM/vCenter Server connectivity, and Packet VLANs for internal connectivity. The Packet VLAN supports protocols such as CDP, LACP, and IGMP. The Control VLAN facilitates VSM commands to the VEMs and their responses, and it also sends VEM notifications back to the VSM. Communication between the VSM and VEM can occur over L2 or L3 networks. In L2 mode, both modules must be in the same L2 domain, and they communicate over the Control interface. However, L3 mode is recommended, using either the Management interface (the mgmt0 port) or a dedicated Control interface (control0). The Control interface is recommended for high-availability mode. System port profiles establish protected ports and System VLANs, so that they can forward traffic even if the VSM is not reachable. (I.e., System VLANs will always be forwarded.) 
They include control and packet VLAN uplinks to the VSM, ESXi Management VLANs, IP Storage VLANs, and VSM ports on the VEM. The uplink port profile needs to include the switch-port type, the most common being the virtual Ethernet (veth) interface. Adding a VEM is a common operation, and veth ports perform L3 communication.
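The structuring steps above (SVS domain, SVS connection to vcenter, and a system uplink port profile) might be sketched on the VSM as follows. The domain ID, VLAN numbers, IP address, and data-center name are illustrative assumptions, and this example shows an L2-mode control/packet VLAN design.

```
! SVS domain identifying the VEMs this VSM controls
n1000v(config)# svs-domain
n1000v(config-svs-domain)# domain id 100
n1000v(config-svs-domain)# control vlan 260
n1000v(config-svs-domain)# packet vlan 261
n1000v(config-svs-domain)# svs mode L2
! Connection to the vcenter Server and data center
n1000v(config)# svs connection vc
n1000v(config-svs-conn)# protocol vmware-vim
n1000v(config-svs-conn)# remote ip address 192.168.10.5
n1000v(config-svs-conn)# vmware dvs datacenter-name DC1
n1000v(config-svs-conn)# connect
! Uplink port profile with system VLANs that forward even if the VSM is unreachable
n1000v(config)# port-profile type ethernet system-uplink
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# switchport mode trunk
n1000v(config-port-prof)# switchport trunk allowed vlan 260-262
n1000v(config-port-prof)# system vlan 260-262
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# state enabled
```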
Virtual Route Forwarding (VRF) and Virtual Device Contexts (VDC)

The NX-OS operating system has inherited a number of virtualization technologies from Cisco IOS software, among them virtual route forwarding (VRF) and virtual device contexts (VDC). From an L2 perspective, VLANs virtualize bridge domains in the Nexus chassis. L3 virtualization is supported through the concept of virtual route forwarding instances (VRFs). A VRF can be used to virtualize the L3 forwarding and routing tables. Virtual device contexts (VDCs) allow the device itself to be virtualized, presenting the physical switch as multiple logical devices. Nexus switches can be logically segmented into four different virtual switches, or device contexts. Each VDC functions as a logical device with its own unique and independent set of VLANs and VRFs. It can have physical ports assigned to it, thus allowing the hardware data plane to be virtualized as well. Within each VDC, a separate management domain can manage the VDC itself, thus allowing the management plane to be virtualized as well. In its default state, the NX-OS switch control plane runs a single device context (VDC 1), which runs approximately 80 processes. Some of these processes can spawn other threads, resulting in as many as 250 processes actively running on the system at a time, depending on the services configured. This single device context has a number of L2 and L3 services running on top of the infrastructure and kernel components of the OS.
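The two concepts can be sketched with a few NX-OS commands; the VDC name, VRF name, interfaces, and address below are assumptions for illustration (VDC creation applies to platforms such as the Nexus 7000).

```
! Create a VDC and allocate physical ports to it, virtualizing the data plane
switch(config)# vdc Prod
switch(config-vdc)# allocate interface ethernet 2/1-8
! Create a VRF and place a routed interface into it, virtualizing the L3 tables
switch(config)# vrf context Tenant-A
switch(config)# interface ethernet 2/1
switch(config-if)# no switchport
switch(config-if)# vrf member Tenant-A
switch(config-if)# ip address 10.2.1.1/24
```

Routes learned on ethernet 2/1 would then populate the Tenant-A routing table rather than the default VRF.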
Validate Networking and Storage Configurations

The base network configuration should be tested during manufacturing. But clearly, the network configuration goes through significant on-site adjustments to accommodate the applications it supports and other environmental considerations. That means it also needs to be validated. For example, if using block storage, SAN configuration components must be tested and verified. If using filers or unified storage, the LAN settings may need adjustment, particularly NIC teaming, multipathing, and jumbo frames. With regard to the SAN configuration, the overall connectivity needs to be reviewed in terms of availability. Host multipathing and switch failover should be checked to ensure that the VMs will be as highly available as possible. It is also important to review the storage configuration to verify the correct number of LUNs and storage pools and verify the storage pool accessibility. Also, make sure that all the deployed virtual machines have access to the appropriate storage environment. These activities require almost the complete suite of monitoring and management tools in the Vblock Systems, with most tools installed on the AMP. Specific tools used during a deployment include vcenter, Operations Manager, EMC Unisphere, EMC PowerPath Viewer, VCE Vision software, and Cisco UCS Manager.
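On the switch side, much of this validation can be done with standard NX-OS show commands. The commands below are a representative sample, not an exhaustive checklist; the VSAN number is an assumption.

```
! vpc peer status and consistency parameters
show vpc brief
! Port-channel members and protocol state
show port-channel summary
! Physical port operational status
show interface status
! HBA fabric logins on the MDS SAN switch
show flogi database
! Active zone set for a given VSAN
show zoneset active vsan 10
```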
Network Upgrades

Keep in mind that this study guide provides just an overview of the manual upgrade process and not a detailed procedure. Make sure you familiarize yourself with the network switch upgrade and installation documentation before attempting an actual upgrade. The bulk of the upgrade process for Vblock Systems networks involves switch software. Currently, Nexus and MDS switches are based on NX-OS. Older versions and Catalyst switches may be based on Cisco IOS (CIOS). The Nexus 1000V is also based on NX-OS. The switch upgrades are non-disruptive, although they can impact availability. The procedure begins with a formal assessment of the current network: all the network devices, connections, their operability, and compliance. The VCE Vision software management system includes a compliance checker that uses predefined profiles as benchmarks for compliance scans. Actually, VCE Vision software also maintains a complete Vblock Systems component inventory and performs health diagnostics. VCE Vision software reports reveal precisely the status of each Vblock Systems element, including all the network devices. Compliance scans can be run according to a user-defined schedule, but they can also be launched on command. VCE Vision software is a comprehensive management system, in no way limited to system upgrades. Still, it can be particularly effective for baseline assessments prior to upgrades. VCE Vision software resides on the AMP. Once the current baseline is established, the scope of the upgrade becomes clearer. All the Cisco switches follow the same procedure for upgrades, regardless of their firmware underpinnings. Cisco uses a Kickstart utility for NX-OS switches to upload the correct firmware package. Check the switch memory requirements before downloading the new software, and then reboot the switch. Since the switches are installed in pairs, the update should be applied to each switch, one at a time.
That includes validation: verify vpc consistency, port-channel connectivity, and Spanning Tree functionality. Be sure that the first switch is functional before moving on to the second. Upgrading the Vblock Systems network tends not to be an isolated event. As a converged environment, the network is interconnected with the entire system: the fabric interfaces in the servers, for example, or the FC ports on the storage arrays. Any change has the potential to start a chain of events, especially changes and updates in the virtual environment. In fact, the Nexus 1000V switch is technically part of the vsphere upgrade, not the network upgrade. Given the complexity involved in upgrading a converged infrastructure, VCE has implemented a full-scale upgrade service, VCE Software Upgrade Service, providing everything from upgrade project planning through implementation and verification. Still, many customers prefer to perform upgrades in-house. Regardless, upgrades are based on the VCE Release Certification Matrix (RCM), which lists the hardware and software component versions that have been fully tested and verified for a particular release version of Vblock Systems. Updated components may include, but are not limited to:

- AMP hardware/software
- Storage array firmware
- Switch hardware
- VMware vsphere
- vapps
- Plugins
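An NX-OS kickstart/system image upgrade on one switch of a pair might look like the following sketch. The image filenames here are placeholders only; use the versions mandated by the RCM.

```
! Check image compatibility and memory requirements before upgrading
switch# show incompatibility system bootflash:n5000-uk9.7.0.1.N1.1.bin
! Install the kickstart and system images together
switch# install all kickstart bootflash:n5000-uk9-kickstart.7.0.1.N1.1.bin system bootflash:n5000-uk9.7.0.1.N1.1.bin
! Monitor the upgrade before moving on to the second switch of the pair
switch# show install all status
```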
Security

This document has already touched on a few security-related technologies: the MD5 cryptographic authentication keys for HSRP, for instance, the VTP access controls, and, of course, the security configurations in the routing architecture. Actually, Vblock Systems use a number of options for switches and routers to help implement access-policy security. These work in conjunction with customer access policies to define system access and prevent unwanted network connections. While policies are unique to the customer site, some common standards still emerge:

- Password implementation
- VLAN management and port security
- Physical security for the network equipment
- Enterprise-network access control

Passwords are probably the most important aspect of network security. Passwords not only need to be changed frequently, they also need to be opaque. Routers and switches have multiple access and configuration points, and a password needs to exist on all of them. There are really only two ways to enter a Cisco device:

- Out-of-band management includes the console and auxiliary ports, and passwords need to be set on both physical ports. By default, no passwords exist and anyone can connect and manage the devices.
- In-band management includes Telnet, TFTP servers, and Network Management Stations (NMSs). These access points do not allow access by default, but again, passwords should be applied.

VLANs and VSANs are secure entities in and of themselves. They create multiple broadcast groups, and administrators control each port and its resources by assigning roles to particular port groups and zones. Users cannot just plug their workstation into any switch port and have access to network resources. Because groups can be created according to the user's network-resource requirements, switches can be configured to inform a network management station of any unauthorized access. If inter-VLAN communication needs to take place, restrictions on the router can also be implemented.
Restrictions can also be placed on hardware addresses, protocols, and applications. Although not exactly a refined solution, Cisco logon banners can offer an effective degree of security. A logon message broadcasts only specific information, thereby securing the switch. It also notifies users about network policies and security guidelines. Disabling CDP also protects the switch.
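A few of these basic hardening measures can be sketched in NX-OS; the username, password, and banner text are illustrative assumptions.

```
! Local account with a strong password and an administrative role
switch(config)# username netadmin password St0ngAndOpaque! role network-admin
! Prefer SSH over Telnet for in-band management
switch(config)# feature ssh
switch(config)# no feature telnet
! Logon banner stating the access policy
switch(config)# banner motd #Authorized access only. Activity is monitored.#
! Disable CDP where it is not needed
switch(config)# no cdp enable
```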
Additional Network Security Features

The Cisco NX-OS software includes the following security features:

- Data-path intrusion detection system (IDS) for protocol conformance checks
- Control Plane Policing (CoPP)
- Message-digest algorithm 5 (MD5) routing protocol authentication
- Cisco-integrated security features, including Dynamic Address Resolution Protocol (ARP) inspection (DAI), DHCP snooping, and IP Source Guard
- Authentication, authorization, and accounting (AAA), with RADIUS and TACACS+ methods for sending account records and storing them to Syslog in an accounting log on the security server
- SSH Protocol Version 2
- SNMPv3
- Port security
- IEEE 802.1X authentication
- Layer 2 Cisco Network Admission Control (NAC) LAN Port IP
- Policies based on MAC and IPv4 addresses supported by named ACLs (port-based ACLs [PACLs], VLAN-based ACLs [VACLs], and router-based ACLs [RACLs])
- Traffic storm control (unicast, multicast, and broadcast)
- Unicast Reverse Path Forwarding (Unicast RPF)
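Two of the listed features, DHCP snooping and Dynamic ARP Inspection, can be sketched together, since DAI relies on the DHCP snooping binding table. The VLAN and interface numbers are assumptions.

```
switch(config)# feature dhcp
! Enable DHCP snooping globally and on the protected VLAN
switch(config)# ip dhcp snooping
switch(config)# ip dhcp snooping vlan 100
! DAI validates ARP packets against the snooping bindings on that VLAN
switch(config)# ip arp inspection vlan 100
! Mark the uplink toward the legitimate DHCP server as trusted
switch(config)# interface ethernet 1/10
switch(config-if)# ip dhcp snooping trust
```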
Troubleshooting

Given the unique quality of Vblock Systems network implementations, network troubleshooting can be challenging for Vblock Systems administrators. With so many distinct configurations, specific known problems are difficult to pinpoint. Nevertheless, Cisco does provide serviceability features for network planning and improving problem resolution. The tools discussed in this section are mostly monitoring and diagnostic tools, and they provide the kind of logical, systematic analysis necessary for effective troubleshooting.

The Cisco Switched Port Analyzer (SPAN) is a port monitor, providing nonintrusive analysis of all traffic between ports. SPAN session traffic is directed to a SPAN destination port that has an external analyzer attached to it. SPAN is useful for analyzing and debugging data, as well as diagnosing errors, on the network. It can be used to monitor inbound and outbound traffic on single or multiple interfaces.

The Cisco Call Home feature continuously monitors hardware and software components to provide email-based notification of critical system events. A versatile range of message formats is available for optimal compatibility with pager services, standard email, and XML-based automated parsing applications. Call Home offers alert grouping capabilities and customizable destination profiles. It can be used, for example, to directly page a network-support engineer, send an email message to a network-operations center (NOC), and employ Cisco AutoNotify services to directly generate a case with the Cisco Technical Assistance Center (TAC). Cisco considers Call Home a step toward autonomous system operation. If the networking device itself informs IT when a problem occurs, the problem can be resolved more quickly.

Additionally, Cisco has implemented generic online diagnostics (GOLD), a suite of diagnostic facilities to verify that hardware and internal data paths are operating as designed.
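A local SPAN session as described above can be sketched in NX-OS-style CLI as follows. The interface numbers are hypothetical, and on Nexus switches the destination port typically must be placed in monitor mode before it can receive mirrored traffic:

```
! Destination port connected to the external analyzer
interface ethernet 1/6
  switchport monitor

! Local SPAN: mirror both directions of Ethernet 1/5 to the analyzer port
monitor session 1
  source interface ethernet 1/5 both
  destination interface ethernet 1/6
  no shut
```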
Boot-time diagnostics, continuous monitoring, and on-demand and scheduled tests are part of the Cisco GOLD feature set. GOLD allows rapid fault isolation and continuous system monitoring.
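As a sketch of how GOLD results might be inspected and an on-demand test run, the following NX-OS-style commands illustrate the idea; the module number is hypothetical and available tests vary by platform:

```
! View results of boot-time and health-monitoring diagnostics for module 1
show diagnostic result module 1

! Run the full on-demand diagnostic test suite against module 1
diagnostic start module 1 test all

! Check which tests are currently scheduled or running
show diagnostic status module 1
```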
Conclusion

This study guide represents a subset of all of the tasks, configuration parameters, and features that are part of a Vblock Systems deployment and implementation. It focused on deploying Cisco networking solutions in a VCE Vblock Systems converged infrastructure, including how to configure and manage the network infrastructure on Vblock Systems. Exam candidates with the recommended prerequisite working knowledge, experience, and training should thoroughly review this study guide and the resources in the References document (available on the VCE Certification website) to help them successfully complete the VCE Vblock Systems Deployment and Implementation: Network Exam.

ABOUT VCE

VCE, formed by Cisco and EMC with investments from VMware and Intel, accelerates the adoption of converged infrastructure and cloud-based computing models that dramatically reduce the cost of IT while improving time to market for our customers. VCE, through the Vblock Systems, delivers the industry's only fully integrated and fully virtualized cloud infrastructure system. VCE solutions are available through an extensive partner network, and cover horizontal applications, vertical industry offerings, and application development environments, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure. For more information, go to vce.com.

Copyright 2014 VCE Company, LLC. All rights reserved. VCE, VCE Vision, Vblock, and the VCE logo are registered trademarks or trademarks of VCE Company, LLC or its affiliates in the United States and/or other countries. All other trademarks used herein are the property of their respective owners.